I was in a situation where I found my software was being used to track people in Iraq. So… yeah, I left that gig. My payment back is to write Free Software. Pay your penance with Free Software, my friend. Also don’t feel bad about not liking Capitalism. Capitalism is terrible which is why most of the people on this planet hate it.
My payment back is to write Free Software.
If you write free open-source software, you still have no control over whether it gets used for bad purposes.
Capitalism is terrible which is why most of the people on this planet hate it.
Capitalism was actually decent in the 1950s and ‘60s. (This is why Trump’s message of “Make America Great Again”, as much as I can’t stomach the disrespect to the minorities for whom that period wasn’t so great, resonated with so many people.) We had 4-6 percent real GDP growth and companies took care of their people. We had low economic inequality and no one would pull the kind of shit that you see today on a regular basis.
In the ‘60s, getting let go by your company meant that the CEO took you out to dinner, explained that you were not getting the next promotion, and that you had a year or so before the accountants would expect him to fire you. (“At absolute most, I can keep you on for two years, so I want you to take your search seriously.”) He’d offer you his Rolodex and give you an excellent reference for wherever you wanted to go next. In the worst-case scenario where he couldn’t get you hired, he called up his friend at an MBA or PhD program of your choice and got you in. That’s what getting fired was.
That’s obviously not how it works today. Even people who don’t get fired get treated like garbage.
I agree that the 21st-century style of corporate/managerial/vampire capitalism is a disaster that must be overthrown, preferably nonviolently, but through whatever means are necessary to get the job done.
I don’t think capitalism is an innately terrible system. I do think that, like Soviet communism, it worked for a time and then failed. Communism managed to turn a frozen backwater into a world superpower from 1917 to 1950. The implementation was morally reprehensible (Stalin) and the system began to collapse by the 1980s, but it worked for a time. Likewise, organizational/corporate capitalism worked for about sixty years (1914-73) and remained semi-functional for another 28 years in the West Coast tech industry, but is now in such a state that it needs to be replaced with something else… but I have no clue how one goes about that.
In the ‘60s, getting let go by your company meant that the CEO took you out to dinner, explained that you were not getting the next promotion…
I call out hagiography. Things were generally better for many people (and as you noted, worse for others) but there’s never been a golden era of corporate beneficence for most people.
Corporations were often bad for the environment and sometimes to customers, but they used to be good to their own people. That’s the difference.
Companies did some bad things, for sure, but once you were in, you were guaranteed support and they’d bend over backward not to hurt your career. You could go into the CEO’s office on a random Tuesday and ask to be his protege and he’d say “Yes; what department do you want to lead?” (You might have had to work until a solid 4:00, and occasionally even 5:30, to complete the workload and training that comes from being fast-tracked. But those are the sacrifices one makes.) Once they got in, it really was a lot easier for the Boomers, which is why they’re able to pay $7 million for 3BR houses.
The 1957 objection was “E Corp is polluting local rivers and overcharges its customers.” I don’t intend to diminish those objections. We needed the consumer and environmental protections for a reason. The 2017 objection is “G Corp uses stack ranking to disguise layoffs as firings and thereby destroys the reputations of departing employees to preserve its own.” So, these days, companies are bad for the external world and, additionally, evil toward their own people.
The Boomer hippie movement was literally a revolt against having to show up at a place 5 times per week for at least half a day (three-martini lunches made afternoon attendance optional) and having to wait a solid 3 years (!) before getting the VP-level job where you can fly business class and expense it. Meanwhile, as for Millennials… if it weren’t for World of Warcraft, there’d be a civil war by now.
I thought OP was about making the world as a whole worse, without distinguishing between customers and employees. Is your notion of “vampire capitalism” only about treating employees poorly?
They’re related. If companies treat their workers better, then other companies have to follow suit in order to compete.
For example, Henry Ford (who was not a nice person, but knew a good play when he saw it) doubled the wage of his factory workers, to $5 per day, in 1914. (That’d be about $120 per day in 2017.) Historians note that he did this in order to have buyers for his products, but it wasn’t just the first-order effect that he was after, because that wouldn’t justify the cost. He knew that his doing so would raise wages across the board, and increase his buyer base nationally. It worked.
Employers can be better in all sorts of ways. They can pay more, treat workers better, or treat the environment better. It’s all connected. Right now, we have an environment where employers hold all the cards and don’t have to do jack shit for anyone. They don’t compete to be better; they just pay their executives as much as they can get away with. We need to reverse that. It’s a moral imperative.
The 50s and 60s existed in that form because of massive government spending through the Great Depression and WWII, creating a massive, socialized infrastructure that permitted the growth of business. Private sector factories scaled up for wartime production, thanks to government investment, and then turned that excess production capacity to consumer goods (and along the way, we had to invent entirely new “needs” for consumers to fill, planned obsolescence, etc. because we had briefly hit a post-scarcity level of production for the current levels of population). Air travel was heavily regulated, utilities were heavily regulated, telecoms were heavily regulated.
If anything, the 50s and 60s are a sign that markets work best when they are heavily managed by the state.
This is absolutely correct.
You need a strong public sector to keep the private sector honest. Even if you’re a money-hungry capitalist who would never work in a government job, you should still care about what government jobs exist, because that will heavily influence the wages and conditions that are available to you.
For example, when research jobs are easy to get because of ample public funding, the private world has to compete for talent. You get Bell Labs and Xerox PARC. When the research job market is in the shitter and has been for over 30 years, you get business-driven development and “Agile” shovelware.
Capitalism was actually decent in the 1950s and ‘60s. (This is why Trump’s message of “Make America Great Again”, as much as I can’t stomach the disrespect to the minorities for whom that period wasn’t so great, resonated with so many people.) We had 4-6 percent real GDP growth and companies took care of their people. We had low economic inequality and no one would pull the kind of shit that you see today on a regular basis.
…iff you were a white dude.
Though you did mention that minorities didn’t have it so great, which is kind of an understatement :)
It wasn’t capitalism’s fault that minorities had it bad in the 1950s and ‘60s. Capitalism is not the only cause of human awfulness, as Jews persecuted by Communist Russia (and, of course, black Americans who suffered in pre-capitalistic slavery) can attest. The extreme racism that afflicted, and continues to afflict, our society runs deeper than our economic system.
Society is better in 2017 than it was in 1960, insofar as we’ve made a lot of progress toward racial and gender equality. Capitalism itself is a lot worse.
I know it’s not your main point, but what part of treating people as owned assets as in slavery in the Americas is pre-capitalistic?
People with real expertise would probably disagree or put it better, but:
We’ve had slavery in the Americas since about 1500. None of England, France, or Spain was a capitalist society until sometime in the 18th or 19th century. They were feudal societies that later turned into mercantile societies.
My understanding is that the difference has to do with who can participate in markets, as well as whether there are markets. In feudal societies everything is down to the king. The king decides whether there are markets and what type of people may participate. The king issues charters to companies to allow people to act collectively. Colonial America was quasi-governmental, even if they called it “the Virginia Company”. It also took a royal writ to create.
As the industrial revolution went on, we got to recognizable modern capitalism, but there was a mercantile stop-off, where trade was recognized as good, but only within a country; external trade was viewed as harmful. The government had a much larger role in defining who could participate and what markets were allowed than it does today.
My understanding is a little shaky, though, I’ll admit.
As to your point about slaves, they are capital, and very expensive capital at that. I’m not sure I’d call any slave-owning society capitalist, though, because even with massive inequalities in a capitalist society, the lowest can still own things and trade things. Slaves can’t do that. Everything a slave has belongs to their master; everything a slave creates belongs to their master. A slave’s offspring belongs to their master. A slave’s ability to have offspring belongs to their master.
It depends on where you were. Contrary to what’s portrayed in Hidden Figures, NASA was never segregated. Mad Men makes 1960s office life look terrible and sexist, but advertising in 1965 was analogous to investment banking in 2007: a macho career with long hours and a lot of unsavory characters in it, that people only did because you could be a millionaire before 30 if you played your cards right (and stole a few clients, a la Season 3).
Professional life, if you could get an office job, was a lot better in the ‘50s and '60s than it is today. You were a trusted professional, not a suspect held under constant surveillance and expected to show daily progress according to bullshit metrics (“story points”).
It was, unfortunately, astronomically harder for women and minorities to get into professional life in the first place, and of course they had to deal with all kinds of other garbage (lynching, poll taxes) that arguably makes the decline in office conditions trivial by comparison. Your boss might be more likely to be a decent human being, but if psychopaths are burning crosses in your neighborhood, there’s not much comfort there.
No one with an understanding of history can say with a clear conscience that the 1950s-60s were better. They weren’t. However, some things were better. Economic growth was 5% per year instead of 2% per year, economic inequality was nothing like what exists today, and once you were inside a corporation, you were treated with a lot more respect than is typical these days.
At least post some statistics before you do the lazy “hurr durr white cis men oppressing everybody amirite”.
Please point to any data or conclusion presented in the link you posted that says anything but, “hurr durr white cis men oppressing everybody amirite”. The claim was that economic inequality was lower in the 50s and 60s, but that is true only when comparing across uniform demographics.
The reading I had of the quote you were replying to was that capitalism was “actually decent”, in that it was providing returns for a broader section of people. If you look at the data, the 50s-60s clearly didn’t solely benefit white dudes–women’s wages went up, men’s wages went up, blacks and whites both made more money than they used to.
Everybody’s wages went up. You suggested “if and only if”, and you’re wrong, as shown by data.
The period after the 70s clearly had a more unequal tenor to it. The “Return to Stagnation in Relative Income” summarizes it nicely:
The years from 1979 to 1989 saw the return of stagnation in black relative incomes. Part of this stagnation may reflect the reversal of the shifts in wage distribution that occurred during the 1940s. In the late 1970s and especially in the 1980s, the US wage distribution grew more unequal. Individuals with less education, particularly those with no college education, saw their pay decline relative to the better-educated. Workers in blue-collar manufacturing jobs were particularly hard hit. The concentration of black workers, especially black men, in these categories meant that their pay suffered relative to that of whites. Another possible factor in the stagnation of black relative pay in the 1980s was weakened enforcement of antidiscrimination policies at this time.
~
You made a cutesy little comment and are just being told “Hey, it’s more complicated than you’re representing”. Try to elevate the level of discourse instead of playing to the crowds.
As I clarified, I was referring to the statement that economic inequality was low, which is true if and only if you are comparing across uniform demographics. I made no statements regarding increased wages across the board.
Try to argue to the point instead of feeling butt-hurt that someone suggested there are structural inequalities that benefited white men more than anyone else.
However, economic inequality was lower in the 1950s to ‘60s. It’s a well-studied fact. You can debate social inequality, which is subjective and qualitative to a large degree, but economic inequality is numerical. Measured by the Gini coefficient, we’re at a level of inequality that we haven’t seen since the 1920s. Look here for some data on it. For example, in 1964, the 0.1% had 2% of the national gross income, whereas now it’s 8.8% (or 88x an equal share).
The only measure on which we seem to be doing better is the poverty rate, but that’s largely because the official poverty line is rarely moved (it would be politically disadvantageous, just as including discouraged workers and the prison population in unemployment statistics would make this country look like a shitshow).
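To make the Gini comparison above concrete, here is a quick sketch of how the coefficient is computed. The income vectors are invented for illustration, not real data:

```python
# Minimal sketch of a Gini coefficient calculation (0 = perfect
# equality, approaching 1 = maximal inequality). Income vectors below
# are made up purely to illustrate the direction of the comparison.
def gini(incomes):
    """Gini coefficient via the standard sorted-weights formula."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # G = (2 * sum(i * x_i) / (n * total)) - (n + 1) / n, i = 1..n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

equal_ish = [40, 45, 50, 55, 60]   # low inequality
skewed = [10, 12, 15, 20, 200]     # one big earner takes most of it

print(gini([1, 1, 1, 1]))          # perfect equality -> 0.0
print(gini(equal_ish), gini(skewed))
```

The point of the 0.1%-share figure is the same shape: as more of the total shifts to the top entry, the coefficient climbs.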
As I clarified, I was referring to the statement that economic inequality was low, which is true if and only if you are comparing across uniform demographics. I made no statements regarding increased wages across the board.
Your original post–the one I replied to–lumped in several distinct statements.
Try to argue to the point instead of feeling butt-hurt that someone suggested there are structural inequalities that benefited white men more than anyone else.
I did–you’re the one who has failed to bring any evidence into this. I’m not “butt-hurt” that you are asserting white males enjoy a structural advantage: I’m annoyed that you didn’t substantiate your point.
There are lots of structural inequalities that you should be able to point to–use some numbers, reference some papers.
What I think we have here is a case of the body snatchers. Will the real Michael please stand up?
You know full well that when you say something was good at any time you must qualify with ‘for whom’. I implied that for the majority of people capitalism was NEVER good. Yes, was it good for millions of white American men in the 50s? Sure. My message does not contest this.
Not sure why you want to defend Capitalism for that tiny minority of people. You also do realize critics of Capitalism called the Soviet Union a state capitalist system. To the average worker, working at an American firm is indistinguishable from a Soviet one in all the important ways.
Now in regards to Free Software, I didn’t say it was a big penance. Thinking more about it I would say Venture Communism as stated by Dmytri Kleiner is a better approach.
Whether industrial capitalism is good is somewhat of a value judgment. I think we can agree that it was morally superior to the slave-powered economy that preceded it. I would also argue that, for a time, it worked. We had 4-6 percent annual economic growth (3 times more than is normal today) in the 1940s-70s. Whatever one thinks of an economic system, the fact is that it did produce wealth. I think it’s also clear that capitalism is ceasing to work well and that we may have to move to something else. Certainly, it will look more like welfare-state socialism as seen in Scandinavia than the psychotic, mean-spirited capitalism of the U.S. circa 2017.
At my core, I’m a pragmatist. There was a time when industrial capitalism, despite its flaws, worked very well. It is now working very poorly and probably needs to be replaced.
I would just like to clarify your value judgment here. When you say it “worked” in the 1940s-1970s, you are specifically talking about the USA and white men, right? You are certainly not talking about the child laborers who mined the ore for minerals in Africa? The important thing to keep in mind is that all systems have global consequences now, and did even then. Also important to note is that there are many different modes of production happening at the same time. While there was some capitalism in the 1940s-70s USA, there was also some socialism, some communism, some welfare state. They worked at various levels and fed off one another. The reason the capitalist computer explosion happened in the 80s was that it was built on huge government-funded research and expenditure in prior decades.
So there is no totality of Capitalism, then or now. It was an interplay of many, many systems. There is no ROOT cause for why “capitalism” worked then and doesn’t work now. We live in a complex adaptive system.
When you say it “worked” in the 1940s-1970s you are specifically talking about the USA and white men right?
More than that. The US prospered, but US prosperity allowed us to rebuild Europe and Japan (Marshall Plan) and make the world more peaceful in general.
The reason Eastern Europe is poor and Western Europe is rich is the Marshall Plan. If the US hadn’t rebuilt W. Europe and Japan, they’d still be poor. We did this because the punitive handling of Germany after World War I led to Hitler and we didn’t want to see that again. After that mistake, we realized that our enemies were governments, not countries, and that rebuilding our former adversaries was a way to prevent them from going bad in the future.
I don’t love capitalism. Pure capitalism is atrocious. However, I think it’s important to pay attention to what works and what doesn’t. Restrained capitalism seems to be better than command-economy socialism, as the latter has failed every time.
While there was some capitalism in the 1940s-70s USA, there was also some socialism, some communism, some welfare state.
Absolutely. Pure capitalism is bad, no question. Capitalism when tempered with 30-50% socialism (and the proportion of socialism that we’ll need will increase, as technological unemployment mounts) is much more humane and also works better. Pure capitalism will never fund the R&D that you absolutely need if you want to get back to 4-6% economic growth instead of the shitty 1-2% we’ve got going on now.
The reason the capitalist computer explosion happened in the 80s was that it was built on huge government-funded research and expenditure in prior decades.
100 percent correct. And the lack of research funding (and the attendant three decades of low economic growth) is the main reason why this country is going into decline.
So there is no totality of Capitalism, then or now. It was an interplay of many, many systems. There is no ROOT cause for why “capitalism” worked then and doesn’t work now.
I can agree with that.
Restrained capitalism seems to be better than command-economy socialism, as the latter has failed every time.
I think I found the source of our disagreement. It is often that people use the term capitalism to mean free market and private ownership. I mean it as a select few owning the means of production. The way Marx meant it. To me capitalism IS command and control.
If you look at most capitalist corporations, they are command and control. Tiny fiefdoms. I suspect when you say capitalism you mean private ownership and markets.
I guess I am using the term as it was meant by the people who coined it. A derogatory term to hark back to feudal times.
In other words as you move from the spectrum of Capitalism->Socialism->Communism you move from command and control to democratic to social ownership.
This is why it is perfectly sane to me to call the Soviet Union state capitalism. Because the social relations are capitalist in nature (in other words, few own the means of production).
When you think communism you think command and control. When I think communism, I think everyone owning their production. When you think capitalism, you think markets and private ownership. When I think capitalism, I think command and control.
In other words, if you take the idea of capital ownership to the extreme, where everyone is their own company and own their own production, that’s communism to me. Oh and of course communism doesn’t even have money, because money is another command and control tool created by the state. And communism is stateless.
I’m glad to see this. 18-F seems like a great program.
As I get older, I’m more interested in mission-driven organizations like government agencies and non-profits than in for-profit enterprises, because I’ve recognized that we are unlikely to see any regard for technical excellence in the for-profit world absent heavy regulation or a cultural shift (de-Boomerization), which will take a long time. Bell Labs isn’t coming back, nor is Xerox PARC, so the idea that you can have your cake and eat it too (Google salaries, Bell Labs culture and technical excellence) is out, at least for now. The cost cutters have won the cultural fight in the corporate world and their victory (our defeat, for those invested in the ideal of technical excellence) will not be reversed.
When there is an important mission, people will work 60 hours per week to get it done, but they almost never need to do so. Often, it’s force of habit. Adult project management (which is the antithesis of what you see in SillyCon Valley) is all about redundancy. So the truth is that a well-engineered organization would probably function just fine if people worked 10 hours per week; of course, the people involved in such organizations are usually highly motivated and therefore not inclined to slack, even if slack is built into the system.
The problem in the private world is that even if you can achieve the mission (to the extent that there is one) in a 5 hour work week, you do everything you can to hide that fact. Why? Because there are people whose job it is to cut costs and squeeze people, and even if they seem friendly and concerned with others' careers, they’re not. So it’s adversarial by nature. This leads everyone to put noise in the system and DoS the cost-cutters with apparent productivity (that might be meaningless) so that no one really knows what’s going on.
I’d imagine that when you have a mission-driven organization like 18-F, you have to deprogram the people (as described in the OP) who come from the private sector, because while in a corporate environment, you have to behave adversarially (not maliciously, but with self-preservation as an unambiguous top priority) just to survive, a mission-driven organization really needs everyone to be on the same team.
Adult project management is all about redundancy.
I used to work on what I felt was an important mission where much of the work was time-sensitive and this is something that I felt very acutely. If you, as project manager, provide resources adequate for the projected median level of demand AND your projections and everything else are right, congratulations! Your team will be under-resourced ~50% of the time (assuming A Few Reasonable Things about the probability distribution of workload).
Redundancy or resiliency necessarily involves having some capacity that goes unused some of the time so that you can deal with times of higher workload. Cost-cutting is all about getting rid of excess, healthy or not; people making these decisions… might not know or care about the variability of the workload. Or the quality of technical work. Or particularly care about the people doing the work.
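The "resourced to the median" point can be sketched numerically. This toy simulation uses a made-up, right-skewed workload distribution (the parameters are arbitrary, chosen only to be demand-like):

```python
# Toy simulation: staff to the *median* weekly workload and you are
# short-handed roughly half the time; only deliberate slack brings
# that rate down. The workload distribution here is invented.
import random
import statistics

random.seed(1)
# Weekly workload in person-hours, right-skewed like real demand.
weeks = [random.lognormvariate(mu=4.0, sigma=0.5) for _ in range(10_000)]

median_capacity = statistics.median(weeks)
generous_capacity = 1.5 * median_capacity  # deliberate slack

short_at_median = sum(w > median_capacity for w in weeks) / len(weeks)
short_with_slack = sum(w > generous_capacity for w in weeks) / len(weeks)

print(f"under-resourced at median capacity: {short_at_median:.0%}")
print(f"under-resourced with 50% slack:     {short_with_slack:.0%}")
```

By construction, about 50% of weeks exceed the median; how much the slack helps depends entirely on the tail of the distribution, which is exactly the thing cost-cutters tend not to model.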
It was a great mission but intensely frustrating that our management were blind to the risks threatening the customer relationships, productivity, and integrity of our team, especially given that what we were helping our customers do was… manage risk.
The problem in the private world is that even if you can achieve the mission (to the extent that there is one) in a 5 hour work week, you do everything you can to hide that fact. Why? Because there are people whose job it is to cut costs and squeeze people, and even if they seem friendly and concerned with others' careers, they’re not. So it’s adversarial by nature. This leads everyone to put noise in the system and DoS the cost-cutters with apparent productivity (that might be meaningless) so that no one really knows what’s going on.
Can I just say, I <3 this analysis.
Now, not sure how I feel about that analysis, mainly because… a place where people obfuscate how much they are actually doing to protect themselves from cost-cutting is exhibiting at least one profoundly unhealthy social phenomenon.
The company I work for (MITRE) is mission-driven and financially viable, but it’s a weird class of business. MITRE is a non-profit that operates Federally Funded Research and Development Centers, supporting different parts of the federal government in a ton of different things. Basically, MITRE becomes embedded in the sponsoring organization, providing whatever technical research, development, and support they need. We don’t make products (we’re actually legally barred from making products), and our sole interest is in serving the needs of the sponsor and the interests of the American people.
Can more mission-driven organizations be financially viable in the long term?
It’s hard to say.
Large corporations used to fill that role, but (a) the bad people drove out the good and (b) the audit cycle became quicker (high-frequency politics) and therefore private-sector R&D is pretty much dead.
Small companies can remain that way if they make it a priority and eschew venture funding until they can take VC while making absolutely no cultural concessions.
It’s an old way of doing business, though, to care about what you are building and what your product or service actually does. It’s an aristocratic mentality that’s viewed as out-of-place by today’s private equity/techie pirates.
Was going to write a comment asking @DRMacIver if he intended to combine coverage-driven analysis with the property-based testing of Hypothesis and then scrolled down to see the “glass box testing” section.
I think that unguided fuzzing will inherently require more time than property-based testing. Have you considered having a semi-guided mode? If you can find a roughly robust way to partition inputs by the program flow that they cause AND infer additional input constraints from those partitions (both of these aren’t easy, I understand) a semi-guided tool sounds great– “hmm, spend the next 5 minutes just doing inputs that produce traces at least this long”.
Is there prior art in partitioning execution traces by similarity or inferring properties from input space partitions?
“Guidance” as in human eyes looking at some representation of intermediate fuzzing results and human input specifying to the fuzzer things like, “spend your time only on inputs (that produce traces) like these” or “omit inputs like these; the crash is (upon my insightful human inspection) trivial”.
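The partition-by-trace idea above can be sketched with a toy: run random inputs, group them by the execution path they take, and let a human (or heuristic) decide which partition deserves more fuzzing budget. Everything here (the target program, the trace mechanism) is a stand-in for illustration, not Hypothesis internals:

```python
# Hypothetical sketch of semi-guided fuzzing: partition random inputs
# by the execution trace they produce, then spend extra budget on the
# "interesting" partition (here, the one with the longest trace).
import random
import sys
from collections import defaultdict

def program_under_test(x):
    # Toy target: different inputs exercise different branches.
    if x % 2 == 0:
        if x % 4 == 0:
            return "deep"
        return "even"
    return "odd"

def trace_of(fn, x):
    """Record the sequence of line numbers executed (a crude trace)."""
    lines = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            lines.append(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        fn(x)
    finally:
        sys.settrace(None)
    return tuple(lines)

random.seed(0)
partitions = defaultdict(list)
for _ in range(200):
    x = random.randrange(1000)
    partitions[trace_of(program_under_test, x)].append(x)

# "Guidance": pick the partition with the longest trace; a semi-guided
# fuzzer would now mutate inputs drawn only from that bucket.
deepest = max(partitions, key=len)
print(len(partitions), "distinct traces; longest has", len(deepest), "lines")
```

Real coverage-guided fuzzers (AFL, and later Hypothesis's own coverage work) use branch/edge coverage rather than raw line traces, but the partition-then-prioritize loop is the same shape.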
The linked email chain is a bit interesting. Basically, nobody involved in this looks good, and we will probably never get the truth of things.
At least seven different people have made credible sexual harassment allegations against Jacob Appelbaum at this point. I don’t know what more evidence you need.
“Jacob actually did wrong people” would seem to be the most parsimonious theory for explaining the facts. Every explanation that doesn’t include that requires a conspiracy of some sort, not that that rules out the possibility.
Credible allegations are, regrettably, not proof.
I think the guy’s a scumbag, but procedure matters.
There are levels of epistemic status between “provable in court” and “as plausible as its opposite”. These levels of support for different theories are actually critical for making decisions about who we associate with, who we trust, and to what extent we trust them. Think about the level of proof that you have for each of the individual bits of information that are gathered about a person in a job interview. There are lots of alternative explanations for many things that a candidate might do, but usually you only care about the most likely one-and-a-half explanations.
This is my argument for why I think it’s reasonable for people to publicly disassociate from someone even if there’s been no, e.g., trial of fact in a court of law.
Fair point, but I’ll also point out that the tendency these days is to immediately blow up into mobs–so, for me at least–the standard of evidence has to be higher than “it’s simpler than a conspiracy and people are saying it”.
There is very much a category for “there’s probably something going on, nobody is going to tell the whole truth, and it doesn’t really affect me–let’s ignore it.”
Great idea… but.
Isn’t the survey more like “would you gamble part of your MTurk earnings on the trained model being successful?”
Will requesters benefit from using such a model? Who would monitor machine learning system usage and administer the royalty streams?
In this domain especially, labor doesn’t seem to hold enough leverage to make this happen unless requesters want it to happen and want to administer royalties.
My DoD skepticism level: unless there are specific contractors, technology offices, or project offices who care about this, i.e. think it will give them a competitive advantage or make good things happen for them faster, nothing will come of this. A cultural change like this would require a lot of clout and/or an impressive record of success behind it; no visible projects already underway is not a good sign for building a record of success.
The most overfunded and most euphemistically named institution on the planet needs help from the OSS community. Makes me wanna hurl.
But thanks for sharing - it is interesting to know that this exists. I hope it’s gonna be an epic fail.
It doesn’t help anything to be so negative, and it can even discourage people from sticking their neck out for bigger changes if even little things like this get shit on and ripped apart. I’m not a fan of the DoD (or the government in general), but I’d rather they do this than keep all of their code private. We’re paying for it, we should have access to it. It’s a small step, but it’s better than nothing.
Also, I think you missed the point. I don’t think they’re really “asking for help”; they’re just making the code available on GitHub. People can submit issues and pull requests if they want, but they can also just grab it and go use it for their own purposes.
It doesn’t help anything to be so negative
I am not negative. I am voicing my honest feelings about this. I am absolutely terrified about what is going on in the U.S. at the moment. No objectivity or positivity intended - raw feels. I am not a citizen of the U.S. I am watching from the outside and… I have to repeat: I am terrified.
Just to summarize: the biggest army in the world by far is about to up its budget by $53 billion. Do you have any idea how mind-boggling that is? WWIII, anyone? So please forgive me if I have trouble rustling up positivity when I read anything about the US military complex.
Also, I think you missed the point. I don’t think they’re really “asking for help”; they’re just making the code available on GitHub.
No. I think you’re missing the point.
Call to action
In true open source fashion, DDS is hosting an open call to developers, lawyers, and other members of the open source and free software communities across the government and private industry to comment and review a draft open source agreement that is currently available on Code.mil. The agreement will outline the terms of use and participation, and will be finalized by the end of March. The draft can be found at https://github.com/deptofdefense/code.mil/blob/master/LICENSE-agreement.md.
Who will pay me if I participate in this process? Right. Nobody.
I am awfully, awfully sorry. But I feel very strongly at the moment that the US military complex is sucking up too much energy already. I think it would be good for all of us if they got less attention, less money, less everything.
I never get why they create/announce these things without at least one project for the public to get an idea of what will typically be there. I’d like to know if this will be web stuff, out-of-date COBOL code from the ‘80s, embedded code, or something like maths/geometry/mapping software and libraries.
The military is the ultimate large bureaucracy… extrapolating from smaller ones, I can imagine that it’s hard to coordinate two separate efforts like that.
Agreed. It’s pretty damn anticlimactic to announce that you’re going to “do open source” and start with a call for help to nail down license details.
The best case for them is that they’re already asking some internal and external lawyers, contractors, project leaders, and project owners to weigh in on the license, but want the process to be visible; worst case is they don’t have the bandwidth to identify people to ask and ask them and are “We’re on GitHub!”-ing through it.
Rebuilding my personal infrastructure (at home and hosted) using Ansible. I’m creating all of my own playbooks, roles, etc. so it’s going slowly but once it’s done my management workload should drop appreciably. I currently have far too many snowflake servers but that’ll soon be a thing of the past.
What level of home/personal servers do you have set up that Ansible has become desirable for them?
Also, what do you think about Ansible vs stuff like Puppet or Chef?
I don’t have as many as my post possibly suggested - maybe ~30 servers across a mix of physical and virtual (and a mix of operating systems - mostly Debian, but also OpenBSD, NetBSD [soon to be decommissioned] and FreeBSD). I also have a few Windows Server VMs.
I host all of my own Internet services - DNS, mail (SMTP and IMAP), web sites for myself and family members, VPN, etc - as well as having lots of services at home (firewall, DNS, NTP, DHCP, Samba, web proxy, backups, home automation system, etc) and maintenance is becoming more and more onerous. I want to get rid of stuff like having to ssh into 10 servers to install the same package on each. Yes, there are things like Cluster SSH but I want to do things “properly” (and logging in via ssh and doing things manually doesn’t fall into that category in my book).
I don’t have any hands-on experience with Chef but I’ve used Puppet lightly in the past. I’m no Puppet expert so any comparison between it and Ansible wouldn’t do either of them any favours. I’ll just say that I rather like Ansible - the fact that it’s agentless is great and the relative simplicity is a plus in my book. Also, my Python skills are far better than my (rather rusty) Ruby skills. Sadly, I have no bow hunting skills.
20-30 servers definitely seems like enough to start looking into tools to make managing them more pleasant, especially since it looks like you run enough services to host a small business.
As someone who has a minor ops interest (I run a server for a blog and some version control stuff, and have been doing it the hacky, manual way that one can get away with when you only have a single server), Ansible does look interesting.
There’s benefit to using a configuration management tool even if you only have a single public-facing server. Reproducible deploys, everything in version control, etc. I still look after a few WordPress blogs for family members that are not yet managed in my own VCS repo and they’re a nightmare to look after.
As part of the Ansible deployment I’ll be cleaning up all of that. Also, doing things like migrating an Apache installation to nginx, deploying DNSSEC for my domains (and all that goes with that, including DANE), deploying Let’s Encrypt for a few sites, etc. All in all, quite a lot of work ahead of me!
True enough. I don’t think how I’ve done it is ideal, it’s just where I am after some years of it being a minor side project. I use it to host https://junglecoder.com and I’ve used that server as an excuse to learn Go, Bash and a few other things. If I can find the time/energy, I may put Ansible or something like that next on the list.
Side projects are great for learning and it’s perfectly understandable that, as they grow organically, maintenance becomes more and more of an overhead. That’s what’s happened with me - I like running my own stuff but the sysadmin overhead is so high that it’s taking all of the time I should be spending on proper side projects.
Ansible is pretty quick to get started with and the tutorials are good. Be warned that some older content online doesn’t follow current good practice - some older options are deprecated, etc. Perfectly understandable, but I’d suggest reading the “Best Practices” guide before you get started seriously.
I suggest picking an ops / deployment / setup task that you at least kind of want to do anyway and learn Ansible by automating it as you go.
I still don’t sysadmin any of my own stuff, but I now have some familiarity with Ansible and have a set of playbooks for deploying a web-based RSS reader, setting up the services / accounts it depends on, and taking / restoring backups from it.
Despite having little patience, spending little time, and bearing severe prejudice against Yarvin, the last time I dug through urbit-related docs and descriptions I was impressed by the ideas and the parts of the system design that I understood. My previous comments (with a strong “you can’t be serious” flavor to them) seem ill-considered to me.
(edit) Having just re-read the set of c3 integer types and
Some of these need explanation. A loobean is a Nock boolean - Nock, for mysterious reasons, uses 0 as true (always say “yes”) and 1 as false (always say “no”).
it’s hard to say that urbit devs couldn’t be trying to fuck with people, just a bit.
it’s hard to say that urbit devs couldn’t be trying to fuck with people, just a bit.
If I remember correctly, Yarvin regrets this decision. He wanted to get outside of a given dev’s comfort zone to make them pay attention, but this one was a little too much.
I think the criticism here (despite being written in terms of humans, sapient horse-like beings, and Martians) is good, both in these quotes and elsewhere.
Still, [“Martians,” the Urbit developers] fail their public every time they use esoteric terms that make it harder […] to understand [Urbit]. The excuse given for using esoteric terms is that using terms familiar to Human programmers would come with the wrong connotations, and would lead Humans to an incorrect conceptual map that doesn’t fit the delineations relevant to Martians. But that’s a cop-out. Beginners will start with an incorrect map anyway, and experts will have a correct map anyway, whichever terms are chosen. Using familiar terms would speed up learning and would crucially make it easier to pinpoint the similarities as well as dissimilarities in the two approaches, as you reuse a familiar term then explain how the usage differs.
[T]he Urbit authors are not trying to be understood, trying their best not to be, and that’s a shame, because whatever good and bad ideas exist in their paradigm deserve to be debated, which first requires that they should be understood. Instead they lock themselves into their own autistic planet.
Good thing that nobody cares whether I flip or flop because I’m back to “some useful clever ideas thoroughly embedded in a mountain of impractical clever ideas explained in a deliberately hard-to-understand mess promising many things that will never be delivered”.
To play devil’s advocate, that’s how it works in shell… there is only one way to succeed, but there are multiple ways to fail.
[Comment removed by author]
Shell has an if statement and && and || constructs. An exit status of 0 is true, and any non-zero status is false.
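That convention can be checked from Python on a POSIX system (a small sketch, assuming the standard `true` and `false` utilities are on the PATH):

```python
import subprocess

# Shell convention: an exit status of 0 means success ("true"),
# and any non-zero status means failure ("false").
assert subprocess.run(["true"]).returncode == 0
assert subprocess.run(["false"]).returncode != 0
```

This is the inverse of most languages' integer truthiness, which is exactly the confusion the loobean discussion above trades on.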
It wasn’t for mysterious reasons. Way back when, the designers of the first versions of Nock and Hoon chose this specifically because it was different and they wanted to buck norms.
Can someone explain how using a class name like “fontSize-20” is any different than using the style attribute with “font-size: 20px”? This example comes straight from the docs on this library, but I genuinely don’t understand how that is useful.
(Edited for typo)
I tend to read the articles posted here in full, and to my surprise the only mention of this being satirical is in the Lobsters tag? What if someone else on the Internet discovers this joke?
Why did the nuclear plant at Chernobyl catch fire, melt down, destroy a small city, and leave a large area uninhabitable? They overrode all the safeties. So don’t depend on safeties to prevent catastrophes. Instead, you’d better get used to writing lots and lots of tests, no matter what language you are using!
That analogy doesn’t make any sense. How exactly would “tests” have prevented the Chernobyl disaster? Or on the flipside, how do you override the safeties of modern languages? unsafePerformIO? That seems like a pretty terrible reason to question the entirety of your type system.
Ironic choice of example given that the Chernobyl disaster was literally caused by people executing a test:
During a hurried late night power-failure stress test, in which safety systems were deliberately turned off, a combination of inherent reactor design flaws, together with the reactor operators arranging the core in a manner contrary to the checklist for the stress test, eventually resulted in uncontrolled reaction conditions[.]
Slow going through Imagined Communities, a parenting book, and rereading a very slim volume to find the following quote…
Fascisms, past and future, are politically nothing other than insurrections of energy charged losers, who, for a time of exception, change the rules to appear as victors.
Nietzsche Apostle, Peter Sloterdijk
… in honor of the election.
Excellent way to implement DFS when you have a language with iterators or iterator-like things.
What I like especially is that this can be changed to a BFS by changing 2 lines of code; the pop and the stack[-1].
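For readers without the article open, here is a minimal sketch of the stack-of-iterators pattern (the `children` callback and tree layout are illustrative, not necessarily the article’s exact code):

```python
def dfs(root, children):
    """Preorder depth-first traversal using a stack of iterators."""
    yield root
    stack = [iter(children(root))]
    while stack:
        try:
            node = next(stack[-1])   # take from the newest iterator: DFS order
        except StopIteration:
            stack.pop()              # this level is exhausted
            continue
        yield node
        stack.append(iter(children(node)))

# For BFS, change the two indexing lines to read from the *oldest*
# iterator instead: next(stack[0]) and stack.pop(0).
tree = {1: [2, 5], 2: [3, 4]}
kids = lambda n: tree.get(n, [])
print(list(dfs(1, kids)))   # -> [1, 2, 3, 4, 5]
```

With the two-line BFS change, the same tree comes out as [1, 2, 5, 3, 4], i.e. level by level.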
Maintaining a stack by yourself in code seems rather unfortunate (look at how much longer the code is than the generator version), and very much points to a language deficiency. There’s no excuse for a function call being slower than a list access, and tail call support would remove the stack depth issue.
Tail call support wouldn’t get rid of the need for a stack here because this function calls itself more than once aka “isn’t tail recursive”.
I would add the code from the article with comments showing where it is not tail recursive, but pre-formatted text is not working for me at the moment.
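To illustrate the point, a typical recursive generator DFS (a sketch, not necessarily the article’s exact code) has its self-call inside a loop, so no call is in tail position:

```python
def dfs(node, children):
    yield node
    for child in children(node):
        # The recursive call sits inside a loop, under `yield from`,
        # and runs once per child -- it is not in tail position,
        # so tail call optimization cannot eliminate the stack here.
        yield from dfs(child, children)

tree = {1: [2, 5], 2: [3, 4]}
print(list(dfs(1, lambda n: tree.get(n, []))))  # -> [1, 2, 3, 4, 5]
```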
You’re right. I find it hard to believe that stack depth would be an issue, at least for a balanced tree, but if it is, Python really should offer a way to have deeper stacks in the language rather than have users end up writing their own stack emulation.
So this precise issue was the root of a performance problem in the clojure.tools.emitter.jvm project which I used to work on. Essentially the code generator worked by emitting OpcodeList = List[Either[OpcodeList, Opcode]]. The resulting structure was then traversed one opcode at a time, which meant that you had to recursively call next() on arbitrarily many lazy generator terms in order to get the next opcode you wanted to emit.
Using the stack of iterators pattern described here becomes an optimization then, because you elide n-1 calls to next() which have to walk back down the stack of iterators to the bottom-most iterator in order to get the next actual opcode you want. Because you’re traversing back up and down many many nested stateful iterators, there isn’t a way to optimize this recursion with TCO because you still have to go down n-1 iterator structures to figure out what buffer you’re actually taking the next() of.
In comparison with the stack of iterators pattern, your next() just needs to peek the top of the stack, take the next from that, and recurs only if it’s another iterator structure. This means you traverse down the entire depth of the tree precisely once, rather than doing so once for every leaf of the tree.
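A rough Python analogue of that optimization (illustrative only; the Clojure emitter itself obviously differs) is flattening nested lists with a stack of iterators, so each level of nesting is descended into exactly once:

```python
def flatten(nested):
    """Yield the leaves of arbitrarily nested lists, walking each level once."""
    stack = [iter(nested)]
    while stack:
        try:
            item = next(stack[-1])    # only peek the top of the stack
        except StopIteration:
            stack.pop()
            continue
        if isinstance(item, list):
            stack.append(iter(item))  # descend once and stay there
        else:
            yield item

print(list(flatten([1, [2, [3]], 4])))  # -> [1, 2, 3, 4]
```

Contrast with naive recursive flattening, where producing each leaf re-traverses every enclosing generator on the way down.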
there isn’t a way to optimize this recursion with TCO because you still have to go down n-1 iterator structures to figure out what buffer you’re actually taking the next() of
The mutable variable in the example stack-of-iterators code, or the one you’d get from a clojure loop/recur, is what provides that benefit. In fact, Clojure performs this optimization for nested LazySeq objects: https://github.com/clojure/clojure/blob/e547d35bb796051bb2cbf07bbca6ee67c5bc022f/src/jvm/clojure/lang/LazySeq.java#L58 - Note that it flattens lazy seq thunks recursively and elides that work on future .seq() calls via the synchronized mutable field.
Sadly, the optimization is lost if you’re making your own nested structures because you have the vector -> seq -> vector loop going on. You could recover it with a mutable variable for the top element on the stack. That’s precisely what TCE would provide, as long as you could ensure that the recursive call was actually in tail position. Sadly, the ‘next’ interface is not ideal for that. You need something that returns both the next item and the continuation. So instead of (next seq) -> seq, you’d want (uncons seq) -> [head tail]
In the case of tools emitter, you could probably also have achieved this effect without mutability via an automatic splice-on-construction collection type. Instead of [:blah … [:foo …] …. :bar] you’d have (spliced :blah … (spliced :foo …) …. :bar) and pay that cost on construction.
Is there a standard name for a list uncons that returns an option/maybe? It’s a function I’ve often found myself wanting.
Pattern matching and related constructs generally eliminate the need to name it. However, I’d just call it uncons.
In Clojure, you can write (if-some [[head & tail] seq] …), but if I recall correctly, the underlying implementation still always uses first/next. Haskell fails better here with the colon/cons syntax in patterns, but trades a sequence abstraction for a concrete list type (at least without various extensions).
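In Python terms (an illustrative sketch, with `None` standing in for Nothing), such a Maybe-style uncons might look like:

```python
def uncons(seq):
    """Return (head, rest-iterator), or None if the sequence is empty."""
    it = iter(seq)
    for head in it:
        return head, it   # `it` is the continuation: everything after head
    return None           # the Nothing case

pair = uncons([1, 2, 3])
assert pair is not None
head, tail = pair
assert head == 1 and list(tail) == [2, 3]
assert uncons([]) is None
```

Returning the iterator as the tail keeps it single-pass, which matches the stateful-iterator setting discussed above better than a list slice would.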
I’m thinking Scala here, and I’d prefer to avoid the match/case syntax entirely if possible since it’s very easy to use unsafely.
It’s a two-sided lemon market, insofar as most development jobs are also of low quality, and the cause of the lack of prepared, capable programmers isn’t that most programmers are stupid, but that work experience also has a pyramidal shape to it. There aren’t many people who have high quality work experience, because there isn’t a lot of high quality work experience to be gotten.
Most employers reject candidates as soon as there’s a whiff of negative experience: a job that didn’t go well, a company with a negative reputation, outmoded technologies. Then they try to underpay, or they hire people for one job but assign them to something else, and whine about their retention and hiring problems. I’ve met entitled “star” engineers, but they’re rare and don’t age well; entitled employers, who only want 9s and then expect those perceived 9s to wash their dishes, are so common that it’s unremarkable.
The best way to be employable, in this superficial industry run by non-technical and emotionally incontinent monkeys, is not to have any reason, of any kind, for someone to reject you. Even good things can be reasons to reject someone. Ten years of machine learning experience at a prestigious lab? “Too theoretical”. Over 50? “Resistant to change.” Made the mistake of becoming well-known for being too ethical? Well, see my experience. No wonder then that we have such an age discrimination problem; the only way to be compliant is to be a complete blank slate, and fresh college kids are best in that regard (even if most of them don’t know anything).
The truth about this massive lemon party is that no one has any real business need to make it better. Companies get funded and acquired and priced according to headcount, and MBA-toting “star” managers judge opportunities based on the sizes of the teams they’ll get to run (regardless of what those teams do) so there’s no real cost to the business in hiring lemon developers or managers, and there’s even less perceived cost in rejecting people for stupid reasons. It’s technology people who get hurt by it; not only do we end up with lousy co-workers, but when we try to hire good people, we see them getting rejected for stupid reasons.
The truth about this massive lemon party is that no one has any real business need to make it better.
That’s not totally true. There’s a niche of companies that do high-integrity or high-security development of software. Some warranty the results. They charge a premium of at least 50% over other companies. All the ones I’ve seen keep growing through referrals from clients. There are companies that internally do something similar with IT, either in general (rarest) or for specific teams on critical stuff (uncommon). So, there’s some demand that comes in many forms with serious money to be made. Not entirely lemons but mostly lemons.
Rest of your post is spot on. Curious, what did you mean by “Well, see my experience.”
That’s not totally true. There’s a niche of companies that do high-integrity or high-security development of software. Some warranty the results. They charge a premium of at least 50% over other companies.
That’s a fair observation. Unfortunately, those companies seem to be very rare. Is there a list of them published somewhere?
Curious, what did you mean by “Well, see my experience.”
I’m glad that you’re asking, which means that my publicity has faded a bit.
I used to be a bit notorious for some high-profile actions that, while ethical to a fault, were judged negatively by some notable technology companies. There are rumors that I attempted to unionize one company where I worked; this is not true, although I have spoken sympathetically on the concept of software unionization (not that it will ever happen). My name was (erroneously) listed on a “suspected union organizer” blacklist in Silicon Valley for a while. In fact, I don’t know the first thing about organizing a union (and, at this point, I couldn’t care less about the Valley).
I’ve survived (and just barely) some attempts to destroy my reputation and career, and my faith in this industry is nonexistent. We are mostly in the business of helping rich guys, who have no concern for ethics or law or social justice, unemploy people. Software could be so much more, but we’ve let it become this disgusting business that I’m embarrassed to have been a part of.
Hate to hear it happened to you. However, I totally agree with software developers unionizing. I agree with most in middle class sectors doing that. The reason in IT is that the job is critical, it’s prone to layoffs, there’s lots of discrimination, mismanagement is rampant, performance metrics suck, and most importantly the major companies were price-fixing labor. They devolved into a cartel just like I suspected and what I predicted would happen to most oligopolies. Except this was unusual given it crossed some market sectors with fiercely-competitive companies instead of incrementally-competitive ones that are typical (eg AT&T vs Verizon). A situation this bad for labor that only drives money into CEO’s or founders' pockets is exactly the reason unions existed in the first place.
Now, people reading might think of bad unions with unreasonable demands and costs blowing out of proportion. Not necessary. When talking about giving more to workers, I like to use low-margin companies as examples given that higher-margin companies should be able to match at least what low-margin ones do. The best example overlapping low-margin and union is Kroger chain of grocery stores with UFCW union. I’ve actually read that contract and talked to their union representatives about all the stuff they deal with. Here’s what its terms are like:
Workers get paid a percentage better than minimum wage, with earnings going up over time as experience increases. They get paid a higher rate for higher positions. These are standardized for common roles so there’s no discrimination on pay. Working in a higher position temporarily to cover a spot (eg someone is sick) earns you the higher position’s pay for however long you worked it. You get overtime if you work overtime regardless of what the state says.
Company offers health insurance, dental coverage, and a retirement package. Union manages that side of things so company can’t screw with it.
Workers are guaranteed 10 hours between shifts so they can at least attempt sleep. They can waive it for extra money but can’t be forced to.
There’s standardized breaks and lunches so people can get some rest. The contract also mandates a breakroom so people can’t bump into them asking for them to work on their break. A biometric time clock records the shifts, breaks, and lunches. That data can be read by both management and the union.
Workers get a few paid sick days and 1-2 vacation weeks per year depending on position. The vacations are paid. For hourly workers, the system averages the hours they worked then pays the average.
Best for last: due process.
I’m dedicating a paragraph to that as I think it should be a national law. :) Union reps say the crap management pulls on workers is endless. Employees will tell you, too. Management is at Walmart’s level or worse. Everything from racism to forcing young people to skip lunches to intentionally losing paychecks to deploying practices that result in broken backs and shit. Union resolves most of it in heated negotiations without it going further. Sometimes it just takes a call.
Due process is the solution to this. It says Kroger can’t just arbitrarily fire a worker without basically paying them a good deal of money for a period of time. It’s why they didn’t do layoffs during the recession. Just hour cuts. They have to come up with performance metrics and policies defining good work. Then, they must write up workers who violate those, with evidence that they did. After a certain number, they can fire them. For any termination, a worker can challenge it. If unreasonable, the union will defend the worker first in negotiations and then in arbitration. Union reps say the workers usually get their job back since they were doing good work but fired on a technicality for political reasons. Or the rep sits in the store for hours seeing tons of violations that get no write-ups while that one employee is getting singled out. Due process with clear standards for performance is the union’s first line of defense against bad management.
So, I look at all these companies that treat IT workers like shit. I notice that most of them have a higher profit margin than the 1.6% Kroger does. Many have tons of revenue. Management makes good money with executives making a killing. The things above actually don’t cost much for most mid-sized or large firms. IT will still be plenty productive. They still oppose such things. That’s just because they’re fucking evil. ;) So, unions and then campaign contributions for better labor laws are a necessary evil if workers gotta face evil every day.
Note: I did this one last night but the submission disappeared for some reason. Had to redo it… (sighs)
Do you have any ideas on what the core problem is? And do you think software as an industry is special in this regard or just ahead of the curve on these issues?
“If you stand for nothing, Burr, what’ll you fall for?” – Hamilton
I think that we’re often attracted by software because it promises us lucrative jobs and an opportunity to get paid for working in a world of abstraction and syntax (code). And “Big 4” cultures are designed to appeal to people who want to believe in institutional meritocracy, even though the evaluation of “merit” becomes, at an individual level, more political and malignant in the corporate world than it ever was in school.
I’d imagine that the cultures in government and research are very different, but private-sector software is this culture built up by people who (a) are attracted by the money and (b) don’t really stand for anything in an ethical sense. This isn’t a dig, because I was a greedy douchebag when I was younger too– trust me, I’m in no position to cast any stones– but now that I’m older and aware of just how poisonous that moral emptiness can be, what used to seem like an abstract shortcoming (i.e. “we’re not working on things that matter”) is now more existentially pressing.
I also think that we (and the media) tend to trivialize this by looking at, e.g., Snapchat and saying that our generation is being “wasted” on frivolity. That’s true, but the frivolity isn’t the worst part of it, and most of what the VCs are funding isn’t frivolous tech but WMUs– weapons of mass unemployment. It’s more pernicious than just “frivolity”.
To answer the broader question, I think that Corporate America is in (welcome) decline. Kids in college still want to be investment bankers and Big 4 programmers, but less so than when I was in school. You’re seeing more interest in public service and research, and less blind herd behavior. As for the self-contradiction of Corporate America, Donald Trump (the id of the corporate class) is showing us that this greed, money-worship, and self-absorbed careerism lead to narcissism, then full-blown egomania, and then frank destructive fury that hurts everyone. Hillary Clinton is a flawed person but she is a public servant and she holds to her values and has a vision for where to take the country. Whether that vision is correct, I’d rather not debate here, but she has one.
The next-quarter mentality isn’t limited to software, and it seems to be destructive everywhere. It’ll take a lot of work to remove it. The pressures involved are too much for most people, right now, to resist.
I’m older and aware of just how poisonous that moral emptiness can be, what used to seem like an abstract shortcoming (i.e. “we’re not working on things that matter”) is now more existentially pressing.
I wish there was a club for people like us… You know what is also fun? Balancing your own company on the edge between making money and making a difference.
So you think the core problem behind low-quality jobs, funky hiring practices, and frivolous products is that people want money and don’t have ethical values? What is the source of that then? If it’s simply human nature then what is causing the change you describe in young people? If it’s not, then there must be a deeper problem.
I don’t necessarily disagree with you, but I’m not convinced either. I would love for one of these ennui-laden discussions to yield something even remotely actionable, and just saying “it’s greed” seems like a total dead end to me.
I’ll obviously let Church speak for himself here, but my feeling is basically thus (very depressing read ahead):
The zeitgeist of the time is hopelessness, distraction, and greed. We are all, at some level or another, greedily trying to buy distractions from the hopeless nature of things.
At the risk of repeating platitudes, some observations: We use the iPhone, but pay no attention to the e-waste dumps and factory suicides. We inhabit Facebook, but try to ignore the massive surveillance it is predicated on. We praise Uber et al., but tacitly ignore the continued abuse of municipal laws. We espouse diversity, but only when it is applied to viewpoints we like. We enjoy fictional violence, but isolate ourselves ever-further from actual displays of force.
Simply put, we are all drifting further from the reality and impacts of our lifestyle decisions, and at least for people trained in systems thinking (e.g., developers) there is the undeniable slow creeping sensation on the back of the neck that things aren’t quite right, that the sums and figures don’t quite come out correct, and that sometime soon the music is going to stop and we’re all fucked. Those of us not occupied with academic pursuits and the makework of refurbishing the Javascript ecosystem, that is.
Knowing this, and knowing just how ruthless and brutal the system is about optimizing away things (read: people) that are extraneous to a particular economic objective, we make the rational decision and start trying to grab as much money as we can before we can’t anymore.
The obvious posing and fake advertising and frivolity of social media makes it even easier to see our fellow man as marks, makes it easier to justify extracting maximum revenue from them. Whether it’s the dolt retweeting everything Trump says, or the VC who cuts a check on anything which mentions ML or IoT, or just a dumb public official who needs to show their constituents that they are “investing in the future” is of no practical matter: there are just the people who have resources, the people that can become resources, and the people who know how to harvest resources. As developers, we think ourselves in the third category though we’re basically just the second.
I used to think that there was enough room on the bus for everyone, that we could elevate and enlighten and teach and move people (as a whole) forward. I’m increasingly of the opinion that there are simply the people driving the bus, the people under the bus, and extremely limited seating for folks that are neither.
There’s no immediate solution either, right?
Do you believe in your community, in the common good? Both candidates in the current election have basically disowned the other’s would-be voters. The legislature will merrily play chicken with budgets in order to score political points. Even the very notion of belonging to one’s country or civilization is under attack by various philosophies popular in educated circles!
How, how are we to believe in the greatness of the people and the goodness of man when we are constantly reminded of this? How are we to devote our efforts to holding up a tottering dam holding back chaos when that act is criticized by some, used as a cheap revenue source by others, and actively hindered by the rest of folks too stupid not to play with matches next to the piers?
We have an additional mokita: more than ever we individually are both more aware of our mortality and limitations and at the same time further isolated from them. War and famine are things that we see on Twitter and the news but which never really affect us. We can learn more than ever before about any given subject, and yet we are continually overshadowed by stories and articles about people who are either best in their field or just exceptionally good at advertising.
In such a situation, what is the value of your life? What is its purpose? What makes it special or desirable? Why bother? At a large enough scale and with good enough coverage, we’re all just Brownian noise in the lifestream–and that’s where we are today. There’s no point in being the best person in town at doing foo, because we all read about foo on the ‘net and know how far we’d have to go.
So, instead, maybe we can get enough money to paper over that existential void. Maybe we can buy enough things or influence to secure a place in history. Maybe this time we’ll pull it off…maybe.
Knowing this, and knowing just how ruthless and brutal the system is about optimizing away things (read: people) that are extraneous to a particular economic objective, we make the rational decision and start trying to grab as much money as we can before we can’t anymore.
This is right on the nose.
Before 1980, when many of our parents were growing up, it was socially unacceptable to say that you just wanted to make a lot of money or make connections so you can get a job where you don’t really have to work. Donald Trump became the zeitgeist of a materialistic, crass era (the 1980s) for a reason: it just wasn’t acceptable to be like him. Avarice and egotism still existed (see Mad Men) but were considered crass and pathological. And consider that the office politics of Mad Men, although nasty in their time because advertising had a similar flavor to entry-level investment banking today, aren’t all that bad by the modern standard. In the ‘70s, staying till 6:30 meant you were a hustler. Work was a more civilized game, and people played for the long term. Greed and ego have always been factors, but people were more intelligent in going about their objectives and there was more of a long-term mentality which precluded a lot of the worst plays.
The era of the lifelong technologist seems to be drawing to a close, except in academia and in some government agencies. In the private sector, this is definitely a game where if you’re not a founder or an “angel investor” (read: rich) by age 40, people will ask why. These days, being a software engineer means contending with micromanagement (Ministry of Agile) that is designed for children, that “we” have had to accept because the conscientious objectors have all been fired and replaced with compliant, often belligerently incompetent, unprepared neophytes out of college or “boot camps” (which are, mostly, fly-by-night trade schools with no quality control). The pay is decent. Not amazing, but decent. It’s one of the few genuinely middle class jobs left. Still, the low status of the job means that as soon as the market softens, programmers are going to take a hard fall while the management will be just fine: VC associates who don’t make partner will circulate elsewhere in private equity, founders will end up in $250k jobs at hedge funds, and startup executives will ride their coattails. Meanwhile, a generation of programmers will be left in the cold with nothing to say for itself.
“If it’s simply human nature then what is causing the change you describe in young people? If it’s not, then there must be a deeper problem.”
I threw a savant brain at the angle for years trying to figure it out. I eventually did come up with a model that explains it. Tried to find every work-around I could to change it. Ended up really depressed when I couldn’t find one permutation likely to work without straight-up revolution. It got to the point where I could predict what would happen with national events or elections at an abstract level. I originally thought it was emergent behavior, but it’s increasingly clear that the worst parts of what’s happening are by design, plus emergent behavior within that design. There are occasional outliers, but the system the elites put in place is pretty air-tight.
It’s too complex to explain in one comment but I’ll give you a few key points.
It starts with capitalism at the banking and industry levels. They realized certain practices would get them rich. One of those was screwing workers. The vast majority of people starting businesses are selfish enough to want to be very rich. They managed to use money given to Congress, and media presentation to voters, to make sure the vast majority of wealth produced by the majority of workers went to a tiny few with similar interests. They started forming monopolies using their vast capital. When that was busted up, they started forming oligopolies w/ cartel agreements to prevent competition. They get outrageous CEO compensation since the boards that keep that in check often have other CEOs, founders, etc. They’re all on each others' boards (“interlocking boards”) with the emergent behavior of “you get me rich, I’ll get you rich.” They also pushed for patent and copyright law to be strong, to allow both selective monopolies and to prevent or financially drain competition. We’re already a plutocracy at this point, where one cartel controls the whole financial system and others control most markets key for survival. Worse, the incentives say reduce cost/quality/safety while charging more.
Congress will save the day with laws “for the People,” right? Congress and the Executive branch are corrupt, so no. First, they need tons of money to get elected. A Presidency costs around $200 million these days; a Congressional seat at least millions, probably tens of millions. That means the only people who can run are rich people or those backed by them. Also, the existing system lets current legislators filter out anyone less corrupt who bypasses that problem. The voters themselves are extremely superficial: character and voting/business history are mostly ignored in favor of whatever a candidate looks like, does in personal time, says during the campaign, etc. What professional liars say >= what they do or did. (???) Once in Congress, they spend most of their time preparing for the next election. They pay back contributors, mostly elites, with laws that benefit them at the people’s expense. Most of Congress has portfolios of stock in the dirtiest companies. They also ensure votes by sending massive amounts of pork to their districts, which is why they waste so much on “Defense” spending building shit we don’t need and broken welfare systems. Those are tied to millions of votes directly impacted by changes. It all adds up to preserve the status quo.
Media will inform us so we change votes and overthrow the system, right? The media is a bunch of for-profit corporations whose business model is making money off ads by getting people to look at the screen as long as possible. They are not there to inform! It’s a business! Getting people’s attention meant they covered key stories, had important people on the air, etc. They are also run by elites who like the system as it is, since they’re rich and powerful. Changes would impact them. So, they have always practiced self-censorship, where they collectively avoid topics or avenues of investigation that would lead to radical change, while focusing on issues that appeal to each outlet’s demographic with a mix of emotional responses. They’ll break the veil of censorship if something hits critical mass where they can’t be caught ignoring it. At that point, they either present it in a non-actionable way focusing on blame instead of a solid response, or start covering shock stories that distract people. This has already worked way too well, to the point that the corporate media is the single greatest threat to American democracy in existence. Fox improved the model by basically turning up the bullshit to extreme levels, with a format that focused on people fighting with each other, use of fake experts, tying the message more to the viewer, and getting more viewer involvement that doesn’t really do anything but feels like it does. Record-breaking profits and dominance on the right-leaning side followed. Others copy their techniques now. You often can’t talk about a key issue for over a minute before the host interrupts you or Americans tune out. So how can you change anything again? Go to another of those tens of thousands of stations that are all owned by the same 20+ for-profit corporations with interlocking boards? Good luck.
Our education system will help people figure it all out, right? I still haven’t read Gatto’s Underground History of Education to see if it’s legit or bullshit, but the abstract I saw a while back seems true. It was elites like Rockefeller who started the system as industrialization kicked in, where they needed tons of workers smart enough to do the shit jobs they were creating that made the elites rich. As Carlin said, “smart enough to operate the machines but not smart enough to” know how much they were being screwed. The education system dictated what people would learn at what pace, with promises of them making millions over time if they followed it, plus severe punishments for those who didn’t. The process itself combined rote memorization of material from authority figures, rigid routines, punishment of dissent, and simple metrics to assess skill. Smart people had to teach themselves shit constantly outside of school, plus fight with educators over being taught ineffective methods “because it’s required by the bosses.” Basically like working in a factory or big corporation. It’s not education, people: it’s conditioning humans like dogs, with the bare minimum in education. No wonder the elites sent their own children to expensive private schools, got them tutors, and brought them along to see how they did business at the executive level. That class gets educated, while I’m still fighting to learn some aspects of what the C-level people do.
There’s also the surveillance/police state, what the U.S. military actually does, systematic suppression of dissent from voting rights to business, and so on. However, the above combined with human nature are all that’s necessary for a successful plutocracy. Human nature is herd-minded, terrible at long-term risks, focused on the here and now presented to our faces, prefers easy battles to hard ones, wants to maximize individual gain in a local context, and has trouble being vigilant. Education and media combine to create a mental maze for the average person, where they go in the directions that are safe for the system. Some will fight it in ineffective ways while others will defend it thinking it benefits them. Some will make themselves and elite investors rich improving something, most will expand the profits of incumbent elites, some break away from the system without achieving critical mass to affect it, and the rest fall through the cracks. Capital almost entirely in the rich’s hands, combined with a corrupt, capitalist Congress reinforcing the system in law, means the battle will always be uphill. The media continues to suppress key issues, like how most problems are due to Congressional bribes, but will endlessly repeat or generate frivolous stories that maximize their revenue. Americans stay fighting with each other in the maze instead of the elites that built it. Most successful reality-distortion field I’ve ever seen.
So, there it is in a nutshell. I have no hope of fixing it. The system is too robust after the decades they spent working on it. There’s a small chance that a solution can happen involving Internet media, if both the bait messages and the presented solutions are ultra-simple. The problem and solution have to be simple, with candidates and legislation ready to go. People have to be willing to vote incumbents out of office. The source(s) can’t make one mistake in accuracy, plus need several outlets tuned to different demographics all pushing the same thing from different perspectives. There’s a chance that several classes of problems could be knocked out that way. Most aren’t using this strategy, though. The few that are use it on the wrong messages, which just add to the lower classes fighting each other.
Your comment is just a bunch of rhetoric without much content.
You’ll have a hard time convincing anyone that companies don’t care about the productivity of their engineers. Without productive engineers, they aren’t making money. Why would they go against their own self-interest?
And of course there’s a cost to the company if they hire a lemon. Have you ever worked on a team with a bad developer? A single bad developer ruins the productivity of the entire team they are on. That’s why companies are so careful about avoiding bad hires. A single bad hire is equivalent to missing out on multiple good developers.
A single bad developer ruins the productivity of the entire team they are on.
What would you call the manager that allows this to happen? Inexperienced? Ludicrously incompetent? Is this just the assumption, that no manager ever was able to stop someone from building a project’s foundations out of balsa wood or nitroglycerine?
It takes more than a single bad individual contributor to ruin a whole team. Management is supposed to be there to evaluate progress and to identify and address problems. Yes, management does not always succeed.
Management is usually the cause of the problem: they are there to manage and lead, but many are not competent at either task; hence the Peter Principle.
I had this recently happen to me. We had an individual who would continuously argue with others. Management repeatedly tried coaching him, hoping he would improve. It wound up taking two months for management to decide it would be better to let the individual go. After he was gone, everyone immediately felt much more productive.
There’s a steady stream of articles and comments about how broken hiring is, how poorly good developers are treated, discrimination against older ones to get less-skilled people at lower cost, how most projects fail due to poor management, how execs look at IT as a cost center instead of a strategic enabler… all this stuff indicates they don’t give a shit in practice. Most also will punish attempts at reform.
So, his comment is pretty consistent with what I read from insiders instead of rhetorical nonsense.
Many specific problems with enough detail for HR or senior executives to take action on. His comment would be useless if he just said the two things you just said. Strawmen are easier to knock down, though.
“If all speech is emotional, then we have to ask what political reasons here are for labeling some speech as emotional and some speech as unemotional, and what political purposes this labeling has.”
It looks like you just redefined the word, then poked holes in people’s points that were made using a different meaning of the word? What political reasons are there for someone doing that?
shrug
All the points about speech being emotional are pretty legitimate from the viewpoint of psychology or linguistics. You want to play word games, so let’s identify the key ambiguity: “emotional” has at least 2 meanings. One of these meanings, “of or relating to a person’s emotions”, is a relatively precise and neutral term and it is in this sense that all speech is “emotional”. The article basically addresses that another meaning, “(of a person) having feelings that are easily excited and openly displayed”, is often used in a pejorative sense.
All questions of majorities, minorities, demographics, power, and bias aside, I’d take this argument at face value for arguing something like, “openly displayed emotion in communication does not always imply irrationality or undue partiality; restrained or inscrutable emotion in communication does not imply rationality or impartiality.”
edit: to directly address your question, something something something-something feminism something something-something patriarchy something-something bias-masquerading-as-impartiality.
Pretty much matches my thoughts on why people should learn C, even if they don’t use it. And by learn, I mean complete at least one non-trivial project, not just a few exercises.
In many languages, a loop that concatenates (sums) a sequence of integers looks a lot like one that concatenates a sequence of strings. The run time performance is not similar. This is obvious in C.
I have mixed thoughts on rust. As a practical systems language, sure, great. No cost abstraction, ok, sure, great. But for understanding how computers work? To compile by hand? Less sure about that.
There’s a place for portable assembler that’s a step above worrying about whether the destination is the left or right operand and what goes in the delay slot and omg so many trees. And whether it’s arm or Intel or power, all computer architectures work in similar fashion, so it makes sense to abstract that. But we shouldn’t abstract away how they work into something different.
I have mixed thoughts on rust. As a practical systems language, sure, great. No cost abstraction, ok, sure, great. But for understanding how computers work? To compile by hand? Less sure about that.
Rust helped me understand how computers work much better than C did (I learned both C and C++ at uni and had to implement some larger projects in them). It just makes explicit a lot of the semantics that you need to know implicitly in C. It keeps strings as painful as they are :). The thing that’s really lacking - and I agree on that fully - is any kind of structured material covering all those details. If you really want to mess around with a computer on a low level - and not write a tutorial on how to mess around with a computer on a low level with your chosen language - C is still the best choice and will remain so.
Most arguments about C seem, though, at a closer look, to end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#. I agree that some C doesn’t hurt, but I don’t see how it is as necessary as people make it out to be.
I really agree with the idea that Rust makes explicit a long catalog of things that one can do in C, but should not*.
Most arguments about C seem, though, at a closer look, end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#.
The missed point here is that important parts of the Java or C# toolchains or runtimes are written in C or C++; the argument rests on C (for the time being still “the portable assembly language”) being foundational, not just ubiquitous. C is almost always present, right above the bottom of the (technological) stack.
*Unless interacting with hardware / doing things that are inherently unsafe.
On further reflection, I think I missed @tedu’s point, which is something more like, “C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C.”
Which would probably be true in the absence of aggressive UB optimizations. But, if horses were courses…
“C is special because it’s pretty easy to mentally translate C into assembly or disassembly into C if we assume it was compiled with -O0”
Here’s an example of something I ran into the other day that’s easy to do in C but unnecessarily hard to do in Rust: adding parent pointers to a tree data structure. In C I’d just add a field and update it everywhere. In Rust I’d have to switch all my existing left/right pointers to add Rc<T>.
I’m willing to buy that a binary tree implementation in Rust encodes properties it would be desirable to encode in the C version as well. But once you start with a correct binary tree Rust doesn’t seem to prevent any errors I’d be likely to make when adding a parent pointer, for all the pain it puts me through. There’s a lot of value in borrow-checking, but I think you’re understating the cost, the level of extra bondage and discipline involved.
Most arguments about C seem, though, at a closer look, to end up as “it’s there, it’s everywhere”. The same argument can be made about Java or C#. I agree that some C doesn’t hurt, but I don’t see how it is as necessary as people make it out to be.
It is everywhere though: what do most people think runs microcontroller/AVR/PIC/etc. chips? Generally it is a bunch of bodged C and assembly.
The argument here is a bit different, you can avoid java (I have zero interaction with it), but you can’t realistically avoid dealing with C.
The argument here is a bit different, you can avoid java (I have zero interaction with it), but you can’t realistically avoid dealing with C.
I can totally do that. Lots of new high-performance software (Kafka, Elasticsearch, Hadoop and similar) is written in Java, so if you are in that space, you can realistically avoid dealing with C. You will certainly avoid C if you do anything in the web space.
You will certainly run C software, but that doesn’t mean you need to know it.
Ah I see the disconnect, you’re using avoid in reference to having to program in. I’m using it in reference to use. In which case we’re both correct but talking past each other in meaning.
Rust is pretty great, no arguments there, but I think at least one of the author’s points is that C as a language has remained INCREDIBLY stable over decades.
Can you honestly say that immersing students in Rust will still have the same level of applicability 10 years from now?
Strong disagreement to part of your point. C in the 90s was very different to C in the early 00’s, which is very different to C now. The standards may not have changed that much (though the C89 and C99 revisions did change things), but the way compilers work has massively changed, e.g. in aggressive UB handling, meaning that the effective meaning of C code has completely changed, and continues to be in wild flux. C as a language is amazingly unstable, in part due to the language specifications being amazingly underspecified, and in part due to so many common things being UB.
You’re quite right. I learned C back in the K&R days, and even the transition to ANSI was readily noticeable, if not earth moving.
My point is that even having learned K&R, the amount of time it takes for me to come up to speed is relatively trivial. I would argue that this is nothing compared to say even the rate of change in idiomatic usage of the Java language over time. I learned Java back before generics and autoboxing, to say nothing of more recent Java 8 enhancements, and the Java landscape is a VERY different place now than it was when I ‘lived’ there.
I would disagree that the time to come up to speed is trivial in comparison - you are comparing apples and oranges.
The time to superficially come up to speed is trivial in comparison, but the time to actually learn how to not write heisenbugs into your code that you did not used to have to worry about - well, unless you are extensively fuzzing and characterising your code, you don’t actually know that you even have come up to speed.
[Comment removed by author]
They probably don’t work the same, unless you have extensively characterised and fuzzed them, or unless you have done binary diffs of the executables and they are identical - and the latter I would not believe, as there have been other differences in what output compilers produce over the years that should show up.
That is, my meaning is not that the changes will result in code that produced tetris now producing space invaders. They will result in code that produced tetris now producing tetris with additional weird heisenbugs that can be used for eg arbitrary code execution.
Edited to add: Also, the part about the C standard being underspecified is what means that C programs do not have an inherent meaning - their meaning differs massively depending on which compiler, which architecture etc. For example, how some cases of bit shifts are handled differs completely between x86 and ARM, and has historically differed between compilers on x86.
Most people’s continuing use of c89 is either cargo culting or the need to support very old compilers. GCC and Clang have both had adequate support of c11 for years now. If you’re working with an old microcontroller you might be stuck having to use their patched gcc 3.x (a very bad time, speaking from experience) but aside from this sort of situation using c99 or c11 is perfectly reasonable.
Most people are indeed in an environment where they can use either c99 or c11, but since c11 is not quite a superset of c99, c89 seems to have retained a certain role in C programmers' mental models as the core of C. The real working core today could in practice be a bit bigger, essentially c89 plus those features that were not in c89, but are in both of c99 and c11. But that starts getting more complex to think about! So if you want your stuff to compile on both c99 and c11 compilers, just sticking to c89 is one solution, and probably the simplest one if you already knew c89.
I personally wrote mostly c99 in the early 2000s, but one of the specific features I used most, variable-length arrays [1], was taken back out of c11! (Well, demoted to an optional feature.)
[1] Perhaps better called runtime-length arrays. They aren’t variable in the sense of resizable, just with size not specified at compile-time; usually it’s known instead at function-entry time.
What does “fully implement” mean? C99 without appendices? Yes. C99 with appendices? No.
Regardless, clang is your best bet for something beyond c89 that works.
That wasn’t my argument. I was opposing the argument that C is the only language usable for that.
I’m quite aware that Rust is currently too young to teach it people as a usable skill on the job for the future.
I agree with flaviusb though, idiomatic C has had incredible changes over the last few years.
That’s kinda what I’ve been designing. At a high level but without the low-level chops (so far) to make it real. So since I can’t do, I teach :)
My Basic-like language Mu tries to follow DJB’s dictum to first make it safe before worrying about making it fast. Instead of pointer arithmetic I treat array and struct operations as distinct operations, which lets them be bounds-checked at source. Arrays always carry their lengths. Allocations always carry a refcount, so use-after-free is impossible. Reclaiming memory always clears it. You can’t ever convert a non-address to an address. All of these have overhead, particularly since everything’s interpreted so far.
While it’s safer than C it’s also less expressive. A function is a list of labels or statements. Each statement can have only one operation. You don’t have to worry about where the destination goes, though, because there’s an arrow:
x:num <- add y:num, z:num
You don’t have to mess with push or pop. Functions can be called with the same syntax as primitive operations.
It also supports tagged unions or sum types, generics, function overloading, literate programming (labels make great places to insert code), delimited continuations (stack operations are a great fit for assembly-like syntax). All these high-level features turn out not to require infix syntax or recursive expressions.
I’m working slowly to make it real, but I have slightly different priorities, so if someone wants to focus on the programming language angle here these ideas are freely available to steal.
In many languages, a loop that concatenates (sums) a sequence of integers looks a lot like one that concatenates a sequence of strings. The run time performance is not similar. This is obvious in C.
I’d expect ropes to be more popular in high-level languages, to be frank. Java’s StringBuilder is a painful reminder that some supposedly “high-level” language designers still think in C.
As far as I know “str += postfix” in a loop is going to be O(n^2) in c#, ruby, Python, JavaScript, and lua. What languages are you thinking of?
V8 uses ropes to represent javascript strings almost all the time (IIRC there are exceptions for e.g. very large strings).
I’m not thinking of any specific language, I’m thinking of a specific data structure: https://en.wikipedia.org/wiki/Rope_(data_structure)
Oh, you mean people should use ropes? In my experience, c# devs use StringBuilder and everyone else uses some spelling of Array.join(). I knew about, but still didn’t use, ropes when I wrote c++ code because what I really wanted was ostringstream.
In C++, using std::ostringstream is understandable, but how come high-level language pretenders like C# and Java force you to use StringBuilders?
Because for most development purposes the interface used when building up strings is more important than the performance. And StringBuilders are a common and well understood interface for building up strings incrementally.
What I’m saying is that StringBuilder is too low-level an interface. I want to be able to concatenate lots of strings normally, and let the implementation take care of doing it efficiently.
In theory you’re right, because StringBuilder is less widely known, less intuitive, harder to use, more verbose, etc.
In practice, high level abstractions with “magic” optimizations can cause problems if you’re really depending on the optimizations to work. E.g. you switch implementations, or make an innocuous change that causes the optimizer to bail out. It turns out even some high level languages aren’t really committed to their ideology.
I don’t want “magic optimizations” - far from it. I want a language where the cost of each operation is a part of the language specification, so that I don’t need to rely on implementation specifics to know that my programs are efficient. For instance, if the language specification says “string concatenation is O(log n)”, then naïve implementations of strings are automatically non-conforming, so I can assume it won’t happen.
The problem is that most strings are small – in fact, most strings would fit in a char* – and not concatenated all that often. In the common case, ropes are an incredible amount of overhead for a very small, common data structure.
Then you can use a hybrid implementation, where small enough strings are implemented as arrays of bytes storing the characters contiguously, and large strings are implemented as ropes. In most high-level languages, this doesn’t impose any extra overhead, since every object already has at least one word of metadata for storing dynamic type info and GC flags.
A portion of this story does cover the “culture” surrounding dev tools and setups but most of it makes me want to sarcastically suggest that lobste.rs add an editorial category of tags and tag this author-mostly-clueless.
Perhaps this is just the disdain I feel for having stewed in far too many programming subreddits, Orange Website, and of course lobste.rs, then watching a reporter (a “civilian”, a “normal person”) discover Programmers Having Opinions, especially About Aesthetics.
Opinions about aesthetics, you say?
Is anyone else thinking these headless modes in Chrome and Firefox are going to be targeted by malware to do more realistic browsing/spamming from compromised machines without having to download anything extra?
The problem is not the ‘browser’ but your install footprint and how you economically make your traffic come from residential IPs.
The problem with a headless browser is that your maliciousness is limited to what you can do in a regular browser anyway, otherwise you need a pile of C++ developers that can wrap themselves around a browser codebase to fork your own version.
MethBrowser, the name the creators used for the marketing ‘splurge’ called MethBot, used a much saner strategy where they wrote enough node.js (mocking window, document, etc) to make JavaScript scripts think they were running in a web browser (including having Flash running). This meant development was 100x easier and faster, plus they could mutate content as it loaded in too.
Most amusing to me was that, to save bandwidth, instead of downloading the .mp4 they used a locally stored Lego Technic video showing some automated crank thing. Ironic, I felt, and side-creasing to watch.
The original author (a contractor, judging by the git history, which is an interesting read in itself) is a bright guy, and what he put together was simply stunning.
Their downfall was that the sandboxing sucks and you can break out using constructor.
Depends on what you define as maliciousness. Some headless browsers can be scripted with JS; that might be all you need to systematically misuse a web property.
It is more the JS that runs in the browser than what you are using to drive it which is the limiting factor I am referring to. For example, in a vanilla headless browser, from JavaScript you cannot ‘massage’ what location.ancestorOrigins says to other scripts running on the page.
It looks like headless mode will be a command-line argument, so… for malware to run a headless Chrome or Firefox, it may already need to be able to start a new process. If the malware can already start a new process, not needing to download anything extra doesn’t seem like a huge ease-of-compromise improvement, but I am literally Just A Random Non-Security Developer.