On a cursory look this smells pretty crackpotty, but I’ve learned to mistrust that particular intuition. Hopefully someone who knows more about PLT can comment. :)
If you enroll in the beta program, you’ll get the OTA update within seconds. :)
seems like there’s no going back from the beta program without losing data though… :/
From my reading it looks like if you unenroll while your device is running a beta you’ll have to wipe, but not otherwise.
You may opt out of the programme at any time to return to the stable, public version of Android. Note: If you opt out when your device is running a beta version of Android, all user data on the device will be wiped.
To clarify - if you enroll in the beta program, it will push the stable version of 7.0. You can install the update then immediately unenroll, which won’t wipe your device since 7.0 is the stable build. Worked on my Nexus 6p.
Did exactly this last night. Worked flawlessly. Scare text is pretty scary though.
This reminds me of gimd*, something I was enthusiastic about a few years ago.
* gimd (pronounced gim-dee) provides a small distributed database layered on top of the powerful Git version control system.
I’m trying to think of protocols or standards that have been implemented by many parties, even competitors, that have been regularly updated and improved over years or decades, and I’m drawing a blank. I’m curious to find strategies that have worked. It feels like a lot of “successful” standards are immediately trapped into stagnation by their popularity. Even standards designed with extensibility in mind inevitably want some vital change that’s backwards incompatible, and we end up with the next version stuffing everything into a comment (early inline js), tolerance of invalid data (unknown html tags to add stylesheet links), special constructions (docstrings, js pragmas), etc. The alternative seems to be forming a committee and drafting an RFC or other standard that is considered fast-moving if a new version comes out once per decade (and committee members don’t destroy it through use as a stalking horse or proxy war).
HTML has been a nightmare. IP stalled for decades at IPv4. DNS stalled for longer. CSS had a rough start and suffers fits and starts, but the “core and modules” system of CSS3 (and “CSS4”) has done well over the last decade. Email has deliberate extensibility in headers and protocols… but it took about 15y after basic client-server encryption was an obvious necessity for it to become ubiquitous, end-to-end encryption will probably never happen, and IMAP makes me miss the 90s browser wars. Jabber replaced a mess of proprietary protocols for a few years before fracturing back into walled gardens.
I feel like I’m missing something obvious, or am totally ignorant of some industrial CNC standard or avionics protocol or something. Can someone point out a good example of a long-lived protocol with regular improvement? Or maybe my scale is wrong - it takes 5+ decades to replace hardware standards (adding grounding pins to the NEMA electrical outlet standard); maybe I should be thrilled it only takes 1 or 2 in software.
I think the problem is that you’re thinking about this the wrong way, and that a lot of other people are too.
The fundamental question is what is the benefit of a protocol or standard that constantly improves?
I think an answer to this is “not a hell of a lot”.
IPv4 has worked and worked well enough for decades. IPv6 has failed in a lot of ways because people kept piling on shit to make it spiffier and more academic and awesome, and in so doing kept it from ever being easy to roll out or quite finished. Likewise, something isn’t “stalled” if it is continuing to deliver value.
HTML isn’t that gross, especially once CSS came out. It’s as good as it ever was for displaying documents. It’s a reasonable approximation of a 2D scenegraph with automatic layout capabilities. Certain implementations were terrible, but that’s not the fault of HTML but instead the vendors.
The main takeaway here is that both worked and worked well enough, and it was worth more to freeze them than to keep updating them. Protocols are centered around conversations, and if the subject matter of a conversation doesn’t change (e.g., how to send and receive byte buffers with KV metadata, as in HTTP) there is no reason to continually add on things that are outside of that.
Thank you, I appreciate this response. To unpack what I meant by “stalled” in the case of CSS: at points, obvious “next features” that people wanted went unaddressed for years (flexbox addressed most of the missing layout/grid features), and support was painfully uneven, especially for the first ten years or so.
Ah, thank you for your clarification!
USB is a reasonably good example of a standard that is well thought out and long-lived, and yet manages to productively evolve, often with impressive backwards compatibility.
The set of OS semantics we broadly call Unix has lasted a long time.
The versioning schtick of TeX and Metafont is aimed at answering this question; whether it can be said to be a big success is a different question, though.
IP stalled for decades at IPv4
I think that’s more “if it’s not broke…”. TCP has had extensions, options, and ongoing development. IPv6 was standardised long in advance of its actual need (maybe that’s the problem…)
Email has deliberate extensibility in headers and protocols…
They were retrofitted in a back compatible way. RFC821 doesn’t know about EHLO, RFC822 doesn’t know about MIME or charsets in headers.
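The retrofit really is that mechanical. Here is a minimal sketch of the EHLO-then-HELO fallback a post-RFC-821 client performs; the reply codes are canned and the hostnames hypothetical, so this illustrates the negotiation rather than implementing a real SMTP client:

```python
# Sketch of the backwards-compatible ESMTP greeting: the client opens with
# EHLO; a pre-ESMTP (RFC 821) server rejects the unknown verb with a 500,
# and the client silently falls back to plain HELO.
def greet(reply_to, send):
    send("EHLO client.example")
    code = reply_to("EHLO")
    if code == 500:  # old server: EHLO is an unknown command
        send("HELO client.example")
        code = reply_to("HELO")
    return code

# Hypothetical RFC-821-era server: knows HELO, chokes on EHLO.
old_server = {"EHLO": 500, "HELO": 250}
sent = []
print(greet(old_server.get, sent.append))  # falls back, ends with 250
print(sent)
```

An ESMTP-aware server would answer the EHLO with 250 directly, so the same client works against both generations of server - which is the whole trick.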
IMAP makes me miss the 90s browser wars.
Interesting. The protocol is opinionated, but I’ve not followed recent developments - what’s the problem here?
[…email…] end-to-end encryption will probably never happen
S/MIME and PGP have been standardised for a long time. I think that’s not a protocol failure but an incentive/commercial/UX failure. (One can argue that the protocol forces poor UX, which is perhaps fair but I’m not sure I understand that well enough).
On balance, I’d say the RFC approach has worked well. I don’t know how healthy the current IETF RFC system is, but in the past lots of people put the effort in to build interoperable systems which could run at “internet scale”.
I actually think the problem is that since google search demonstrated you can scale a “single website” to “internet scale”, the assumption that you need to implement scalable, interoperable protocols to do big things on the internet was broken, perhaps reducing the incentive and importance of standardisation efforts.
We should just stick with Gopher
x86 instruction set
C and C++ languages
The j2 is a nommu processor because sh2 (the processor in the Sega Saturn game console) was, and the last sh2 patent expired in October 2014. The sh4 processor (dreamcast) has an mmu, but the last sh4 patents don’t expire until 2016.
…I guess I’ll wait then? ;)
If this is the “BizX LLC” that bought them, I wouldn’t hold out much hope of any meaningful future.
I particularly enjoyed the comments.
I’m by no means a supporter of “Rummy,” but while I got a chuckle out of them, it wasn’t the right forum.
This is really good. The only open source GSM stack I’m aware of is woefully behind, and the mobile baseband industry is shaping up to be an oligopoly at best: ISTR hearing that Apple wrote their own baseband firmware, and Qualcomm is pretty much the other player.
I now use Adobe Source Code Pro after many years of Lucida Console then Consolas.
Same here, though it seems to have rendering issues in Visual Studio so I use Consolas at work.
Still cranking on The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes.
More Info on this unfalsifiable, but nonetheless fascinating, theory.
I hate to be the guy sort of putting down an article with the title “Be Kind”. So right here at the top, yes, be kind! That said, this person appears to have swung from one extreme to the other. Neither one being healthy.
I was just callously indifferent
Yep, being callously indifferent isn’t healthy or normal – even work “families” require some level of personal investment and understanding.
But believing deeply that I am responsible for how I make others feel…
Now he went to the other extreme: he is putting forth that he is responsible for (in control of) how other people feel. That is, IMHO, worse than being callously indifferent. At least when callously indifferent you view the other person as an equal. When you believe you can control someone’s emotional state, you are stripping that person of agency, dehumanizing them.
There is a middle ground between being “indifferent” to people’s emotions and believing you are in control of them. Maybe he is someone who isn’t great at working in the “gray area”, so this is a useful model for him.
being callously indifferent isn’t healthy or normal – even work “families” require some level of personal investment and understanding.
You’d be amazed how long it takes some people to learn this. Many people never do; I would put it down to bad people management, which is absolutely endemic in the industry.
Now he went to the other extreme, he is putting forth that he is responsible (in control of) how other people feel.
“Responsible for” and “in control of” are not synonymous. I’m responsible for my team’s productivity, but I don’t directly control how they do their jobs. I mentor, I coach, I help make decisions, I try to identify the reasons for success and failure, but I’m absolutely reliant on the team to manage their own day-to-day tasks.
“Responsible for” and “in control of” are not synonymous.
One is required for the other. Without control, being responsible for something is nonsensical. You can’t control the weather, hence I can’t get mad at you if it happens to be too hot out today! You can try to be a good person, but you can’t control how others will feel, and to think you can diminishes them and aggrandizes you.
I’m responsible for my team’s productivity, but I don’t directly control how they do their jobs.
Sure you do. You choose not to use that control to micro-manage, but MANY managers DO love micro-managing, having direct control over how employees do their job. Additionally, you have control over WHO is doing that job – you can FIRE people who can’t be “controlled”. If an employee you had opened up every file and just typed “AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA” in it all day – you might not be able to ever force that person to stop – but you WOULD fire them and replace them with someone you could “control”.
If you couldn’t fire your “I only type A’s” employee, and you were fired for his ineptitude, then you would be responsible without control, which would be terribly unfair.
I think the author probably meant something closer to “I am responsible for both the intended and the unintended impact of my behavior on the feelings and well-being of others”.
Maybe he did, but how does that make it better? I hope it was just a poorly written idea and he isn’t actually so arrogant or misguided as to believe he controls others’ feelings.
Do you agree that to be responsible and accountable for something you have to be in control of it? Do you agree that me holding you accountable for things you have no control over would be both illogical and immoral? This idea of “being responsible for other people’s feelings” is toxic on both sides of the coin.
From the POV of the person who believes they can control others’ “feelings”: believing you can control others’ emotional response is dehumanizing; it strips people of agency. Imagine for a moment if you actually COULD control other people’s feelings rather than just your actions: imagine the control you would wield over them, and in turn the level of responsibility that would go with it (great power, great responsibility, etc.). Luckily, it is profoundly untrue: we decide how we process input and respond to events; we control our own emotions.
From the POV of the person who believes they can’t control their own “feelings”: believing you can’t control your own emotional response is exceptionally dangerous thinking. Stalkers often believe the victim is “responsible” for making them fall in love, and consider stalking a logical response. Domestic abusers often think the victim is “responsible” for making them angry (“Why do you make me hit you?”)… lots of vile things are justified using this logic.
No matter how you slice it, believing Human A is responsible for the feelings of Human B is unhealthy. You control your actions, not how people respond to them, so just try to do the right thing according to the only mind you can know: your own. You are not responsible for (nor in control of) other people’s emotions, and they are not responsible for (nor in control of) yours.
I don’t think what you’re working from represents my beliefs or my proposed interpretation of the author’s beliefs.
I don’t think anybody intends to suggest stripping the agency from others. It’s clearly arrogant at best and insane at worst.
Instead, we’re talking about the operations of “civilization” as a technical concept: self-domestication, really. We are not at liberty to do everything we might wish, because it is common knowledge that doing certain things does cause damage to others. We therefore collectively agree (implicitly, and as children, really; the validity of the mechanism here could perhaps be debated, but its existence cannot) that we will operate under certain rules and boundaries. The existence of these rules and boundaries allows us to feel safer around one another and operate more smoothly. Without them we would be forced to defend ourselves far more overtly, to the great detriment of society.
So Civilization means that we all operate under an implicit compact about what behavior is OK and we all agree to not build defenses against these things. And by “agree”, importantly, I clearly don’t mean that there’s a written contract somewhere. The exact notions of Civilization are vague and shifting and, importantly again, may be different in subgroups as they are from the population as a whole.
For instance, the compact says it’s alright to hug your close friends and family but not strangers. If you try to hug random strangers they may react violently, because they have established and maintained a defense against physical contact from strangers.
Anyway, long tangent aside, you obviously are not responsible for the feelings of others in the sense that you obviously do not have control over them. I agree with you completely here.
But you are responsible for the compact of Civilization and even the tiny variations which may exist in pockets of society like your workplace. If you violate this compact and thus cause harm to those around you—and yes, it was their choice to be open to this harm—then you are responsible for that.
And frankly, as demonstrated in this article, the typical way that Civilization accounts for those who persistently fail the compact is that it kicks them out.
[If this idea is interesting to you or seems copacetic to things that you feel or believe, then I highly recommend the following article by Kevin Simler. I feel I had some grasp for these ideas far before reading it, but the vocabulary and imagery developed really sticks with me in a great way: http://www.meltingasphalt.com/personhood-a-game-for-two-or-more-players/]
A responsibility to the social contract makes a lot more sense than a responsibility for others’ emotions. You control whether you adhere to the social contract, hence you can be responsible for it. Breaking the social contract can cause you to be judged regardless of whether harm was done to anyone. Additionally, as you mentioned, it varies from place to place. I spent around a year and a half working on a tiny team building software in which bugs == dead human beings. His new watchwords might be “Be Kind”; my team’s was “Be Right”. The social contract on that team was all about brutal, unsparing honesty, on both the giving and receiving end.
My point (initially, all the way up there at the top) was that being “callously indifferent” and being “emotionally manipulative” are acts of the same underlying personality quirks. One is just significantly more effective; there is a reason we see so many sociopaths as top-tier CEOs.
Also, the article was worth it just for the pictures, but I didn’t really gain a new appreciation for anything; it was mostly basic social-norms-type stuff. I do like the bit about the only way to make civilized people being with civilized people: the process-oriented approach.
I think you’re essentially correct about the difference between the two kinds of social contracts, but I am unsure I can follow your assertion that one is strictly better than the other. I think it’s an incredibly complex question.
And ultimately, when it’s all said and done, this is what the story is about. A misunderstanding of the social contract led to unintentional damage to coworkers. The author was then asked to normalize.
I have been a manager for years in non-IT positions. The environment of the workplace was my responsibility and I for certain had control over the kind of environment I cultivated.
I am not a manager now (gladly), but I am still responsible for how I make people feel about me. If I am not friendly or approachable, then most people will feel I am not friendly or approachable. I also go out of my way to help co-workers complete tasks that are not my responsibility, so people will feel I am a team player. If people are behind in their own work, they will not feel that I will criticize them, but that I am there to help them out.
I strongly support what this article presented, and passed it on to my non-profit company’s upper management, who have since sent it out to the rest of the management team. It is very important to understand that you have the ability to make people have an emotional response when they see you.
Well, we simply disagree then. I think your sense of control over other people’s emotions is just that: a delusion, and a harmful one at that. That said, I hope you still try to be a good person, friendly and approachable, but not out of some odd idea that you can control how others feel. I have worked in large enough organizations to know that every action you take will likely make one person like you and another person hate you.
friendly or approachable
“Always talking with people, chatting, bullshitting… never working.”
I also go out of my way to help co-workers complete tasks that are not my responsibility.
“That guy props up these useless engineers, the reason they don’t get fired and replaced with decent engineers is that this guy is always propping them up, so everyone else has to do more work” <– this is a REAL problem in many organizations, hidden bad actors.
Just – anything you do is likely to make someone unhappy – that is life. Just be the person YOU think you should be.
I certainly agree with the message, but am fairly appalled by the story. Is lambasting coworkers anonymously a popular thing to do at Facebook? Do people not discuss their problems face-to-face? I can’t imagine working in such an environment.
Is lambasting coworkers anonymously a popular thing to do at Facebook? Do people not discuss their problems face-to-face?
I’ve never worked at Facebook, but I have worked in the industry for over 20 years. Many (most?) people generally try to avoid direct conflicts with their coworkers. Complaining about coworkers to your management is actually a pretty good way to go.
Are you referring to the statements his boss showed him? That sounds pretty tame, tbh, and I’ve worked in fairly nice places. I’m used to nominating a handful of people whose feedback I would like, but who actually gave which feedback is secret. (Although in practice you can often guess because of different writing styles.)
That seems like an industry-standard performance review scheme (note: nobody in the industry has any idea how to do this right; Peter Seibel promised to give a talk on this which I’ve been lucky enough to see a preview of recently; it strongly influenced how we do perf review at my workplace).
A bunch of people (me included) consider anonymized peer reviews an antipattern; you can de-anonymize it, and it’s seen as a mechanism to rant about others without providing actionable feedback. In the OP’s case, the anonymized feedback cycle seems to have helped, but it’s not hard to imagine their past unkind self dismissing those notes as useless.
I usually cc’d the person the feedback was for, which helped me focus on only saying things I felt comfortable delivering to them directly.
Do the additional copies ever get GC’d?
I recognized a lot of those words! This looks fascinating.
And then? :)
Why I love Phoronix: despite running the benchmarks on the same machine, the CPU is different for OS X and Linux.
“We need a table with data. This is data. Put it in the table!”
I wonder if the reported CPU differences come from OS X using CPU scaling and the others not. I think 2.6GHz is the non-turbo speed of the CPU, and 3.1GHz is the highest turbo speed.
It would be extra silly (for the test results) if the OSX system was the only one using cpu scaling properly.
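For what it’s worth, the base (marketing) frequency is usually baked right into the CPU model string an OS reports, while the turbo ceiling is a separate number, so two OSes can honestly “disagree” about the same chip. A small sketch of pulling the base speed out of a model string (the chip and the 3.1GHz turbo value here are hypothetical, not taken from the article):

```python
import re

def advertised_ghz(model_name: str) -> float:
    """Pull the marketing (base) frequency out of a CPU model string."""
    m = re.search(r"@\s*([\d.]+)\s*GHz", model_name)
    if m is None:
        raise ValueError("no frequency found in model string")
    return float(m.group(1))

# Hypothetical chip: the same silicon yields two different "CPU speed" numbers
# depending on whether the OS surfaces the base or the max turbo frequency.
model = "Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz"
base = advertised_ghz(model)  # 2.6, the non-turbo speed
turbo_ghz = 3.1               # highest turbo bin, reported separately
print(base, turbo_ghz)
```

On Linux the two numbers also show up side by side in sysfs (`cpuinfo_max_freq` vs the model string), which is presumably where the benchmark table’s metadata scraper got confused.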
Yeah, I almost never trust Phoronix, their stuff is always so dubious.
I do believe OS X is slower than Linux in benchmarks, Apple isn’t optimizing OS X as a server operating system. There are no surprises there.
Phoronix has gone out of their way to make all their benchmarking easily reproducible. You should check out some of their work:
All of the hardware was the same throughout testing: the reported differences on the automated table above just come down to differences in what the OS reports, such as the difference between the CPU base frequency and turbo frequency, etc
Exactly. So what the hell is the point of the table?
Sorry, I misunderstood your complaint. :)
The thing that surprises me most about this discussion is the fixation with private keys. So we have one bug that lets you read plaintext traffic. Then we have another bug that lets you read a private key, which lets you… read plaintext traffic… if you can intercept it. The second bug is “much worse” than the first?
(But I did start by demanding more clarity, so I can hardly complain when someone draws a fine line between A and B.)
Private key compromises are potentially much, much more serious than any other plaintext leak: they allow you to impersonate the legitimate keyholder.
In the context of TLS, to mount an effective attack, you would need to get past hostname verification, which would require compromising either the server’s DNS or the victim client’s DNS resolution.
No compromise required. DNS is a stateless protocol on a trivially spoofable transport.
I think “trivially” overstates the case significantly. Is this answer substantially incorrect?
Your link is correct as far as I know. I should probably reword what I said to “often trivial”. In any case, it’s not very technically difficult to exploit. A non-local attacker may need to do a little research into your ISP’s configured resolvers, and will have to rely on race conditions and defeat a small amount of entropy, which could take a non-trivial amount of time to exploit successfully.
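To put a rough number on “a small amount of entropy”: a blind off-path spoofer guessing only the 16-bit DNS transaction ID faces these odds (source-port randomization, where deployed, adds roughly another 16 bits). A back-of-the-envelope sketch, not an attack tool:

```python
def hit_probability(n_guesses: int, entropy_bits: int = 16) -> float:
    """Chance that at least one of n blind spoofed replies matches the secret."""
    space = 2 ** entropy_bits
    return 1.0 - (1.0 - 1.0 / space) ** n_guesses

print(hit_probability(1))                       # one shot at a 16-bit ID: ~0.0000153
print(hit_probability(65536))                   # flooding the ID space once: ~0.63
print(hit_probability(65536, entropy_bits=32))  # ID + randomized port: ~0.0000153
```

Each guess has to land inside the race window before the legitimate reply arrives, which is why the ID alone is weak protection but ID plus port randomization pushes a blind attack into “non-trivial amount of time” territory.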
What can or would you do if you could impersonate the legit keyholder?
Let’s say you managed to extract the victim’s email server’s private key using Heartbleed or some other means. Now, combined with the publicly available certificate of the email server, you can impersonate the victim’s email server, as long as you can convince the victim’s computer to connect to your server instead of the real one.
If you’re on a LAN with the victim (e.g. at the same coffee shop), you might be able to spoof their DNS directly. Otherwise, targeted client-side malware is probably the most practical way to mess with someone’s DNS these days.
Once you can get the victim’s browser talking to your server when they think they’re talking to their email server, you stand up a convincingly faked login page, capture their login/password attempts, then, to allay suspicion, move them along to the real email server, either by changing the DNS back, or reverse-proxying their traffic.
That sounds like a lot of work. Why wouldn’t I use something like heartbleed to simply steal passwords from the server? Leave the whole interception mess out of it.
The second bug is “much worse” than the first?
Depends, natch! With the assumption I’m attacking you:
How often do you rotate your private key? Every year when you pay your CA tax? Less often? Be honest…
Are you a good boy who uses ephemeral D-H, or do you use RSA (“forward secrecy-schmecrecy!”)?
Does bug #2 let me hoover up the private key leaving you none the wiser? Or, at least, does it force you to dig through old packet dumps to notice after the fact?
And most important, who am I and who am I attacking - a common criminal whacking at gmail? Or a nation-state actor that doesn’t mind throwing an exploit at a valuable target?
I can hardly complain when someone draws a fine line between A and B.
I want a fine line too. No one ever defines threat models when talking about this stuff and it makes me angry.
Depending on who you’re worrying about and what you’re doing to mitigate risk, stealing the keys can be equivalent to reading plaintext. But under other models stealing the keys is far more valuable than just reading plaintext.
mea culpa: I believe quite strongly that argument cultures aren’t productive and am a great force for creating them if I don’t regularly keep an eye on my work interactions with others.
Arguing is so ingrained as the default mode of communication in tech that we often forget that other ways even exist!
So let’s replace it by an “Implementation Culture” and “Unregulated Doing” with the proviso that “If you make it, you maintain it”.
Hmm. Sounds sort of like Open Source at its best.
Me too. :/
Same and it’s a bummer.