Mo came to the US from Iran in the 1950s for college and ended up teaching high school math and physics in the small Kentucky town where I grew up. And at the time I was growing up, Kentucky – of all places! – was leading the nation in bringing computers to K12 schools and bootstrapping technology programs. By the time I was in fourth grade every school in the state was connected to the Internet and every student had email, which was a first for the country.
I became super interested in computers in second or third grade, and my teacher mentioned me to Mr. Shams, who was one of the handful of people in the school district who actually understood them. Pretty soon I was staying after school while he taught me QBasic. Once I learned how to make the computer play music and draw a star field, I was absolutely hooked.
During middle and high school I kept learning more – HTML, then C, and even a bit of assembly – with Mo helping me learn the entire time.
Mo retired from teaching a few years after I graduated, then became a volunteer firefighter and a lifeguard for the city pool in his 70s. He was always an interesting guy.
For me it was all about being in the right place at the right time: having access to technology and having someone take the time to teach an eight year old how to make the computer do something really cool. I’ll always owe my handsome brother Mo for that.
Another Hetzner customer here; can’t wait for their response. The irony is I migrated from Linode to Hetzner after the Akamai acquisition, and it looks like I might have to jump ship again.
I don’t do anything crazy, but now I’m slightly uncomfortable with my personal Jabber and Mastodon instances residing on Hetzner infra.
I don’t know, Switzerland comes to mind. Also, it’s slightly closer to my home country so the latency would be a few ms shorter which would be another win for me.
I’m going to wait for an official response (if any) before I make a decision.
True that, but that’s not my main concern here. A bigger concern is Hetzner refusing to issue a public statement about this, and I simply don’t want to get caught up in some future MITM attack that targets the whole hypervisor my VM is on, or a similar scenario.
At this point I think it’s safe to say that withdrawing a canary would be considered announcing it, and so might itself be prosecuted, even though that is itself an interesting example of forced speech. But if an organisation has taken the view that intercepting and recording all communication with a channel used by thousands is acceptable, I assume that organisation isn’t super interested in human rights.
Having hosted lots of servers at home, it is not free and would not protect against this form of attack. The government just has to wiretap you via your ISP rather than your VM host.
A crappy home server can be almost-free and little extra work, but doing it well involves a bit more dedication than using a VPS.
Except that it ain’t. You have to pay for the hardware, UPS, electricity and a business-grade Internet connection (a residential connection can be fine most of the time, but not always, for various reasons), and then you have to spend your free time on monitoring, upgrades and the overall upkeep, which can be a lot or a little depending on how skilled you are. I mean, I’ve considered it myself many times, and I did it in high school for self-education, but I got a life in the meantime, so it’s not feasible for me right now.
Given you’re on Lobsters, there’s a likelihood approaching 100% that you already have the spare hardware. A UPS is relatively inexpensive.
electricity and a business-grade Internet connection
The incremental cost of the electricity on top of what you already pay is nearly 0%. You don’t need to use a business-grade connection.
then you have to spend your free time on monitoring, upgrades and the overall upkeep,
You would have to do this with a VPS anyway.
but I got a life in the meantime, so it’s not feasible for me right now.
You already host personal internet services on a VPS on Hetzner. With all due respect, using this argument to justify not hosting at home isn’t very convincing.
Maintaining a couple of VMs on a hypervisor and a network managed by an enterprise-grade company is vastly easier than doing everything yourself, especially if you’re starting from scratch.
Also, self-hosting at home isn’t zero latency because I have to leave the house every now and then and I also travel to far away countries for business and pleasure. I’m not against self-hosting at home per se, but as with everything else in life - you gain some, you lose some.
Maintaining a couple of VMs on a hypervisor and a [virtual] network managed
Admittedly this is extra work on top of maintaining a VPS itself but not very much more on a relative basis.
self-hosting at home isn’t zero latency because I have to leave the house every now and then and I also travel to far away countries for business and pleasure.
This would be the case even if you hosted it in a VPS in a data center near you. If you’re optimizing for latency to your home, the optimal case is on your home network.
All valid points, but those address just the technical/operational side of the equation. I also happen to live in a country that’s not really “democratic” by today’s standards, so even if I self-host at home, set up a network and split-horizon DNS, that still doesn’t protect me from a good old search warrant.
Not that I’m doing anything that would get any government interested, but it’s about principles (we’re on Lobsters, after all). I don’t want to just surrender my private info to just about anyone, let alone a government. The more I think of it, the more keeping my stuff in Switzerland sounds like the best course of action. It looks like some fun weekend projects are on the horizon for me. A nice thing to happen in winter with less-than-optimal weather.
I also happen to live in a country that’s not really “democratic” by today’s standards
The more I think of it, the more keeping my stuff in Switzerland sounds like the best course of action.
These sensitivities are more on topic. If you expect to be specifically and forcefully targeted by a government, it may be wiser to host in a country with a high respect for privacy. Since you don’t, the only security advantage of self-hosting is to avoid casual mass surveillance (e.g. like what happened in the article). Hosting at home seems to be more resilient against that.
(not arguing what you should do personally, please use your free time in a way that suits you best)
You’re just as much at risk of lawful interception if you host at home as anywhere else, assuming you engage in activity that’s valid for such an order to be given.
Logistically it’s harder to do and easier to verify against. You can physically isolate your server and check for vulnerabilities. You can’t do the same with a VPS with the same degree of confidence.
Lawful interception targeting a system you host at home is far more likely to begin with agents of your government knocking at your door with a court order.
Maybe you consider this good because you know it’s happening. Maybe they’re going to seize every electronic device you own for forensic imaging while continuing to host your server themselves to capture more data from your users. It’s certainly a different threat model.
That wasn’t the scenario being discussed but even if it were, logistically that is much harder for a government to do (for many reasons) than for them to do it silently with the cooperation of your VPS host.
It recently came to light (https://notes.valdikss.org.ru/jabber.ru-mitm/) that Hetzner may have been compelled to participate in a man-in-the-middle operation on one of its customers.
Should customers expect a response in this matter?
Cheers
With that said, and like others have mentioned, if state-supported coercion of providers is part of your threat model, I think you’ll be better off self-hosting. IMO complying with the laws of one’s jurisdiction is not a malicious act, but a necessity to operate. You may have better luck in Switzerland, but you also may be foregoing some technical features and availability, which may be more important to you/your users than mitigating the off chance your communications are scooped up in a MITM operation.
Dear Sir or Madam,
thank you for your request.
We can assure you that there has been no security incident at our company. There is no risk to data security, neither for you nor for our other customers. We take the fulfillment of our contractual and legal obligations very seriously.
Mit freundlichen Grüßen / Kind regards
Legal Team
I interpret this to mean that they were legally compelled to MITM the customer.
My current job is mostly overcoming byzantine business processes and bizarre standards set up by people long gone. I don’t think a chat bot will be of much help.
I feel like Unix is moving extremely slowly but inexorably towards “everything emits a stream of NL JSON” as a solution to the problem of streaming objects.
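For anyone unfamiliar: newline-delimited JSON (a.k.a. NDJSON or JSON Lines) just means one complete JSON object per line, so each record can be parsed the moment its line arrives instead of buffering the whole stream. A minimal Python sketch (the record shapes here are made up for illustration):

```python
import io
import json

# Hypothetical producer: emit one JSON object per line (NDJSON).
records = [{"pid": 1, "cmd": "init"}, {"pid": 42, "cmd": "sshd"}]
stream = io.StringIO()
for record in records:
    # json.dumps never emits raw newlines, so one line == one object.
    stream.write(json.dumps(record) + "\n")

# Consumer side: each line parses independently, no framing protocol
# needed beyond "split on newline".
stream.seek(0)
decoded = [json.loads(line) for line in stream]
assert decoded == records
```

The same property is what makes NDJSON pipe-friendly: tools like `grep` or `head` operate on lines, so they compose with object streams for free.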
My (extremely limited) experimentation with Parallel/curl leads me to believe that you managed to not build this on an eventually consistent datastore, which is more than I can say about a previous coworker’s attempt to build a distributed mutex system :-/
It wasn’t supposed to be eventually consistent – but when you choose DynamoDB as your backing store and don’t tell it you need strongly consistent reads, funny things happen when you’re locking and unlocking under load.
Thankfully it was caught before it got anywhere near a production system. By which I mean “we rewrote what he was doing to run on a single system so it didn’t need distributed anything.”
The balance is a bit off. Planes are far more profitable than they should be (and more profitable than anything else) because the price paid to transport things is proportional to distance but you don’t need to build tracks or similar for planes. I generally end up with a very profitable airline subsidising my vanity train set.
Feels like they could fix that by having lots of heavy things or lots of low value bulky things (enough that you can’t run enough planes for them without loads of runways). Either format is a good match for trains and barges and a poor match for planes and kinda bad for lorries (trucks).
As I recall, the function for computing transport revenue in OpenTTD (inherited from Transport Tycoon) is that goods have a fixed multiplier per unit (fixed size), and this is multiplied by the distance and divided by the time. I think you’d need to make the time-decay factor different for different goods, so iron could take ages to deliver, for example, but not passengers (time is weird in the game: train journeys often take days or weeks, and boats can take a year). Boats currently do well with things like oil, where there’s just too much to carry by air, but it’s possible to ship all of the more lucrative cargo by air and make far more money than you can with trains.
I actually let OpenTTD take over my evening, spent quite a while playing, and you’re sort of right! Money for goods is a base cost * amount * Manhattan distance between source and destination * a decay factor based on speed and the time sensitivity of the good.
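A rough sketch of that payment rule (the function name, the 255-based factor, and the two per-cargo day thresholds are my own reading of the mechanic, not OpenTTD’s exact integer math):

```python
def cargo_payment(base_rate, amount, dist, transit_days, days1, days2):
    """Illustrative sketch: full payment while delivery beats the cargo's
    first time threshold (days1), decaying afterwards, and decaying twice
    as fast past the second threshold (days2)."""
    factor = 255  # full payment
    if transit_days > days1:
        factor -= transit_days - days1
    if transit_days > days2:
        # past the second threshold the same days are penalized again
        factor -= transit_days - days2
    factor = max(factor, 31)  # very late cargo still pays something
    return base_rate * amount * dist * factor // 255
```

Under a model like this, a time-insensitive cargo (huge `days1`/`days2`, e.g. iron ore) pays the same by slow train as by plane, while passengers with tight thresholds strongly reward speed.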
I think the most confusing part of the game for me is the accelerated timespan. It makes it very hard to work out whether your goods will arrive on time because the units are all weird (it takes days or months for a large train to be loaded).
Hah! My high school’s IT department was unable to disable this for, well, reasons? Instead, the network admin would watch for the “offender” to strike and phone the teacher in the room where it came from, and have them yell at the class about misusing school computer resources and disrupting classes. Also, what amazes me to type now – none of the students had their own logins – it was one shared username/password, and as a result no one ever logged out.
Because of this highly secure setup, ahem some of us took to creating batch scripts that we could load & would sleep until the teacher had their planning period, then wake up and periodically send random messages across the school.
It wasn’t deleted - there’s an ongoing problem over the last few days where the first tweet of a thread doesn’t load on the thread view page. The original text of the linked tweet is this:
I’ve seen a lot of people asking “why does everyone think Twitter is doomed?”
As an SRE and sysadmin with 10+ years of industry experience, I wanted to write up a few scenarios that are real threats to the integrity of the bird site over the coming weeks.
It wasn’t deleted - there’s an ongoing problem over the last few days where the first tweet of a thread doesn’t load on the thread view page.
It’s been a problem over the last few weeks at least. Just refresh the page a few times and you should eventually see the tweet. Rather than the whole site going down at once, I expect these kinds of weird problems will start to appear and degrade Twitter slowly over time. Major props to their former infrastructure engineers/SREs for making the site resilient to the layoffs/firings though!
FWIW, I just tried to get my Twitter archive downloaded and I never received an SMS from the SMS verifier. I switched to verify by email and it went instantly. I also still haven’t received the archive itself. God knows how long that queue is…
I used to help run a fairly decent sized Mesos cluster – I think at our pre-AWS peak we were around 90-130 physical nodes.
It was great! It was the definition of infrastructure that “just ticked along”. So it got neglected, and people forgot about how to properly manage it. It just kept on keeping on with minimal to almost no oversight for many months while we got distracted with “business priorities”, and we all kinda forgot it was a thing.
Then one day one of our aggregator switches flaked out and all of a sudden our nice cluster ended up partitioned … two, or three ways? It’s been years, so the details are fuzzy, but I do remember
some stuff that was running still ran – but if you had dependencies on the other end of the partition there was lots of systems failing health checks & trying to get replacements to spin up
Zookeeper couldn’t establish a quorum and refused to elect a new leader so Mesos master went unavailable, meaning you didn’t get to schedule new jobs
a whole bunch of business critical batch processes wouldn’t start
we all ran around like madmen trying to figure out who knew enough about this cluster to fix it
It was a very painful lesson. As someone on one of these twitter threads posted, “asking ‘why hasn’t Twitter gone down yet?’ is like shooting the pilot and then saying they weren’t needed because the plane hasn’t crashed yet”.
There’s no limit to the size of company that can run on kube if you can run things across multiple clusters. The problem comes if you routinely have clusters get big rather than staying small.
I was thinking about that too, but I’m guessing that CFA has a fraction of the traffic of Target (especially this time of year). Love those sandwiches though…
I work at a shop with about 1k containers being managed by Mesos and it is a breath of fresh air after having been forced to use k8s. There is so much less cognitive overhead to diagnosing operational issues. That said, I think any Mesos ecosystem will be only as good as the tooling written around it. Setting up load balancing, for instance… just as easy to get wrong as right.
I wouldn’t have deleted that key on their behalf. If it was running some kind of critical service it would now be failing, and services might be at risk, services potentially critical to human life. It’s also Unauthorized Access to a Computer and you shouldn’t trust a corporation to not take legal action against you when it has the opportunity.
The blog appears to be run by a British citizen who lives in London, so short of the US govt getting involved, there isn’t likely much Infosys could do, even if they got super duper upset about it.
US laws do not apply outside of the US, despite the US not always acting like that’s the case.
That said, I agree it wasn’t the best action they could have done, but hindsight is 20/20 and all.
US laws do not apply outside of the US, despite the US not always acting like that’s the case.
If you hack into something that’s hosted on US soil, or route traffic across US soil to do it, you can bet US law applies. The only question is whether the country you’re currently in will extradite you.
Or, more simply: laws still apply just fine on the internet and you probably rely on that being true, whether you realize it or not.
I completely agree that US laws apply on US soil, obviously they do. They just don’t apply outside the US at all, unless the other countries want them to apply. It’s the treaties and the UK’s willingness that matter here. It’s hard to say how the UK would handle this particular case, assuming the US govt got upset enough to bother the UK about it.
My comment that you are quoting was more about: The US govt can generally bully their way into whatever they want in most places on the planet, since they currently have the largest military and economy around.
The current UK prime minister is the son-in-law of the founder of Infosys. So I don’t think it would take too much to inflict pain on the author of this blog.
My first reaction would be “surely they wouldn’t do anything so petty?” but then I remember who is running the UK at the moment and now I’m not so sure.
Any type of network or equipment that’s on US soil is, well, on US soil. Any sort of entity you affect that’s on US soil is on US soil. Lots of things are actually on US soil.
“But the person sending the bytes over the wire wasn’t in the US” doesn’t change that. At best it just means now two countries can each carry out a prosecution, and the person hopes the one they’re currently in won’t do that and won’t extradite.
This isn’t some sort of completely new unheard-of never-before-considered untested thing, either. Extradition treaties, and other procedures for handling people who think they’ll evade punishment by being on the other side of a border, is something that literally goes back millennia.
The only part I disagree with is: “At best it just means now two countries can each carry out a prosecution”.
This assumes the action is illegal in both countries. In this case, where the OP deleted the AWS key, that’s possible, but I wouldn’t say it’s certain. That’s for lawyers to fight over, if it ever gets that far.
If what you do passes through wires, networks, servers, routers, anything on US soil, then it was not “outside the US”.
Like I said to the other person: you probably, whether you realize/like it or not, rely on the fact that wherever you reside can in fact enforce its laws in this fashion, regardless of which country you reside in.
If this comes as a surprise to anyone, consider the story of CSE TransTel, a telecom company, and its parent company CSE Global Limited, both based in Singapore. CSE TransTel signed a contract to install communications equipment inside Iran, and paid purchase orders to Iranian companies to support delivery & installation of their equipment. They made their payments out of a Singapore-based bank.
What’s the problem, you ask? They made payments out of an account denominated in US dollars. These payments were processed through the US financial system: as a result, the US government argued that the actions of an entirely foreign company using entirely foreign banks resulted in financial institutions in the US handling payments to Iranian companies, which violates sanctions against Iran. This created a US nexus that made otherwise totally legal actions impermissible under US laws.
CSE TransTel settled with OFAC for twelve million dollars. Why, when they’re based in Singapore? Because if they didn’t settle, they’d end up listed as a specially designated national, and any US company or person would be legally barred from working with them or risk OFAC sanctions of their own.
The US legal system and enforcement regimes take a very broad view of their jurisdiction, and any company – web hosting, infrastructure, payments – with a US connection is legally required to fall in line.
From my other comment: The US govt can generally bully their way into whatever they want in most places on the planet, since they currently have the largest military and economy around.
Here, CSE TransTel had to have known it was a bad idea to sell to Iran, since even their own government is less than pleased with Iran’s nuclear weapons program. They probably thought about it, figured it was worth trying, got caught, and eventually gave in, knowing their own govt wasn’t really on their side either.
I’m not necessarily against the US govt’s bullying tactics, as it helps the world get stuff done sometimes, but it is a power they can (and arguably do) over-use.
You seem to have a very specific political axe to grind, but it’s not applicable here.
To see why, imagine there’s a building near an international border, and someone on the other side of the border throws a rock across and breaks a window in the building. The country the building was in can call it a violation of their laws, even though the person who threw the rock wasn’t on their soil. Whether the person who threw the rock will actually be punished by the country the building was in depends on the existence and details of extradition treaties, but nobody should be surprised if that person gets extradited to face consequences in the country where the building was.
The internet didn’t change anything about this. If you send bits over wires, and some of those wires are in another country, that country’s laws apply. It’s not “bullying” or some sort of new, unique, just-made-up recent thing. Like I already said in another reply, we’re talking about things that political and legal systems have been dealing with for literally thousands of years at this point. Rather: a lot of people hoped and wished and wanted the internet to somehow provide a new, never-before-seen type of extraterritorial place where those political and legal systems couldn’t reach, but their wanting and wishing didn’t and hasn’t made it so. Instead, long-existing frameworks have been adapted as needed, and that’s that.
I’m not currently an international lawyer, and I haven’t read the whole thing, but skimming through it, it seems to say that, in general, if something is against the law in both countries, they will automatically extradite people in either direction. Which seems totally reasonable to me.
Nowhere in there does it say that US laws apply in the UK, as that is straight up ridiculous. An easy example of how ridiculous that is: Guns are generally illegal in the UK and are generally not illegal in the US.
You seem to be misunderstanding what I’m saying perhaps?
Over and over you single out one and only one country and talk about “bullying”.
Nowhere in there does it say that US laws apply in the UK, as that is straight up ridiculous.
The issue here is you are the one who is trying to argue that this is somehow “US law applying in the UK”. Not me.
I’ve explained to you multiple times now that it is an extremely normal and banal and accepted and uncontroversial idea that you can break the law of a country by committing acts that involve or have effect on entities or infrastructure in that country, even if your physical body was not physically within that country’s borders at the time.
But this is not the same as saying a particular country’s laws apply everywhere – thus the example of throwing a rock over the border and causing damage on the other side, which hopefully is a pretty clear and common-sense example of the underlying principle.
Over and over you single out one and only one country and talk about “bullying”.
Would s/bullying/interfering/g be a better word for you? The US is far from the only country engaging in this type of behaviour. Generally it’s larger countries doing it to smaller ones; that the US is the largest just makes them more effective at it.
The issue here is you are the one who is trying to argue that this is somehow “US law applying in the UK”. Not me.
Then I apologize for my part in our miscommunication, though I find it very confusing that you think my position is that US law applies in the UK. Clearly we haven’t been communicating well over the course of this conversation. With such gross miscommunication, it’s probably easier to just stop, especially since the stakes for you and me are at worst some hurt feelings. Have a pleasant and wonderful weekend!
I mean, it’s sketchy, but it does seem to be a key used for development, and which had been inactive for a whole year. Granted, anyone who screws up by issuing AdministratorAccess keys to individual developers might also run some critical service under them, but given the context (running some statistical models over externally-hosted records from several sources) it appears rather unlikely that it was used to run anything critical to human life. The key was, after all, used by Infosys to run things at their end, not by JH.
I don’t wanna defend what the author did, and I’m not sure I would’ve done it that way either, but I do think it was quite safe to do from a technical standpoint. From a legal standpoint, based on my experience working with (and, sadly for my mental sanity, occasionally in) outsourcing companies, I doubt there is anyone at Infosys’ end who can a) read logs and b) is not on the verge of ragequitting, so there’s probably no one to notify the Legal team about it :-).
It seems like the author ended up doing that precisely because they couldn’t contact either JH or Infosys. There’s obviously no way to verify that, but I have been at the receiving end of the problem. Someone went public with several issues in a program that the company I was working for sold. The higher-ups got very butthurt, a nasty press release came out…
…turned out the researcher had tried to contact them through several separate channels, but messages got ignored each time because they weren’t read by anyone who actually understood what was being said to them. One of the official channels for reporting security issues was mostly unused, because people usually went through unofficial channels. IIRC the people who supposedly monitored that channel weren’t even working there anymore. Dude ended up going public because he thought it was likely the only way to actually prevent anyone from getting harmed, despite incurring liability.
AFAIK no, and the whole thing was dropped like a very hot potato the moment people realized there had been as much as one attempt at responsible disclosure. I mean it’s not 1992, companies are legitimately expected to make this no more complicated than a couple of Google searches and an email.
Management is rarely inclined to litigate when there’s a looming PR disaster in it. A lawsuit moves slowly, even when coaxed with money and connections, whereas social media and the press operate on an hourly timetable. Realistically, there’s barely anything to gain from a lawsuit on a matter like this, and potentially a lot to lose in terms of PR and community relations – they only move forward if someone on the legal team really needs to prove themselves. Even the financial incentives are practically zero; the kind of sum they could win is probably one that companies like Infosys regularly write off for government bribes.
That’s my view as well. Infosys would be very stupid to raise a legal stink about this, as it would shine a light on their alleged incompetence at deploying code and responding to disclosures.
You’re right, but the flip side is reporting it properly, having them not do anything about it, and then a bad actor finds and uses it. Not much to recommend one over the other imo.
From what I’ve seen, you may run into careless business associates / sub-associates, but covered entities are often very wary of the risk around HIPAA violations. It sounded like the author attempted to report to Infosys directly so I’m not surprised he hit a wall.
So again, if you find PHI: typing "Johns Hopkins Hospital" "general counsel" into your favorite search engine took me straight to their legal department, including direct contacts for HIPAA lawyers. Even without specialist lawyers, just get in touch with someone in their legal/leadership chain. The magic happens when you say “I’d like to report a HIPAA violation” to a human, preferably a human on a legal team.
And if you truly can’t get anyone to act, HHS has a process to report complaints directly to them. It’ll likely take longer for them to act, but they have broad leeway to sanction bad actors and will get the attention of the offender.
All access to remote computers is unauthorized. Maybe we should stop allowing corporations to hurt themselves and others, even if it means violating their privacy.
As someone who works in the healthcare space – here’s your daily HIPAA primer.
In this case, HIPAA applies to Johns Hopkins as the covered entity. They’ll have a business associate agreement (BAA) in place with Infosys that allows Infosys to receive protected health information (PHI) and do stuff with it that the covered entity requests - ML stuff, in this case. If you ever stumble over PHI in the wild, while the business associate can be liable on their own, it’s often best to start with the covered entity. Look for their compliance/privacy/legal teams first – good infosec teams know what to do, but a privacy officer always does, even if they are a bad privacy officer.
When the covered entity hears about this, they’ll freak out, lock things down, and investigate to decide if they need to make a notification of a breach. There are flowcharts that help[0], but step zero - which is often not on them - is always to ask “did we actually disclose anything?” In this case, if Infosys has IAM/S3 access logs that can show nobody saw those records, they were never disclosed – the tree fell but nobody was in the woods to hear it.
If, however, Infosys can’t prove that? You have to assume a breach happened – this means sending a notification to every individual in those files & HHS no later than annually. If there were >500 people in the file? You additionally have to send a press release to local media & HHS within 60 days so you can appear on the wall of shame. Infosys & JHU get to fight out who pays any penalty, but assuming a BAA is in place it should roll to Infosys.
[0]: this is probably not a great flowchart because HHS dropped the “risk of harm” standard a while back, but it’s one of the ones I can find that’s public so it’s an example at least.
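The notification logic described above can be sketched roughly as follows (illustrative only, not legal advice; the thresholds and recipients are the ones mentioned in the comment, and the function name is mine):

```python
def required_notifications(n_affected: int) -> list[str]:
    """Rough sketch of HIPAA breach-notification recipients once a breach
    of unsecured PHI is presumed (i.e. non-disclosure can't be proven)."""
    # Affected individuals must always be notified.
    notices = ["each affected individual (within 60 days of discovery)"]
    if n_affected > 500:
        # Large breaches also require a media notice and prompt HHS
        # notification -- the "wall of shame" case.
        notices += ["local media (within 60 days)", "HHS (within 60 days)"]
    else:
        # Smaller breaches can be reported to HHS in an annual log.
        notices.append("HHS (annual log)")
    return notices
```

The key branch is the 500-individual threshold: below it, HHS notification can be batched annually; above it, both HHS and the media must be told within 60 days.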
This is useful for people whose freedom of expression is threatened by repressive authorities [such as] activists for human rights in oppressive regimes. [Shufflecake] can manage multiple nested volumes per device, so to make deniability of the existence of these partitions really plausible.
I’ve heard it argued that this is actually exceptionally dangerous. If you’re under the thumb of a regime that is willing to use torture against dissidents, and a dissident is found to use Shufflecake, how do you get the beatings to stop?
Any OpenSSL 3.0 application that verifies X.509 certificates received from untrusted sources should be considered vulnerable. This includes TLS clients, and TLS servers that are configured to use TLS client authentication.
I won’t call this a nothingburger but there are a few things that make it look a bit less spicy for people running servers:
3.x only
Most TLS servers aren’t doing client authentication
From the advisory, if you are doing client TLS, the vulnerability is post-chain validation: so you have to get a public CA to sign a bad certificate (and at least you’ll see them in CT logs, wonder if anyone has searched!) or you have to skip cert validation
Even then, you have to get past mitigations to turn a DOS into an RCE
It’s still obviously bad, but it doesn’t seem to be “internet crippling”
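As a triage aid: the overflows were introduced in 3.0.0 and fixed in 3.0.7, so checking whichever OpenSSL your runtime links against rules you in or out quickly. A sketch that parses the human-readable version string (e.g. Python’s `ssl.OPENSSL_VERSION`):

```python
import re

def is_affected(version_string):
    """Heuristic check of an OpenSSL version banner against the 3.0.0-3.0.6
    affected range (fixed in 3.0.7). 1.x branches are not affected.
    Returns None if the string can't be parsed (check manually)."""
    m = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)", version_string)
    if not m:
        return None
    major, minor, patch = map(int, m.groups())
    return (major, minor) == (3, 0) and patch < 7
```

Usage would be something like `import ssl; is_affected(ssl.OPENSSL_VERSION)` – keeping in mind that other binaries on the box may link a different OpenSSL than your Python does.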
Well said! While there was a lot of anxiety around this issue since it was marked CRITICAL (à la Heartbleed), the OpenSSL team did post a rationale on their blog (see the next response by @freddyb) for the severity downgrade, which aligns with your explanation as well.
I don’t understand why the OpenSSL maintainers didn’t downgrade the pre-announcement. People cleared their diaries for this; a post-pre-announcement saying “whoops it’s only a high, you can stop preparing for armageddon” might have saved thousands of hours, even if it only came on Monday.
Aren’t the OpenSSL maintainers basically one guy fulltime and some part-timers? Why are they expected to perform at the level of a fully-funded security response team? If they can save thousands of hours, shouldn’t they be funded accordingly?
I mean, it’s absolutely true in general that people making money out of FOSS should fund its development and maintenance, and to the extent this isn’t already true of OpenSSL, of course I think it should be fixed. But I think it’s wrong to couch everything in terms of money. Voluntary roles exist in lots of communities, and their voluntary nature doesn’t negate the responsibility that comes with them. If it’s wrong for big tech to profit by extracting free labour out of such social contracts—and I do think it is—it doesn’t seem much better to just assimilate all socially mediated labour into capitalism and have done.
But I also just think that if one makes a mistake that wastes lots of people’s time, it’s a nice gesture to try to give them that time back when the mistake is realised.
I think it’s wrong to couch everything in terms of money.
I wonder what you would say if your employer responded like this when you asked why you didn’t get your paycheck?
Voluntary roles exist in lots of communities, and their voluntary nature doesn’t negate the responsibility that comes with them.
Uh, what responsibility is that exactly? The OpenSSL people have gone above and beyond to handle this security issue and you’re complaining that they’re not fulfilling their ‘voluntary responsibility’ because they didn’t save a few hours of your time? Do you realize how entitled and churlish you sound? Do me a favour, don’t talk about Open Source until you develop a sense of gratitude for the immensity of the benefits you’re getting from the maintainers every day. And I’m not even talking about paying them money, since that seems too much to ask! Just some basic human respect and gratitude.
Uh, what responsibility is that exactly? The OpenSSL people have gone above and beyond to handle this security issue and you’re complaining that they’re not fulfilling their ‘voluntary responsibility’ because they didn’t save a few hours of your time?
It’s not my time personally that’s at issue here. I respect that the OpenSSL people have handled this security issue, and that they’ve stepped up to maintain an important thing. I do think that position comes with a responsibility not to raise false alarms if possible.
The OpenSSL maintainers are, like us, members of a community in which we all contribute what we can. I don’t think that gratitude and criticism are mutually exclusive, but I am sorry if I seemed too complainy, and I appreciate the people in question were probably operating under no small amount of pressure.
I do think that position comes with a responsibility not to raise false alarms if possible.
People are very quick to assign new responsibilities to people who give them free stuff. Time and again I find this pretty incredible. Like, here are some people making a public good on their own dime. And users feel so incredibly comfortable jumping in with critiques. They didn’t do this, they should have done it like that. Pretty easy to sit back, do nothing, and complain about others’ hard work.
You can certainly rent bare metal. This is most of OVH’s and Hetzner’s business, and both of them do it really cheaply. Though note Hetzner only does cloud servers in the US – their bare metal stuff is Europe-only.
I find OVH’s stuff to be pretty janky, so I’d hesitate to do much at scale there. Equinix Metal is a pretty expensive but probably quality option (I haven’t tried it).
There still exist businesses everywhere on the spectrum, from an empty room to put your servers in to “here is a server with an OS on it.”
And even with your VM abstraction, someone has to worry about the disks and drivers and stuff. The decision is if you want to pay someone to do that for you, or do it yourself.
For the last couple of years I rented a single bare metal server from OVH, installed FreeBSD on it, and just used it for hobby stuff. I used ZFS for the filesystem and volume management and bhyve for VMs. I ran several VMs on it – various Linux distributions and OpenBSD. It worked great, with very few issues, and the server cost me about $60/month.
But I eventually gave it up and moved all that stuff to a combination of Hetzner cloud and Vultr because I didn’t want to deal with the maintenance.
In my experience it is cheaper than renting from AWS once you need more than a certain threshold. That threshold has stuck, rather consistently, at “about half a rack” from where I’ve observed it over the past 10 years or so.
So isn’t the problem that, for whatever reason, the market doesn’t have consistent pricing for physical servers? It might be LESS than AWS, but I think it’s also sometimes “call us and we’ll negotiate a price”?
Yeah basically a lot of stuff in the datacenter and datacenter server/networking equipment space is “call us” for pricing.
The next step past Equinix metal is to buy colocation space and put your own servers in it.
And the discounts get stupid steep as your spend gets larger. Like I’ve seen 60-70% off list price for Juniper networking equipment. But this is when you’re spending millions of dollars a year.
When you’re buying a rack at a time from HPE or Dell at $250K - $500K/rack (numbers from when I was doing this 5+ years ago) you can get them to knock off 20-40% or something.
It can be pretty annoying because you have to go through this whole negotiation and you’ll greatly overpay unless you have experience or know people with experience to know what an actual fair price is.
At enough scale you have a whole procurement team (I’ve hired and built one of these teams before) whose whole job is to negotiate with your vendors to get the best prices over the long term.
But if you’re doing much smaller scale, you can often get pretty good deals on one off servers from ProVantage or TigerDirect, but the prices jump around a LOT. It’s kind of like buying a Thinkpad direct from Lenovo where they are constantly having sales, but if you don’t hit the right sale, you’ll greatly overpay.
Overall price transparency is not there.
But this whole enterprise discount thing also exists with all the big cloud providers. Though you get into that at the $10s of millions per year in spend. With AWS you can negotiate a discount across almost all their products, and deeper discounts on some products. I’ve seen up to 30% off OnDemand EC2 instances. Other things like EBS they really won’t discount at all. I think they operate EBS basically at cost. And to get discounts on S3 you have to be storing many many PB.
But this whole enterprise discount thing also exists with all the big cloud providers. Though you get into that at the $10s of millions per year in spend.
AWS definitely has an interesting model that I’ve observed from both sides. In the small, they seem to like to funnel you into their EDP program that gives you a flat percentage off in exchange for an annual spend commitment. IME they like larger multi-year commitments as well, so you’ll get a better discount if you spend $6m/3 years than if you do three individual EDPs for $2m. But even then, they’ll start talking about enterprise discounts when you are willing to commit around $500k of spend, just don’t expect a big percentage ;)
When I once worked for a company with a very large AWS cloud spend - think “enough to buy time during Andy Jassy’s big keynote” - EDPs stopped being flat and became much more customized. I remember deep discounts to bandwidth, which makes sense because that’s so high margin for them.
It can be pretty annoying because you have to go through this whole negotiation and you’ll greatly overpay unless you have experience or know people with experience to know what an actual fair price is.
This is a key bit that people don’t realize. When I worked for a large ISP and was helping spec a brand-new deployment to run OpenStack/Kubernetes, I negotiated list price from $$MM down to $MM. Mostly by putting out requests for bids to the various entities that sell the gear (knowing they won’t all provide the same exact specs/CPUs/hard drives), then comparing and contrasting, taking the cheapest four, and making them compete for the business.
But it’s a lot of time and effort up front, and a ton of money has to be handed over upfront. With the cloud that money is spent over time rather than front-loaded.
I think the threshold has been pretty consistent if you think of it in terms of what percentage of a rack they need to sell… people who rent servers out by the “unit” drop below the AWS prices once you occupy around half a rack. And yes, I’ve had to call them to get that pricing.
It’s a little annoying to have to call.
And furthermore, things can be cheaper at different points in a hardware cycle, so it’s a moving target.
I think some of it is down to people who peddle VMs being able to charge per “compute unit” but people who peddle servers (or fractions of servers) not being able to go quite that granular.
If you rent in “server” units, you need to be prepared to constantly renegotiate.
There are certainly businesses which are built on the idea that they are value-added datacenters, where the value is typically hardware leasing and maintenance, fast replacement of same, networking, and perhaps some managed services (NTP, DHCP, DNS, VLANs, firewalls, load balancers, proxies…)
“Single VM on a physical host” is a thing I’ve seen (for similar reasons you mention: a common/standard abstraction), not sure how often it’s used at this sort of scale though.
The thing I don’t like is thinking about disk hardware and drivers and that sort of thing. I kind of like the VM abstraction, but without VMs maybe :)
I think you trade one thing for another. You have to think about other things. If you want you could also run your own VMs of course, but honestly, you just have an OS there and that’s it and if you want to not think about storage you can just run MinIO or SeaweedFS, etc. on some big storage server and add as you go. And if you rent a dedicated server and your server really happens to have disk failure (which is usually in a raid anyways) you just have it replaced and use your failover machine, like you’d use your failover if your compute instance starts to have issues.
It’s not like AWS and others don’t have errors; it’s just that you see them differently. Sometimes Amazon notices before you do and just starts that VM up somewhere else (that can be automated if you run your own VMs too – it’s widely implemented, “old” technology). I see all of this as server errors, whether on a rented physical machine or a virtual machine. In both situations I’d open some form of ticket and have the failover handle it.
Cloud instances aren’t magically immune, and that stuff often breaks the abstraction as well. The provider might detect it, but there’s also a certain percentage of VMs/instances that become unavailable – or worse, half-unavailable, clearly still doing something despite having been replaced. In my opinion that is a lot more annoying than knowing about a hardware problem: with hardware issues you know how to react, while with instances doing strange things it can be very hard to verify anything at that layer of abstraction, and eventually you will be passed along through AWS support. If you’re lucky it’s enough to just replace the instance, but that’s pretty much an option with hardware as well, and dedicated hosting providers have largely automated all of these things.
There are hardware/bare-metal clouds, and to be fair they’re close, but they still lag behind. Here I think in terms of infrastructure as code; that’s slowly coming to physical machines too under the “bare metal cloud” banner – it just wasn’t the focus for so long. I really hope hosters keep pushing it and customers use and demand it. AWS datacenters becoming pretty much equivalent to “the internet” is scary.
But if you use compute instances you created yourself (rather than something much higher level, in the realm of Heroku, Fly.io, etc.), it doesn’t make a huge difference. Same problem, different level of abstraction. It’s probably slightly more noticeable on physical machines, because the automated VM failover mentioned above only works in specific cases. In either case you will need someone who knows how the server works, be it virtual or physical.
What an excellent resource. I’ve been using Unbound for advertisement blocking for a while but have not been happy with the blocklist project lists I’ve seen recommended – I’d found them to be way overbroad.
One thing I worried about was this warning in unbound.conf(5)
log-replies: … Note that it takes time to print these lines which makes the server (significantly) slower
While I suspect the increased reporting will only help me, I’m worried about this warning. Anyone have experience here?
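For anyone weighing the same trade-off, the relevant unbound.conf(5) knobs look something like this (the logfile path is just an example):

```
server:
    verbosity: 1
    logfile: "/var/log/unbound.log"
    # each of these prints a line per query/answer – that printing is
    # what the man page warns can slow the server down
    log-queries: yes
    log-replies: yes
```

For a home resolver handling a few queries per second the overhead should be negligible; the warning is aimed at busy resolvers where synchronous log writes become a bottleneck.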
I’m absolutely not convinced by the Google Earth argument:
For example, the Google Earth app is written in a non-memory-safe language but it only takes input from the user and from a trusted first-party server, which reduces the security risk.
As long as that server isn’t compromised, or a rogue DNS server doesn’t direct clients to a host controlled by attackers, or… PDF viewers also nominally only “take input from the user”, but I will run out of fingers if I try to count arbitrary code execution vulnerabilities in PDF clients.
Anything that can take arbitrary input must never trust that input blindly.
Note that the author worked in Google Earth team, and is speaking from experience.
I agree you should not trust input blindly, but it is a spectrum, isn’t it? I hope we can agree that memory safety is less important in Google Earth compared to Google Chrome. That’s all the author is saying.
I think by “take input from the user” he meant GUI events like click/drag/keystroke, which we can agree are pretty safe. (Mostly. I just remembered that “hey, type this short gibberish string into your terminal” meme that triggers a fork bomb…)
“Open this random PDF file” is technically a user event, but that event comes with a big payload of untrusted 3rd party data, so yeah, vulns galore.
I feel like Earth is a bad example, then. Google Earth lets you read/write your GIS data as a KMZ file. KMZ files are zipped(!) XML(!!) documents – that’s quite a bit of surface area for a malicious payload to work with.
😬 At least there probably aren’t too many people using that feature. Unless they start getting urgent emails from “the IT department” telling them “you need to update your GIS data in Google Earth right away!!! Download this .kmz file…”
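To make that surface area concrete, here’s a minimal sketch of what even a cautious KMZ loader has to do. The 10 MB cap is an arbitrary illustration; a real parser also has to worry about path traversal in entry names, XML entity tricks, and malformed geometry on top of this:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# Arbitrary illustrative cap to guard against zip bombs.
MAX_UNCOMPRESSED = 10 * 1024 * 1024

def load_kmz(data: bytes) -> ET.Element:
    """Parse a KMZ (zipped KML) payload into an XML tree, defensively."""
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        # Check declared uncompressed sizes *before* extracting anything.
        total = sum(info.file_size for info in zf.infolist())
        if total > MAX_UNCOMPRESSED:
            raise ValueError("archive expands beyond the allowed size")
        # KMZ convention: the document of interest is a .kml entry.
        name = next(n for n in zf.namelist() if n.lower().endswith(".kml"))
        return ET.fromstring(zf.read(name))
```

Every line of that is attacker-influenced input: the zip directory, the entry names, the sizes, and the XML itself.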
But some of the people who do use that are very juicy targets who make heavy use of features like that, like people mapping human rights abuses or wars. It’s not like this software is just a toy that doesn’t see serious use.
Earth passed some pretty intense security reviews before it was launched and before every major feature release, and those security teams know their stuff.
Likely a big help was how well sandboxing works in WebAssembly, Android, and iOS.
I would take a step back and ask why you are making the code open source in the first place. If you want just to share your work with others, then pick whatever you want. As long as you own all of the intellectual property related to the code, you can pick whatever license you desire.
However, if you want your project to be adopted within a corporate environment, you can’t expect things outside their standard set to get a lot of traction. That set was picked by lawyers to reduce the risk of the company having future issues with clean intellectual property rights to their product. Even if it was adopted by a company before they were big enough to have lawyers who cared, one day they will grow, get acquired, or IPO, and there will be a team of people running license checks for stuff outside of the approved set. That is especially true for relatively unknown licenses like the one in this case. At that point, they’re likely going to stop engineering work to replace the affected components, to, once again, reduce risk.
Here is a hypothetical. A company adopts this component with the license as it was; they get acquired by a large, multinational public company. There is not a lawyer that would read this license and agree to run down every aspect of this license and ensure they’re complying with it. Some are easy, but many are vague enough to be a pain. So instead, they tell engineering to yeet it from the product.
Given all of that, to answer your prompt: you don’t. Companies are not taking a risk on small open-source components. If you want to get the Hippocratic License added to the set of approved licenses, it is a Sisyphean effort. The only way I see it happening is if your project gets to the level of something like Kubernetes or Linux, which (in a catch-22) often doesn’t happen without corporate support.
why open-source it? clearly, to provide some benefit. it’s a useful library.
i personally don’t care a great deal about adoption; what i do care about is “good use”. i personally don’t want to support the military or fossil fuel companies, say. just like i wouldn’t work at those companies.
i’m curious to gauge peoples views about expressing such sentiments via licenses. it seems like the hippocratic license - https://firstdonoharm.dev/ - is a very clear approach to do this; yet it seems to be met with quite some anxiety by people who think tech should somehow be “neutral”. it’s long been shown that neutrality only rewards the privileged; to make social change one needs to step out, somehow.
so my question is, as a tech community at large, do we just completely give up on licenses? (aside from the standard few?) or is there some room to innovate; some way to create social change for ourselves, our users, and the broader community? and if so, what is that mechanism?
I’ll ask it a different way. In an ideal world, would a company change its policies to adopt your open source software? If you want to change corporate governance, I don’t think you do it with progressive open source licenses. No engineering leader is going to go to a board and ask them to change broad policy so they can use an open source library.
A plurality of US states – Delaware (the important one for corporate governance!) included – allow corporations to incorporate or reincorporate as a public benefit corporation. It’s conceivable that a corporation could be subject to enough pressure by its employees and shareholders that it would reincorporate as a B corporation.
But while I think a niche could exist in B corporations for software licensed under the Hippocratic license & similar, it’s important to not mix cause & effect: your Hippocratic licensed software may be eligible for selection by a company because they chose to become a B corp, but it strikes me as exceptionally unlikely that a company will ever become a B corp to use your Hippocratic licensed software.
i.e. we’re just talking about a simple license here, where the terms are of course only enforceable through some (hypothetical) lawsuit; i.e. the license really just expresses some notion of personal preferences, enforceable only if i feel like suing random companies that use it.
maybe one thing i could point out is the difference between a code of conduct and a license. we all feel (somewhat?) comfortable with a code of conduct expressing behaviour wanted in our spaces; why not licenses for those same desires?
only if i feel like suing random companies that use it.
maybe one thing i could point out is the difference between a code of conduct and a license
Corporate governance seems like the thing being discussed here. You hope to impact governance through clauses in a license. However, governance is not limited to the time when you decide to sue some companies. Companies are bound to various agreements which require them to make some attempt to mitigate risk so that they can achieve the outcomes that the owners desire. The result is that they pick and choose which risks they want to take on by limiting the number of licenses they support and the scope of these licenses.
Regular corporations (and, I suspect, B-corps too) are unlikely to want to increase the number of risks they are dealing with by using software with the Hippocratic license. We already know that many companies rule out GPL and derivative licenses entirely just to limit their risk. Some will pick and choose, but only when they have the resources to review it and fit it into their business.
Above I used terms like “various agreements” because I don’t have the time to write in the level of the detail I’d like to. Agreements come in many forms and we care most about the explicit ones which are written like contracts. Some agreements are more implicit and while still important, I’m ignoring these to simplify. Agreements include but aren’t limited to:
Founding documents between the founders, or between the government and the founders.
Partnership agreements with others selling/integrating your product, or providing code for your product.
Agreements with organizations that represent employees.
Customer contracts.
Funding agreements with VCs, or banks.
For your license to succeed, you need to navigate all of these agreements. A license like MIT is relatively compatible because it’s limited in scope.
i mean, suppose you are a regular developer living your life, and you feel like sharing code. clearly, i don’t want to engage at the level you mention with anyone who uses the code.
licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?
licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?
There is no way to achieve what you desire to any great extent with your approach. The trade-offs are for you to decide.
I would posit that most people don’t want to have relationships based on the requirements of the license you put forth. If you want to define your relationships and engagement through that license for your code, or companies you run, then that’s 100% fine. Many types of small communities can be sustained with hard work.
When you go in that direction don’t expect other people to reciprocate in various ways that they can in the open source world through code use, testing, bug reporting, doc writing, etc. If you use MIT then you’ll open the door to a lot more collaboration and usage. For many people who have a livelihood depending on open source, this is the only approach. When your livelihood doesn’t depend on open source it’s easier to pick and choose licenses, but even then the decision can limit who will engage with you.
You’ve forgotten one more potential situation: you want other open source projects and people to be able to use it, but don’t care at all about corporate usage, or even want to discourage it.
In such situations, licenses like the unlicense, AGPL, Hippocratic license, etc can be useful.
rustfmt also provides many options to customize the style, but the style guide defines the defaults, and most projects use those defaults.
I’d be curious if the team thinks this gives a positive or negative experience to users.
One of the few unquestionably good experiences I’ve had working with a large Go project is that gofmt precludes all arguments about style. rustfmt has knobs for days, and I wonder how much turning they get.
The bigger difference between rustfmt and gofmt is not the knobs, but the fact that gofmt generally leaves newline placement to the programmer, while rustfmt completely disregards the existing layout of the code.
OTOH to me rustfmt’s “canonicalization” is a destructive force. There are many issues with it. One example: I strive to have semantically meaningful split of code into happy paths and error handling:
let result = try.to().do.something()
.or_else(report_failure).thx().bye?;
To me this gives clarity which code is part of the core algorithm doing the work, and which part is the necessary noise handling failures.
But with rustfmt this is pointlessly blended together into spaghetti with a weirdo layout like this:
let result =
try.to().do.something().or_else(report_failure).thx().bye?;
or turned into a vertical sprawl without any logical grouping:
let result = try
.to()
.do
.something()
.or_else(report_failure)
.thx()
.bye?;
rustfmt doesn’t understand the code, only tokens. So it destroys human input in the name of primitive heuristics devoid of common sense.
I want to emphasise that rustfmt has a major design flaw, and it’s not merely “I don’t like tabs/spaces” nonsense. The bikeshed discussions about formatting have burned people so badly, that it has become a taboo to speak about code formatting at all, and we can’t even criticise formatting tools when they do a poor job.
As a user, I like that rustfmt has some knobs, it lets me adjust the coding style to use within my own project, so it’s internally consistent.
What matters to me, as a developer, is that the coding style within a project is consistent. I don’t care about different projects using different styles, as long as those are also internally consistent, and have a way to reformat to that style, it’s all good. I care little about having One True Style. I care about having a consistent style within a project. Rustfmt lets me have the consistent internal style, without forcing the exact same style on everyone.
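For instance, a rustfmt.toml checked into the repo root pins the knobs so every contributor’s editor produces the same output (the values here are just illustrative):

```toml
# project-local rustfmt.toml – picked up automatically by cargo fmt
max_width = 100    # line length before wrapping
hard_tabs = false
chain_width = 60   # width budget before method chains break vertically
edition = "2021"
```

That gets the gofmt-style benefit (no style arguments within the project) without mandating one global style across all projects.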
It’s worth saying that Rust is targeting C and C++ programmers more than Go is, and at least in my experience it’s those programmers who have the most idiosyncratic and strongest opinions of what constitutes “good style.” “My way or the highway” may not work well with many of them.
I always like to give a concrete example that’s happened to me and does not involve finance :)
National drug codes (NDCs) identify finished drug products in the USA. 50580-600-02, as an example, uniquely identifies a ten-tablet blister pack of paracetamol manufactured by Johnson and Johnson.
Today. But because of the way the format is defined, there is a limited number of valid identifiers. As a result, the FDA will periodically expire old NDCs and reassign them. So five years ago, that NDC might’ve referred to another drug entirely. Or the record you stored yesterday might’ve pointed to the wrong drug, because an NDC reassignment happened that the pharmacy processed before you could.
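As an aside on the format itself: hyphenated NDCs come in three segment layouts (4-4-2, 5-3-2, 5-4-1), and systems commonly normalize them to an 11-digit 5-4-2 form by zero-padding the short segment. A sketch:

```python
def normalize_ndc(ndc: str) -> str:
    """Zero-pad a hyphenated NDC to the 11-digit 5-4-2 form."""
    labeler, product, package = ndc.split("-")
    layout = (len(labeler), len(product), len(package))
    if layout == (4, 4, 2):
        labeler = labeler.zfill(5)   # 4-4-2 -> pad the labeler segment
    elif layout == (5, 3, 2):
        product = product.zfill(4)   # 5-3-2 -> pad the product segment
    elif layout == (5, 4, 1):
        package = package.zfill(2)   # 5-4-1 -> pad the package segment
    else:
        raise ValueError(f"not a recognized NDC layout: {ndc!r}")
    return f"{labeler}-{product}-{package}"
```

So `50580-600-02` normalizes to `50580-0600-02` – and the original layout is lost in the process, which is one more way two systems can quietly disagree about “the same” identifier, even before the FDA reuses it.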
In general, that’s pretty much how it works. We have a long-lived root in a vault that signs multiple leaf certs. The current leaf is in an environment that has some automation to sign client certificates. I think we run an OCSP responder to handle revocations – not positive – but we terminate most of those connections at a single box, so it’s not a huge deal either way.
The biggest downside is teaching people on the other end how to use it. The tooling/dev experience around mTLS/client certificates is not great. We offer both flavors – mTLS & API keys – and are finding API keys to be the preferred solution.
There are so many slippery slopes in this comment section, I don’t know how I didn’t slip myself and crack my skull on the floor.
“But if we outlaw nazism, where will it stop?” At nazism. That’s where it stops. It’s what happened in Germany and Brazil, and probably several other places I’m too lazy to research right now.
“It’s a lynch mob, it’s the trolls now, who’s next?” No one. Certainly not you, I.T. person more bland than mashed potatoes (this might be more a reflection of myself than the community but whatever). All the free speech absolutists get all worried anytime something like this happens, and yet nothing apocalyptic follows. Trump was banned from Twitter, but Bolsonaro and 1000 other lying politicians are still there.
So, maybe it’s a good thing that companies sometimes fold to public pressure, and maybe it doesn’t spell the end of free speech and democracy. In fact, maybe it’s not happening often enough.
As a leftist I find the slippery slope argument hilarious. “Oh no, you’re saying they’ll come for my speech next? I’ll be the victim of ‘mob violence?’” That’s been the state of affairs for centuries.
I think it’s also informative that the only people actually moved by the argument are believers in some naive rules based liberal world. Reactionaries and fascists gleefully rejoice in the slippery slope because they know when the chips are down they’ll be the ones pushing people down it.
Certainly not you, I.T. person more bland than mashed potatoes (this might be more a reflection of myself than the community but whatever)
❤️
To thine own self be true, Polonius. To thine own self be true.
A bit of humanity goes a long way, but sometimes that is hard to maintain focus on when we’re arguing about whether bomb threats are a valid form of protected free speech, and what responsibilities companies have for responding to their services being used for the same.
Mohammad Shams.
Mo came to the US from Iran in the 1950s for college and ended up teaching high school math and physics in the small Kentucky town where I grew up. And at the time I was growing up, Kentucky – of all places! – was leading the nation in bringing computers to K12 schools and bootstrapping technology programs. By the time I was in fourth grade every school in the state was connected to the Internet and every student had email, which was a first for the country.
I became super interested in computers in second or third grade, and my teacher told Mr. Shams about me; he was one of the handful of people in the school district who actually understood them. Pretty soon I was staying after school while he taught me QBasic. Once I learned how to make the computer play music and draw a star field, I was absolutely hooked.
During middle and high school I kept learning more – HTML, then C, and even a bit of assembly – with Mo helping me learn the entire time.
Mo retired from teaching a few years after I graduated, and in his 70s he became a volunteer firefighter & a lifeguard at the city pool. He was always an interesting guy.
For me it was all about being in the right place at the right time: having access to technology and having someone take the time to teach an eight year old how to make the computer do something really cool. I’ll always owe my handsome brother Mo for that.
I have fond memories of doing rudimentary graphics with qbasic, absolutely fun stuff!
This is an excellent analysis.
As a Hetzner customer, I’m looking forward to their response. Whether this was a “lawful” MITM or not, their response will be of great interest.
Another Hetzner customer here, can’t wait for their response. The irony is I migrated from Linode to Hetzner after Akamai acquisition, and it looks like I might have to jump ship again.
I don’t do anything crazy, but now I’m slightly uncomfortable with my personal Jabber and Mastodon instances residing on Hetzner infra.
Where would you go? Most western countries have the same laws that would require a provider to cooperate.
I don’t know, Switzerland comes to mind. Also, it’s slightly closer to my home country so the latency would be a few ms shorter which would be another win for me.
I’m going to wait for an official response (if any) before I make a decision.
If the German government is sending a national security letter over your Mastodon instance’s VPS, I think you might have bigger problems.
True, but that’s not my main concern here. The bigger concern is Hetzner refusing to issue a public statement about this; I simply don’t want to get caught up in some future MITM attack that targets the whole hypervisor my VM is on, or a similar scenario.
They may not legally be able to.
Would a canary work from now on, though?
At this point I think it’s fair to say that withdrawing a canary would be treated as announcing the order, and so might itself be prosecuted, even though compelling the canary to stay up is itself a striking example of forced speech. But if an organisation has taken the view that intercepting and recording all communication with a channel used by thousands is acceptable, I assume that organisation isn’t super interested in human rights.
If this is a “national security” thing, Hetzner are likely under a gag order.
Latency is zero if you host at home. It’s also free and much more secure.
Having hosted lots of servers at home, it is not free and would not protect against this form of attack. The government just has to wiretap you via your ISP rather than your VM host.
A crappy home server can be almost-free and little extra work, but doing it well involves a bit more dedication than using a VPS.
Except that it ain’t. You have to pay for the hardware, a UPS, electricity, and a business-grade Internet connection (a residential connection can be fine most of the time, but not always, for various reasons), and then you have to spend your free time on monitoring, upgrades, and the overall upkeep, which can be a lot or a little depending on how skilled you are. I mean, I’ve considered it myself many times, and I did it in high school for self-education, but I got a life in the meantime, so it’s not feasible for me right now.
Given you’re on Lobsters, there’s a likelihood approaching 100% that you already have the spare hardware. UPS is relatively inexpensive.
The incremental cost of the electricity on top of what you already pay is nearly zero. You don’t need to use a business-grade connection.
You would have to do this with a VPS anyway.
You already host personal internet services on a VPS on Hetzner. With all due respect, using this argument to justify not hosting at home isn’t very convincing.
Maintaining a couple of VMs on a hypervisor and a network managed by an enterprise-grade company is vastly easier than doing everything yourself, especially if you’re starting from scratch.
Also, self-hosting at home isn’t zero latency, because I have to leave the house every now and then, and I also travel to faraway countries for business and pleasure. I’m not against self-hosting at home per se, but as with everything else in life: you gain some, you lose some.
Admittedly this is extra work on top of maintaining a VPS itself but not very much more on a relative basis.
This would be the case even if you hosted it in a VPS in a data center near you. If you’re optimizing for latency to your home, the optimal case is on your home network.
All valid points, but they address just the technical/operational side of the equation. I also happen to live in a country that’s not really “democratic” by today’s standards, so even if I self-host at home, set up a network and split-horizon DNS, that still doesn’t protect me from a good old search warrant.
Not that I’m doing anything that would get any government interested, but it’s about principles (we’re on Lobsters, after all). I don’t want to surrender my private info to just about anyone, let alone a government. The more I think about it, the more keeping my stuff in Switzerland sounds like the best course of action. It looks like some fun weekend projects are on the horizon for me; a nice thing to have in winter, with less-than-optimal weather.
These sensitivities are more on topic. If you expect to be specifically and forcefully targeted by a government it may be wiser to host in a country with a high respect for privacy. Since you don’t, then the only security advantage of self-hosting is to avoid casual mass surveillance (e.g. like what happened in the article). Hosting at home seems to be more resilient against that.
(not arguing what you should do personally, please use your free time in a way that suits you best)
You’re just as much at risk of lawful interception if you host at home as anywhere else, assuming you engage in activity that’s valid for such an order to be given.
Logistically it’s harder to do and easier to verify against. You can physically isolate your server and check for vulnerabilities. You can’t do the same with a VPS with the same degree of confidence.
Lawful interception targeting a system you host at home is far more likely to begin with agents of your government knocking at your door with a court order.
Maybe you consider this good because you know it’s happening. Maybe they’re going to seize every electronic device you own for forensic imaging while continuing to host your server themselves to capture more data from your users. It’s certainly a different threat model.
That wasn’t the scenario being discussed but even if it were, logistically that is much harder for a government to do (for many reasons) than for them to do it silently with the cooperation of your VPS host.
We arrived at Hetzner by the same path!
I reached out to Hetzner asking whether they’ll comment on the matter. Maybe you can do the same?
https://www.hetzner.com/support-form
With that said, and like others have mentioned, if state-supported coercion of providers is part of your threat model, I think you’ll be better off self-hosting. IMO complying with the laws of one’s jurisdiction is not a malicious act, but a necessity to operate. You may have better luck in Switzerland, but you may also be foregoing some technical features and availability, which may be more important to you/your users than mitigating the off chance your communications are scooped up in a MITM operation.
Yeah, I did ping them, but so far no response. Pretty much everything suggests they’re under a gag order from a government entity.
This was their response
I interpret this to mean that they were legally compelled to MITM the customer.
My current job is mostly overcoming byzantine business processes and bizarre standards set up by people long gone. I don’t think a chatbot will be of much help.
Ha, been there. You might try ELIZA to help unburden yourself.
I feel like Unix is moving extremely slowly but inexorably towards “everything emits a stream of NL JSON” as a solution to the problem of streaming objects.
What’s NL? Does it mean that there is one JSON object for each NewLine?
Yeah, I’d heard it named ndjson, but same deal.
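For anyone unfamiliar with the format, here’s a minimal sketch of what “one JSON object per newline” looks like in practice, using only the Python stdlib (the record fields are made up for illustration):

```python
import io
import json

# Each record is one JSON object per line ("newline-delimited JSON").
records = [{"pid": 1, "cmd": "init"}, {"pid": 2, "cmd": "sshd"}]

# Producer: serialize one object per line. json.dumps never emits a
# raw newline, so the line boundary is an unambiguous record boundary.
stream = io.StringIO()
for rec in records:
    stream.write(json.dumps(rec) + "\n")

# Consumer: parse incrementally, line by line, without buffering the
# whole stream -- the property that makes the format pipeline-friendly.
stream.seek(0)
parsed = [json.loads(line) for line in stream if line.strip()]
print(parsed == records)  # True
```

The appeal for Unix pipelines is exactly that the consumer can start processing after the first newline arrives, instead of waiting for a closing `]` on a giant JSON array.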
My (extremely limited) experimentation with Parallel/curl leads me to believe that you managed to not build this on an eventually consistent datastore, which is more than I can say about a previous coworker’s attempt to build a distributed mutex system :-/
Fun!
It’s implemented on top of sqlite.
Also. What. An eventually consistent mutex? That’s a new one.
It wasn’t supposed to be eventually consistent – but when you choose DynamoDB as your backing store and don’t tell it you need strongly consistent reads, funny things happen when you’re locking and unlocking under load.
Thankfully it was caught before it got anywhere near a production system. By which I mean “we rewrote what he was doing to run on a single system so it didn’t need distributed anything.”
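The failure mode above can be sketched with a toy model – this is not real DynamoDB code, just a stale-read simulation standing in for a lagging replica (real lock implementations also need conditional writes, e.g. DynamoDB’s `attribute_not_exists` condition, on top of `ConsistentRead=True`):

```python
# Toy model of an eventually consistent store: reads may return a
# stale snapshot. A lock built on "read, see it's free, then write"
# breaks under staleness; a strongly consistent read does not.
class Store:
    def __init__(self):
        self.latest = {}   # committed state
        self.stale = {}    # lagging replica snapshot

    def write(self, key, val):
        self.stale = dict(self.latest)  # replica lags by one write
        self.latest[key] = val

    def read(self, key, consistent=False):
        src = self.latest if consistent else self.stale
        return src.get(key)

def try_lock(store, owner, consistent):
    # Naive check-then-act lock acquisition.
    if store.read("lock", consistent=consistent) is None:
        store.write("lock", owner)
        return True
    return False

s = Store()
a = try_lock(s, "A", consistent=False)
b = try_lock(s, "B", consistent=False)  # stale read: lock looks free!
print(a, b)   # both True -> mutual exclusion violated

s2 = Store()
a2 = try_lock(s2, "A", consistent=True)
b2 = try_lock(s2, "B", consistent=True)
print(a2, b2)  # True False -> only one holder
```

The simulation is deterministic where the real bug was load-dependent, but the mechanism is the same: the second client’s read reflects a world where the first client’s write hasn’t landed yet.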
OpenTTD is a game I want to love, but I always end up ragequitting after what I think is a carefully designed rail line turns into spaghetti deadlock.
But maybe this time will be different…
The balance is a bit off. Planes are far more profitable than they should be (and more profitable than anything else) because the price paid to transport things is proportional to distance but you don’t need to build tracks or similar for planes. I generally end up with a very profitable airline subsidising my vanity train set.
Feels like they could fix that by having lots of heavy things or lots of low value bulky things (enough that you can’t run enough planes for them without loads of runways). Either format is a good match for trains and barges and a poor match for planes and kinda bad for lorries (trucks).
As I recall, the function for computing transport cost in OpenTTD (inherited from Transport Tycoon) is that goods have a fixed multiplier per unit (fixed size), and this is multiplied by the distance and divided by the time. I think that you’d need to make the time-decay factor different for different goods, so iron can take ages to deliver, for example, but not passengers (time is weird in the game: train journeys often take days or weeks, boats can take a year). Boats currently do well with things like oil, where there’s just too much to carry by air, but it’s possible to ship all of the more lucrative cargo by air and make far more money than you can with trains.
I actually let OpenTTD take over my evening and spent quite a while playing, and you’re sort of right! Money for goods is base cost × amount × Manhattan distance between source and destination × a decay factor based on speed and the time sensitivity of the good.
You can read the rules here: https://wiki.openttd.org/en/Manual/Game%20Mechanics/Cargo%20income
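As a rough illustration of that formula (base cost × amount × Manhattan distance × time decay) – note the breakpoints and rates here are made up for the sketch, not the game’s actual payment tables, which are on the wiki:

```python
def cargo_income(base_rate, amount, src, dst, days_in_transit,
                 days_at_full, days_at_half):
    # Manhattan distance between source and destination tiles.
    dist = abs(src[0] - dst[0]) + abs(src[1] - dst[1])
    # Toy time-decay: full payment up to `days_at_full`, then the
    # multiplier shrinks linearly, bottoming out at a floor. The real
    # game uses per-cargo decay tables; these breakpoints are invented.
    if days_in_transit <= days_at_full:
        decay = 1.0
    elif days_in_transit <= days_at_full + days_at_half:
        decay = 1.0 - 0.5 * (days_in_transit - days_at_full) / days_at_half
    else:
        decay = 0.5  # floor: slow deliveries still pay something
    return base_rate * amount * dist * decay

# Same haul delivered quickly vs. slowly: the slow run earns less,
# which is the mechanism that punishes spaghetti rail networks.
fast = cargo_income(10, 100, (0, 0), (30, 10), days_in_transit=5,
                    days_at_full=10, days_at_half=20)
slow = cargo_income(10, 100, (0, 0), (30, 10), days_in_transit=25,
                    days_at_full=10, days_at_half=20)
print(fast, slow)
```

This also shows why planes dominate: the distance term pays the same regardless of mode, but planes keep the decay factor near 1.0 and never pay for track.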
I think the most confusing part of the game for me is the accelerated timespan. It makes it very hard to work out whether your goods will arrive on time because the units are all weird (it takes days or months for a large train to be loaded).
Hah! My high school’s IT department was unable to disable this for, well, reasons? Instead, the network admin would watch for the “offender” to strike and phone the teacher in the room where it came from, and have them yell at the class about misusing school computer resources and disrupting classes. Also, what amazes me to type now – none of the students had their own logins – it was one shared username/password, and as a result no one ever logged out.
Because of this highly secure setup, ahem, some of us took to creating batch scripts that we could load and that would sleep until the teacher had their planning period, then wake up and periodically send random messages across the school.
Why Twitter didn’t go down … yet
I was hoping for some insights into the failure modes and timelines to expect from losing so many staff.
This thread https://twitter.com/atax1a/status/1594880931042824192 has some interesting peeks into some of the infrastructure underneath Mesos / Aurora.
I also liked this thread a lot: https://twitter.com/mosquitocapital/status/1593541177965678592
And yesterday it was possible to post entire movies (in few-minute snippets) in Twitter, because the copyright enforcement systems were broken.
That tweet got deleted. At this point it’s probably better to archive tweets and post links to the archives.
It wasn’t deleted - there’s an ongoing problem over the last few days where the first tweet of a thread doesn’t load on the thread view page. The original text of the linked tweet is this:
It’s been a problem over the last few weeks at least. Just refresh the page a few times and you should eventually see the tweet. Rather than the whole site going down at once, I expect these kinds of weird problems will start to appear and degrade Twitter slowly over time. Major props to their former infrastructure engineers/SREs for making the site resilient to the layoffs/firings though!
Not only to the infra/SREs but also to the backend engineers. Much of the built-in fault-tolerance of the stack was created by them.
https://threadreaderapp.com/thread/1593541177965678592.html
I have this URL archived too, but it seems to still be working.
hm, most likely someone would have a mastodon bridge following these accounts RT-ing :-)
FWIW, I just tried to get my Twitter archive downloaded and I never received an SMS from the SMS verifier. I switched to verify by email and it went instantly. I also still haven’t received the archive itself. God knows how long that queue is…
I think it took about 2 or 3 days for my archive to arrive last week.
oh, so they still run mesos? thought everyone had by now switched to k8s…
I used to help run a fairly decent sized Mesos cluster – I think at our pre-AWS peak we were around 90-130 physical nodes.
It was great! It was the definition of infrastructure that “just ticked along”. So it got neglected, and people forgot about how to properly manage it. It just kept on keeping on with minimal to almost no oversight for many months while we got distracted with “business priorities”, and we all kinda forgot it was a thing.
Then one day one of our aggregator switches flaked out and all of a sudden our nice cluster ended up partitioned … two, or three ways? It’s been years, so the details are fuzzy, but I do remember
It was a very painful lesson. As someone on one of these twitter threads posted, “asking ‘why hasn’t Twitter gone down yet?’ is like shooting the pilot and then saying they weren’t needed because the plane hasn’t crashed yet”.
Twitter is well beyond the scale where k8s is a plausible option.
I wonder what is the largest company that primarily runs on k8s. The biggest I can think of is Target.
There’s no limit to the size of company that can run on kube if you can run things across multiple clusters. The problem comes if you routinely have clusters get big rather than staying small.
Alibaba, probably.
Oh, I didn’t realize that was their main platform.
Chick-fil-a, perhaps..
I was thinking about that too, but I’m guessing that CFA has a fraction of the traffic of Target (especially this time of year). Love those sandwiches though…
Had they done so, I bet they’d already be down :D
I work at a shop with about 1k containers being managed by Mesos and it is a breath of fresh air after having been forced to use k8s. There is so much less cognitive overhead to diagnosing operational issues. That said, I think any Mesos ecosystem will be only as good as the tooling written around it. Setting up load balancing, for instance… just as easy to get wrong as right.
I wouldn’t have deleted that key on their behalf. If it was running some kind of critical service it would now be failing, and services might be at risk, services potentially critical to human life. It’s also Unauthorized Access to a Computer and you shouldn’t trust a corporation to not take legal action against you when it has the opportunity.
The blog appears to be run by a British citizen who lives in London, so short of the US govt getting involved, there isn’t likely much Infosys could do, even if they got super duper upset about it.
US laws do not apply outside of the US, despite the US not always acting like that’s the case.
That said, I agree it wasn’t the best action they could have done, but hindsight is 20/20 and all.
If you hack into something that’s hosted on US soil, or route traffic across US soil to do it, you can bet US law applies. The only question is whether the country you’re currently in will extradite you.
Or, more simply: laws still apply just fine on the internet and you probably rely on that being true, whether you realize it or not.
I completely agree that US laws apply on US soil, obviously they do. They just don’t apply outside the US at all, unless the other countries want them to apply. It’s the treaties and the UK’s willingness that matter here. It’s hard to say how the UK would handle this particular case, assuming the US govt got upset enough to bother the UK about it.
My comment that you are quoting was more about: The US govt can generally bully their way into whatever they want in most places on the planet, since they currently have the largest military and economy around.
The current UK prime minister is the son-in-law of the founder of Infosys, so I don’t think it would take too much to inflict pain on the author of this blog.
Wow, that’s unfortunate for the OP. Though at the rate the UK is currently going through prime ministers, that may change tomorrow.
My first reaction would be “surely they wouldn’t do anything so petty?” but then I remember who is running the UK at the moment and now I’m not so sure.
Any type of network or equipment that’s on US soil is, well, on US soil. Any sort of entity you affect that’s on US soil is on US soil. Lots of things are actually on US soil.
“But the person sending the bytes over the wire wasn’t in the US” doesn’t change that. At best it just means now two countries can each carry out a prosecution, and the person hopes the one they’re currently in won’t do that and won’t extradite.
This isn’t some sort of completely new unheard-of never-before-considered untested thing, either. Extradition treaties, and other procedures for handling people who think they’ll evade punishment by being on the other side of a border, is something that literally goes back millennia.
The only part I disagree with is: “At best it just means now two countries can each carry out a prosecution”.
This assumes the action is illegal in both countries. In this case, where the OP deleted the AWS key, that’s possible, but I wouldn’t say it’s certain. That’s for lawyers to fight over, if it ever gets that far.
US law does not apply outside the US, some Americans just think it does.
If what you do passes through wires, networks, servers, routers, anything on US soil, then it was not “outside the US”.
Like I said to the other person: you probably, whether you realize/like it or not, rely on the fact that wherever you reside can in fact enforce its laws in this fashion, regardless of which country you reside in.
If this comes as a surprise to anyone, consider the story of CSE TransTel, a telecom company, and its parent company CSE Global Limited, both based in Singapore. CSE TransTel signed a contract to install communications equipment inside Iran, and paid purchase orders to Iranian companies to support delivery & installation of their equipment. They made their payments out of a Singapore-based bank.
What’s the problem, you ask? They made payments out of an account denominated in US dollars. These payments were processed through the US financial system: as a result, the US government argued that the actions of an entirely foreign company using entirely foreign banks resulted in financial institutions in the US handling payments to Iranian companies, which violates sanctions against Iran. This created a US nexus that made otherwise totally legal actions impermissible under US laws.
CSE TransTel settled with OFAC for twelve million dollars. Why, when they’re based in Singapore? Because if they hadn’t, they’d have ended up listed as a specially designated national, and any US company or person would be legally barred from working with them or risk OFAC sanctions of their own.
The US legal system and enforcement regimes take a very broad view of jurisdiction, and any company – web hosting, infrastructure, payments – with a US connection is legally required to fall in line.
From my other comment: The US govt can generally bully their way into whatever they want in most places on the planet, since they currently have the largest military and economy around.
Here CSE TransTel had to have known it was a bad idea to sell to Iran, since even their own government is less than pleased with Iran’s nuclear weapons program. They probably thought about it, and figured it was worth trying, got caught and eventually gave in, knowing their own govt wasn’t really on their side either.
I’m not necessarily against the US govt’s bullying tactics; it helps the world just get stuff done sometimes. But it is a power they can overuse, and arguably have at times.
You seem to have a very specific political axe to grind, but it’s not applicable here.
To see why, imagine there’s a building near an international border, and someone on the other side of the border throws a rock across and breaks a window in the building. The country the building was in can call it a violation of their laws, even though the person who threw the rock wasn’t on their soil. Whether the person who threw the rock will actually be punished by the country the building was in depends on the existence and details of extradition treaties, but nobody should be surprised if that person gets extradited to face consequences in the country where the building was.
The internet didn’t change anything about this. If you send bits over wires, and some of those wires are in another country, that country’s laws apply. It’s not “bullying” or some sort of new, unique, just-made-up recent thing. Like I already said in another reply, we’re talking about things that political and legal systems have been dealing with for literally thousands of years at this point. Rather: a lot of people hoped and wished and wanted the internet to somehow provide a new, never-before-seen type of extraterritorial place where those political and legal systems couldn’t reach, but their wanting and wishing didn’t and hasn’t made it so. Instead, long-existing frameworks have been adapted as needed, and that’s that.
No? Perhaps you’re misunderstanding what I’m saying? I’m a little confused by this comment.
Anyways, The US and the UK have an extradition treaty, and the UK government is happy to publish it here: https://www.gov.uk/government/publications/extradition-treaty-between-the-uk-and-the-usa-with-exchange-of-notes
I’m not currently an international lawyer and I haven’t read the whole thing, but skimming through it, it seems to say that, in general, if an act is against the law in both countries, then they will extradite people in either direction. Which seems totally reasonable to me.
Nowhere in there does it say that US laws apply in the UK, as that is straight up ridiculous. An easy example of how ridiculous that is: Guns are generally illegal in the UK and are generally not illegal in the US.
Over and over you single out one and only one country and talk about “bullying”.
The issue here is you are the one who is trying to argue that this is somehow “US law applying in the UK”. Not me.
I’ve explained to you multiple times now that it is an extremely normal and banal and accepted and uncontroversial idea that you can break the law of a country by committing acts that involve or have effect on entities or infrastructure in that country, even if your physical body was not physically within that country’s borders at the time.
But this is not the same as saying a particular country’s laws apply everywhere – thus the example of throwing a rock over the border and causing damage on the other side, which hopefully is a pretty clear and common-sense example of the underlying principle.
Would s/bullying/interfering/g be a better word for you? The US is far from the only country that engages in this type of behaviour. Generally it’s larger countries acting on smaller ones; that the US is the largest just makes it more effective at it.
Then I apologize for my part in our miscommunication. Though I find it very confusing that you think my position is that US law applies in the UK. Clearly we don’t seem to be communicating well during this course of conversation. With such gross miscommunication, it’s probably easier to just stop. Especially since the stakes for you and me are at worst some feelings being hurt. Have a pleasant and wonderful weekend!
I mean, it’s sketchy, but it does seem to be a key used for development, and which had been inactive for a whole year. Granted, anyone who screws up by issuing AdministratorAccess keys to individual developers might also run some critical service under them, but given the context (running some statistical models over externally-hosted records from several sources) it appears rather unlikely that it was used to run anything critical to human life. The key was, after all, used by Infosys to run things at their end, not by JH.
I don’t wanna defend what the author did, and I’m not sure I would’ve done it that way either, but I do think it was quite safe to do from a technical standpoint. From a legal standpoint, based on my experience working with (and, sadly for my mental sanity, occasionally in) outsourcing companies, I doubt there is anyone at Infosys’ end who can a) read logs and b) is not on the verge of ragequitting, so there’s probably no one to notify the Legal team about it :-).
It might seem that way, but there was no way for the author to know. They should have reported it to Infosys and Johns Hopkins.
As it is, the author has potentially harmed people and/or incurred liability.
It seems like the author ended up doing that precisely because they couldn’t contact either JH or Infosys. There’s obviously no way to verify that, but I have been at the receiving end of the problem. Someone went public with several issues in a program that the company I was working for sold. The higher-ups got very butthurt, nasty press release came out…
…turned out the researcher had tried to contact them through several separate channels, but messages got ignored each time because they weren’t read by anyone who actually understood what was being said to them. One of the official channels for reporting security issues was mostly unused, because people usually went through unofficial channels. IIRC the people who supposedly monitored that channel weren’t even working there anymore. Dude ended up going public because he thought it was likely the only way to actually prevent anyone from getting harmed, despite incurring liability.
Were there any legal consequences?
AFAIK no, and the whole thing was dropped like a very hot potato the moment people realized there had been as much as one attempt at responsible disclosure. I mean it’s not 1992, companies are legitimately expected to make this no more complicated than a couple of Google searches and an email.
Management is rarely inclined to litigate when there’s a looming PR disaster in it. A lawsuit moves slowly, even when coaxed with money and connections, whereas social media and the press operate on an hourly timetable. Realistically, there’s barely anything to gain from a lawsuit on a matter like this, and potentially a lot to lose in terms of PR and community relations – they’d only move forward if someone in the legal team really needed to prove themselves. Even the financial incentives are practically zero; the kind of sum they could recover is probably the sort of amount that companies like Infosys regularly write off for government bribes.
That’s my view as well. Infosys would be very stupid to raise a legal stink about this, as it would shine a light at their alleged incompetence at deploying code and responding to disclosures.
You’re right, but the flip side is reporting it properly, having them not do anything about it, and then a bad actor finds and uses it. Not much to recommend one over the other imo.
From what I’ve seen, you may run into careless business associates / sub-associates, but covered entities are often very wary of the risk around HIPAA violations. It sounded like the author attempted to report to Infosys directly so I’m not surprised he hit a wall.
So again, if you find PHI: typing "Johns Hopkins Hospital" "general counsel" into your favorite search engine took me straight to their legal department, including direct contacts to HIPAA lawyers. Even without specialist lawyers, just get in touch with someone in their legal/leadership chain. The magic happens when you say “I’d like to report a HIPAA violation” to a human, preferably a human on a legal team.
And if you truly can’t get anyone to act, HHS has a process to report complaints directly to them. It’ll likely take longer for them to act, but they have broad leeway to sanction bad actors and will get the attention of the offender.
On the other hand, people not living in the USA might not be so intimately familiar with USA laws and compliance culture.
All access to remote computers is unauthorized. Maybe we should stop allowing corporations to hurt themselves and others, even if it means violating their privacy.
Ah, bodyshops gonna bodyshop.
As someone who works in the healthcare space – here’s your daily HIPAA primer.
In this case, HIPAA applies to Johns Hopkins as the covered entity. They’ll have a business associate agreement (BAA) in place with Infosys that allows Infosys to receive protected health information (PHI) and do stuff with it that the covered entity requests - ML stuff, in this case. If you ever stumble over PHI in the wild, while the business associate can be liable on their own, it’s often best to start with the covered entity. Look for their compliance/privacy/legal teams first – good infosec teams know what to do, but a privacy officer always does, even if they are a bad privacy officer.
When the covered entity hears about this, they’ll freak out, lock things down, and investigate to decide if they need to make a notification of a breach. There are flowcharts that help[0], but step zero - which is often not on them - is always to ask “did we actually disclose anything?” In this case, if Infosys has IAM/S3 access logs that can show nobody saw those records, they were never disclosed – the tree fell but nobody was in the woods to hear it.
If, however, Infosys can’t prove that? You have to assume a breach happened – this means sending a notification to every individual in those files & HHS no later than annually. If there were >500 people in the file? You additionally have to send a press release to local media & HHS within 60 days so you can appear on the wall of shame. Infosys & JHU get to fight out who pays any penalty, but assuming a BAA is in place it should roll to Infosys.
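The decision steps from the last two comments can be encoded as a toy function – this is a drastic simplification of the HHS breach-notification rule, for illustration only, not compliance advice:

```python
def breach_response(access_logs_prove_no_view, individuals_affected):
    # Step zero, per the comment above: if access logs prove nobody
    # viewed the records, the data was never disclosed.
    if access_logs_prove_no_view:
        return ["no notification required"]
    # Otherwise assume a breach happened.
    steps = ["notify each affected individual", "notify HHS"]
    if individuals_affected > 500:
        # Large breaches additionally require media notice within 60 days.
        steps.append("press release to local media within 60 days")
    return steps

print(breach_response(True, 10_000))
print(breach_response(False, 10_000))
```

The real rule has more branches (timing of HHS notice, substitute notice when contact info is missing, etc.), but the >500 threshold and the “prove it was never viewed” off-ramp are the two hinges described above.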
[0]: this is probably not a great flowchart because HHS dropped the “risk of harm” standard a while back, but it’s one of the ones I can find that’s public so it’s an example at least.
I’ve heard it argued that this is actually exceptionally dangerous. If you’re under the thumb of a regime that is willing to use torture against dissidents, and a dissident is found to use Shufflecake, how do you get the beatings to stop?
I won’t call this a nothingburger but there are a few things that make it look a bit less spicy for people running servers:
It’s still obviously bad, but it doesn’t seem to be “internet crippling”
Well said! While there was a lot of anxiety around this issue since it was marked CRITICAL (à la Heartbleed), the OpenSSL team did post a rationale on their blog (see the next response by @freddyb) for the severity downgrade, which aligns with your explanation as well.
I don’t understand why the OpenSSL maintainers didn’t downgrade the pre-announcement. People cleared their diaries for this; a post-pre-announcement saying “whoops it’s only a high, you can stop preparing for armageddon” might have saved thousands of hours, even if it only came on Monday.
Aren’t the OpenSSL maintainers basically one guy fulltime and some part-timers? Why are they expected to perform at the level of a fully-funded security response team? If they can save thousands of hours, shouldn’t they be funded accordingly?
I mean, it’s absolutely true in general that people making money out of FOSS should fund its development and maintenance, and to the extent this isn’t already true of OpenSSL, of course I think it should be fixed. But I think it’s wrong to couch everything in terms of money. Voluntary roles exist in lots of communities, and their voluntary nature doesn’t negate the responsibility that comes with them. If it’s wrong for big tech to profit by extracting free labour out of such social contracts—and I do think it is—it doesn’t seem much better to just assimilate all socially mediated labour into capitalism and have done.
But I also just think that if one makes a mistake that wastes lots of people’s time, it’s a nice gesture to try to give them that time back when the mistake is realised.
I wonder what you would say if your employer responded like this when you asked why you didn’t get your paycheck?
Uh, what responsibility is that exactly? The OpenSSL people have gone above and beyond to handle this security issue and you’re complaining that they’re not fulfilling their ‘voluntary responsibility’ because they didn’t save a few hours of your time? Do you realize how entitled and churlish you sound? Do me a favour, don’t talk about Open Source until you develop a sense of gratitude for the immensity of the benefits you’re getting from the maintainers every day. And I’m not even talking about paying them money, since that seems too much to ask! Just some basic human respect and gratitude.
It’s not my time personally that’s at issue here. I respect that the OpenSSL people have handled this security issue, and that they’ve stepped up to maintain an important thing. I do think that position comes with a responsibility not to raise false alarms if possible.
The OpenSSL maintainers are, like us, members of a community in which we all contribute what we can. I don’t think that gratitude and criticism are mutually exclusive, but I am sorry if I seemed too complainy, and I appreciate the people in question were probably operating under no small amount of pressure.
People are very quick to assign new responsibilities to people who give them free stuff. Time and again I find this pretty incredible. Like, here are some people making a public good on their own dime. And users feel so incredibly comfortable jumping in with critiques. They didn’t do this, they should have done it like that. Pretty easy to sit back, do nothing, and complain about others’ hard work.
I heard third-hand that the downgrade was done a couple of hours before the release. Wouldn't have been that useful if that's indeed the case.
Well, a lot of corporate networks have internal trust chains and custom cipher suites, and often you have to skip validation to get past them.
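For what it's worth, the better escape hatch for an internal chain is usually to load the corporate root into the trust store rather than turning validation off entirely. A minimal Python sketch of the two options (the CA file path is hypothetical):

```python
import ssl

# The common shortcut: skip validation entirely.
# check_hostname must be disabled before verify_mode can be CERT_NONE.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE  # this is the "skip validation" step

# The better option: keep full validation, but trust the internal root.
secure = ssl.create_default_context()
# secure.load_verify_locations(cafile="internal-root.pem")  # hypothetical path

print(secure.verify_mode == ssl.CERT_REQUIRED)  # -> True
```

The insecure context will happily connect to anything, which is exactly why the "skip validation" habit is worth breaking.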
Still not Internet crippling though.
What about renting physical servers? Does anyone of this size do that anymore? Is it any cheaper than renting servers in AWS?
The thing I don’t like is thinking about disk hardware and drivers and that sort of thing. I kind of like the VM abstraction, but without VMs maybe :)
Of course some people want control over the disk, especially for databases …
You can certainly rent bare metal; that's most of OVH's and Hetzner's business, and both of them do it really cheaply. Note that Hetzner only offers cloud servers in the US; their bare-metal servers are Europe-only.
I find OVH's stuff to be pretty janky, so I'd hesitate to do much at scale there. Equinix Metal is a pretty expensive but probably high-quality option (I haven't tried it).
There still exist businesses everywhere on the spectrum, from an empty room to put your servers in to "here is a server with an OS on it."
And even with your VM abstraction, someone has to worry about the disks and drivers and such. The decision is whether you want to pay someone to do that for you or do it yourself.
For the last couple of years I rented a single bare-metal server from OVH, installed FreeBSD on it, and just used it for hobby stuff. I used ZFS for the filesystem and volume management, and bhyve for VMs. I ran several VMs on it, various Linux distributions and OpenBSD. It worked great, with very few issues. The server cost me about $60/month.
But I eventually gave it up and moved all that stuff to a combination of Hetzner Cloud and Vultr because I didn't want to deal with the maintenance.
In my experience it is cheaper than renting from AWS once you need more than a certain threshold. That threshold has stuck, rather consistently, at “about half a rack” from where I’ve observed it over the past 10 years or so.
OK interesting …
So isn't the problem that, for whatever reason, the market doesn't have consistent pricing for physical servers? It might be LESS than AWS, but I think it's also sometimes "call us and we'll negotiate a price"?
I can see how that would turn off customers
Looking at a provider that kelp mentioned, they do have pricing for on demand: https://metal.equinix.com/product/on-demand/
And then it switches to “call us”: https://metal.equinix.com/product/reserved/
I’m probably not their customer, but that sorta annoys me …
Yeah basically a lot of stuff in the datacenter and datacenter server/networking equipment space is “call us” for pricing.
The next step past Equinix metal is to buy colocation space and put your own servers in it.
And the discounts get stupid steep as your spend gets larger. Like I’ve seen 60-70% off list price for Juniper networking equipment. But this is when you’re spending millions of dollars a year.
When you’re buying a rack at a time from HPE or Dell at $250K - $500K/rack (numbers from when I was doing this 5+ years ago) you can get them to knock off 20-40% or something.
It can be pretty annoying because you have to go through this whole negotiation and you’ll greatly overpay unless you have experience or know people with experience to know what an actual fair price is.
At enough scale you have a whole procurement team (I've hired and built one of these teams before) whose whole job is to negotiate with your vendors to get the best prices over the long term.
But if you’re doing much smaller scale, you can often get pretty good deals on one off servers from ProVantage or TigerDirect, but the prices jump around a LOT. It’s kind of like buying a Thinkpad direct from Lenovo where they are constantly having sales, but if you don’t hit the right sale, you’ll greatly overpay.
Overall price transparency is not there.
But this whole enterprise-discount thing also exists with all the big cloud providers, though you only get into that at tens of millions of dollars per year in spend. With AWS you can negotiate a discount across almost all their products, and deeper discounts on some. I've seen up to 30% off On-Demand EC2 instances. Other things, like EBS, they really won't discount at all; I think they operate EBS basically at cost. And to get discounts on S3 you have to be storing many, many PB.
AWS definitely has an interesting model that I’ve observed from both sides. In the small, they seem to like to funnel you into their EDP program that gives you a flat percentage off in exchange for an annual spend commitment. IME they like larger multi-year commitments as well, so you’ll get a better discount if you spend $6m/3 years than if you do three individual EDPs for $2m. But even then, they’ll start talking about enterprise discounts when you are willing to commit around $500k of spend, just don’t expect a big percentage ;)
When I once worked for a company with a very large AWS cloud spend - think “enough to buy time during Andy Jassy’s big keynote” - EDPs stopped being flat and became much more customized. I remember deep discounts to bandwidth, which makes sense because that’s so high margin for them.
This is a key bit that people don't realize. When I worked for a large ISP and was helping spec a brand-new deployment to run OpenStack/Kubernetes, I was negotiating the list price down from $$MM to $MM, mostly by putting out requests for bids to the various entities that sell the gear (knowing they won't all provide the same exact specs/CPUs/hard drives), then comparing and contrasting, taking the cheapest four, and making them compete for the business.
But it's a lot of time and effort up front, and a ton of money has to be handed over up front. With the cloud that money is spent over time rather than front-loaded.
I share your annoyance.
I think the threshold has been pretty consistent if you think of it in terms of what percentage of a rack they need to sell… people who rent servers out by the “unit” drop below the AWS prices once you occupy around half a rack. And yes, I’ve had to call them to get that pricing.
It’s a little annoying to have to call.
And furthermore, things can be cheaper at different points in a hardware cycle, so it’s a moving target.
I think some of it is down to people who peddle VMs being able to charge per “compute unit” but people who peddle servers (or fractions of servers) not being able to go quite that granular.
If you rent in “server” units, you need to be prepared to constantly renegotiate.
There are certainly businesses which are built on the idea that they are value-added datacenters, where the value is typically hardware leasing and maintenance, fast replacement of same, networking, and perhaps some managed services (NTP, DHCP, DNS, VLANs, firewalls, load balancers, proxies…)
“Single VM on a physical host” is a thing I’ve seen (for similar reasons you mention: a common/standard abstraction), not sure how often it’s used at this sort of scale though.
I think you trade one thing for another; you have to think about other things instead. If you want, you could also run your own VMs, of course, but honestly you just have an OS there and that's it. If you want to not think about storage, you can just run MinIO or SeaweedFS, etc. on some big storage server and add more as you go. And if you rent a dedicated server and it really does have a disk failure (the disks are usually in a RAID anyway), you just have the disk replaced and use your failover machine, the same way you'd use your failover if a compute instance started having issues.
It's not like AWS and others don't have errors; you just see them differently, and sometimes Amazon notices before you do and just starts that VM up somewhere else (that works in an automated fashion if you run your own VMs too; it's widely implemented, "old" technology). I see all of this as server errors, whether on a rented physical machine or a virtual machine; in both situations I'd open some form of ticket and have the failover handle it. It's not like cloud instances are magically immune, and that stuff often breaks the abstraction as well. Providers might detect it, but there's still a certain percentage of VMs/instances becoming unavailable, or worse, half-unavailable: clearly still doing something despite having been replaced. In my opinion that's a lot more annoying than knowing about a hardware problem, because with hardware issues you know how to react, while with instances doing strange things it can be very hard to verify anything at that layer of abstraction, and eventually you'll be passed along through AWS support. If you're lucky, of course, it's enough to just replace it, but that's pretty much an option with hardware as well, and dedicated hosting providers certainly go the route of automating all these things and are pretty much there. There are hardware/bare-metal clouds, and to be fair they're close but still lag behind. Here I think in terms of infrastructure as code; I think that's slowly coming to physical machines as well under the "bare-metal cloud" banner. It just wasn't the focus so much. I really hope hosters keep pushing it and customers use and demand it. AWS datacenters becoming pretty much equivalent to "the internet" is scary.
But if you use compute instances you created yourself (rather than something at a far higher level, in the realm of Heroku, Fly.io, etc.), it doesn't make a huge difference. Same problem, different level of abstraction. It's probably noticed slightly more on physical machines, because the automated VM failover mentioned above only works in specific cases. In either case you'll need someone who knows how the server works, be it virtual or physical.
What an excellent resource. I’ve been using Unbound for advertisement blocking for a while but have not been happy with the blocklist project lists I’ve seen recommended – I’d found them to be way overbroad.
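For context, this kind of blocking in Unbound typically boils down to a pile of local-zone entries; a minimal sketch (the domains are placeholders):

```
server:
    # Answer NXDOMAIN for anything at or below these names.
    local-zone: "ads.example.com." always_nxdomain
    local-zone: "tracker.example.net." always_nxdomain
```

Blocklist projects generate thousands of lines in this shape, which is where the overbreadth complaints tend to come from.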
One thing I worried about was this warning in unbound.conf(5)
While I suspect the increased reporting will only help me, I’m worried about this warning. Anyone have experience here?
I’m absolutely not convinced by the Google Earth argument:
As long as that server isn't compromised, or a rogue DNS server doesn't direct clients to a host controlled by attackers, or… PDF viewers also nominally only "take input from the user," but I'd run out of fingers trying to count the arbitrary-code-execution vulnerabilities in PDF clients.
Anything that can take arbitrary input must never trust that input blindly.
Note that the author worked on the Google Earth team and is speaking from experience.
I agree you should not trust input blindly, but it is a spectrum, isn’t it? I hope we can agree that memory safety is less important in Google Earth compared to Google Chrome. That’s all the author is saying.
I think by “take input from the user” he meant GUI events like click/drag/keystroke, which we can agree are pretty safe. (Mostly. I just remembered that “hey, type this short gibberish string into your terminal” meme that triggers a fork bomb…)
“Open this random PDF file” is technically a user event, but that event comes with a big payload of untrusted 3rd party data, so yeah, vulns galore.
I feel like Earth is a bad example, then. Google Earth lets you read/write your GIS data as a KMZ file. KMZ files are zipped(!) XML(!!) documents – that’s quite a bit of surface area for a malicious payload to work with.
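A quick sketch of how thin that wrapper is, using only Python's stdlib (the KML payload here is invented; real code parsing untrusted KMZ would want a hardened XML parser such as defusedxml, since xml.etree is not hardened against malicious input):

```python
import io
import xml.etree.ElementTree as ET
import zipfile

# A KMZ is just a zip archive whose main entry is a KML (XML) document,
# so a parser faces both zip-level and XML-level attack surface.
kml = b"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark><name>demo</name></Placemark>
</kml>"""

# Build a tiny in-memory KMZ...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("doc.kml", kml)

# ...then read it back the way a KMZ consumer would.
with zipfile.ZipFile(buf) as z:
    root = ET.fromstring(z.read("doc.kml"))

name = root.find(".//{http://www.opengis.net/kml/2.2}name").text
print(name)  # -> demo
```

Every layer here (zip directory parsing, decompression, XML parsing, namespace handling) is a place for a crafted file to poke at.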
Keep in mind Earth is sandboxed, since it runs on the web (and Android and iOS), so there's just not much damage that can be done by a malicious KMZ.
😬 At least there probably aren’t too many people using that feature. Unless they start getting urgent emails from “the IT department” telling them “you need to update your GIS data in Google Earth right away!!! Download this .kmz file…”
But some of the people who do use that are very juicy targets who make heavy use of features like that, like people mapping human rights abuses or wars. It’s not like this software is just a toy that doesn’t see serious use.
Earth passed some pretty intense security reviews before it was launched and before every major feature release, and those security teams know their stuff.
Likely a big help was how well sandboxing works in WebAssembly, Android, and iOS.
I would take a step back and ask why you are making the code open source in the first place. If you want just to share your work with others, then pick whatever you want. As long as you own all of the intellectual property related to the code, you can pick whatever license you desire.
However, if you want your project to be adopted within a corporate environment, you can't expect things outside their standard set of licenses to get a lot of traction. That set was picked by lawyers to reduce the risk of the company having future issues with clean intellectual-property rights to their product. Even if the project was adopted by a company before it was big enough to have lawyers who cared, one day the company will grow, get acquired, or IPO, and there will be a team of people running license checks for anything outside the approved set. That is especially true for relatively unknown licenses like the one in this case. At that point, they're likely going to stop engineering work to replace the affected components, to, once again, reduce risk.
Here is a hypothetical. A company adopts this component with the license as it was; they get acquired by a large, multinational public company. There is not a lawyer who would read this license and agree to run down every aspect of it and ensure they're complying with it. Some parts are easy, but many are vague enough to be a pain. So instead, they tell engineering to yeet it from the product.
Given all of that, to answer your prompt, you don’t. Companies are not taking a risk on small open-source components. If you want to get the Hippocratic License added to the set of approved licenses, it is a Sisyphean effort. The only way I see it happening is if your project gets to the level of something like Kubernetes or Linux, which (in a catch) often doesn’t happen without corporate support.
why open-source it? clearly, to provide some benefit. it’s a useful library.
i personally don’t care a great deal about adoption; what i do care about is “good use”. i personally don’t want to support the military or fossil fuel companies, say. just like i wouldn’t work at those companies.
i'm curious to gauge people's views about expressing such sentiments via licenses. it seems like the hippocratic license - https://firstdonoharm.dev/ - is a very clear way to do this; yet it seems to be met with quite some anxiety by people who think tech should somehow be "neutral". it's long been shown that neutrality only rewards the privileged; to make social change one needs to step out, somehow.
so my question is, as a tech community at large, do we just completely give up on licenses (aside from the standard few)? or is there some room to innovate; some way to create social change for ourselves, our users, and the broader community? and if so, what is that mechanism?
I’ll ask it a different way. In an ideal world, would a company change its policies to adopt your open source software? If you want to change corporate governance, I don’t think you do it with progressive open source licenses. No engineering leader is going to go to a board and ask them to change broad policy so they can use an open source library.
and let me ask you in a different way - what would make them change?
IMO, probably only government regulation and popular opinion.
A plurality of US states – Delaware (the important one for corporate governance!) included – allow corporations to incorporate or reincorporate as a public benefit corporation. It’s conceivable that a corporation could be subject to enough pressure by its employees and shareholders that it would reincorporate as a B corporation.
But while I think a niche could exist in B corporations for software licensed under the Hippocratic license & similar, it’s important to not mix cause & effect: your Hippocratic licensed software may be eligible for selection by a company because they chose to become a B corp, but it strikes me as exceptionally unlikely that a company will ever become a B corp to use your Hippocratic licensed software.
how is B-corp and the license even related?
i.e. we're just talking about a simple license here, where the terms are of course only enforceable through some (hypothetical) lawsuit; i.e. the license really just expresses some notion of personal preference, enforceable only if i feel like suing random companies that use it.
maybe one thing i could point out is the difference between a code of conduct and a license. we all feel (somewhat?) comfortable with a code of conduct expressing behaviour wanted in our spaces; why not licenses for those same desires?
Corporate governance seems like the thing being discussed here. You hope to impact governance through clauses in a license. However, governance is not limited to the time when you decide to sue some companies. Companies are bound to various agreements which require them to make some attempt to mitigate risk so that they can achieve the outcomes that the owners desire. The result is that they pick and choose which risks they want to take on by limiting the number of licenses they support and the scope of these licenses.
Regular corporations (and, I suspect B-corps too) are unlikely to want to increase the number of risks they are dealing with by using software with the Hippocratic license. We already know that many companies rule out GPL and derivative licenses entirely just to limit their risk. Some will pick and choose, but only when they have resource to review and fit it into their business.
Above I used terms like "various agreements" because I don't have the time to write at the level of detail I'd like to. Agreements come in many forms, and we care most about the explicit ones, which are written as contracts. Some agreements are more implicit and, while still important, I'm ignoring them to simplify. Agreements include but aren't limited to:
For your license to succeed, you need to navigate all of these agreements. A license like MIT is relatively compatible because it’s limited in scope.
i see
i mean, suppose you are a regular developer living your life, and you feel like sharing code. clearly, i don’t want to engage at the level you mention with anyone who uses the code.
licenses seem like a reasonable way, no? or no. would you suggest there is no way? we should just give up and MIT everything?
There is no way to achieve what you desire to any great extent with your approach. The trade-offs are for you to decide.
I would posit that most people don’t want to have relationships based on the requirements of the license you put forth. If you want to define your relationships and engagement through that license for your code, or companies you run, then that’s 100% fine. Many types of small communities can be sustained with hard work.
When you go in that direction don’t expect other people to reciprocate in various ways that they can in the open source world through code use, testing, bug reporting, doc writing, etc. If you use MIT then you’ll open the door to a lot more collaboration and usage. For many people who have a livelihood depending on open source, this is the only approach. When your livelihood doesn’t depend on open source it’s easier to pick and choose licenses, but even then the decision can limit who will engage with you.
You’ve forgotten one more potential situation: you want other open source projects and people to be able to use it, but don’t care at all about corporate usage, or even want to discourage it.
In such situations, licenses like the unlicense, AGPL, Hippocratic license, etc can be useful.
I bucket that under the first point of share your work with others.
I’d be curious if the team thinks this gives a positive or negative experience to users.
One of the few unquestionably good experiences I've had working with a large Go project is that gofmt precludes all arguments about style. rustfmt has knobs for days, and I wonder how much turning they get.
The bigger difference between rustfmt and gofmt is not the knobs, but the fact that gofmt generally leaves newline placement to the programmer, while rustfmt completely disregards the existing layout of the code.
That’s one of the most frustrating experiences writing go code for me. I’m used to automatic line wrapping.
OTOH, to me rustfmt's "canonicalization" is a destructive force. There are many issues with it. One example: I strive for a semantically meaningful split of code into happy paths and error handling:
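A hypothetical Rust sketch of the kind of layout meant here (the names are invented): the happy path reads straight down, and each failure is fenced off in its own else block:

```rust
use std::collections::HashMap;

// Hypothetical config lookup: the happy path is the left margin,
// the error handling is visually set apart in the `else` arms.
fn port_from(cfg: &HashMap<String, String>) -> Result<u16, String> {
    let Some(raw) = cfg.get("port") else {
        return Err("missing `port` key".to_string());
    };
    let Ok(port) = raw.parse::<u16>() else {
        return Err(format!("invalid port: {raw}"));
    };
    Ok(port)
}

fn main() {
    let cfg = HashMap::from([("port".to_string(), "8080".to_string())]);
    assert_eq!(port_from(&cfg), Ok(8080));
    assert!(port_from(&HashMap::new()).is_err());
}
```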
To me this gives clarity which code is part of the core algorithm doing the work, and which part is the necessary noise handling failures.
But with rustfmt this is pointlessly blended together into spaghetti with a weirdo layout like this:
or turned into a vertical sprawl without any logical grouping:
rustfmt doesn’t understand the code, only tokens. So it destroys human input in the name of primitive heuristics devoid of common sense.
I want to emphasise that rustfmt has a major design flaw, and it's not merely "I don't like tabs/spaces" nonsense. The bikeshed discussions about formatting have burned people so badly that it has become taboo to speak about code formatting at all, and we can't even criticise formatting tools when they do a poor job.
As a user, I like that rustfmt has some knobs, it lets me adjust the coding style to use within my own project, so it’s internally consistent.
What matters to me, as a developer, is that the coding style within a project is consistent. I don't care about different projects using different styles; as long as each is internally consistent and there's a way to reformat to that style, it's all good. I care little about having One True Style. Rustfmt lets me have a consistent internal style without forcing the exact same style on everyone.
It’s worth saying that Rust is targeting C and C++ programmers more than Go is, and at least in my experience it’s those programmers who have the most idiosyncratic and strongest opinions of what constitutes “good style.” “My way or the highway” may not work well with many of them.
Go was originally intended as an alternative to C & C++, fwiw.
I always like to give a concrete example that’s happened to me and does not involve finance :)
National drug codes (NDCs) identify finished drug products in the USA.
50580-600-02, as an example, uniquely identifies a ten-tablet blister pack of paracetamol manufactured by Johnson and Johnson. Today. But because of the way the format is defined, there is a limited number of valid identifiers. As a result, the FDA will periodically expire old NDCs and reassign them. So five years ago, that NDC might've referred to another drug entirely. Or the record you stored yesterday might've pointed to the wrong drug because an NDC reassignment happened that the pharmacy processed before you could.
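For illustration, here's a rough sketch of the usual 10-to-11-digit NDC normalization (the target is the standard 5-4-2 billing form; note that no amount of normalization fixes the reuse problem, since the code is only unique at a point in time):

```python
# Hypothetical normalizer. Hyphenated NDCs come in 4-4-2, 5-3-2, or
# 5-4-1 segment layouts; billing systems usually zero-pad each segment
# to the 11-digit 5-4-2 form. Assumes hyphenated input.
def normalize_ndc(ndc: str) -> str:
    labeler, product, package = ndc.split("-")
    return f"{labeler:0>5}-{product:0>4}-{package:0>2}"

print(normalize_ndc("50580-600-02"))  # -> 50580-0600-02
```

Right-aligned zero-padding happens to handle all three layouts, because only the short segment gets padded in each case. The hard part, as described above, is that even the normalized code is not a stable key without an effective date attached.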
My work does this at scale today!
In general, that's pretty much how it works. We have a long-lived root in a vault that signs multiple leaf certs. The current leaf is in an environment that has some automation to sign client certificates. I think we run an OCSP responder to handle revocations (not positive), but we terminate most of those connections at a single box, so it's not a huge deal either way.
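As a rough sketch of what such a multi-tier chain looks like with plain openssl (all names and lifetimes here are invented, and a real deployment would handle serials, revocation, and key storage far more carefully):

```shell
# Root CA (in the real setup this key lives in a vault).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Internal Root" \
  -keyout root.key -out root.crt -days 3650

# Intermediate/leaf CA signed by the root; it needs CA:true to issue certs.
printf 'basicConstraints=critical,CA:true\nkeyUsage=keyCertSign\n' > ca.ext
openssl req -newkey rsa:2048 -nodes -subj "/CN=Issuing Leaf" \
  -keyout int.key -out int.csr
openssl x509 -req -in int.csr -CA root.crt -CAkey root.key \
  -CAcreateserial -extfile ca.ext -out int.crt -days 365

# Client certificate signed by the intermediate (the automated part).
openssl req -newkey rsa:2048 -nodes -subj "/CN=client-1" \
  -keyout client.key -out client.csr
openssl x509 -req -in client.csr -CA int.crt -CAkey int.key \
  -CAcreateserial -out client.crt -days 90

# Validate the full chain back to the root.
openssl verify -CAfile root.crt -untrusted int.crt client.crt
```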
The biggest downside is teaching people on the other end how to use it. The tooling/dev experience around mTLS/client certificates is not great. We offer both flavors – mTLS & API keys – and we're finding API keys are by far the preferred option.
There are so many slippery slopes in this comment section, I don't know how I didn't slip myself and crack my skull on the floor.
"But if we outlaw nazism, where will it stop?" At nazism. That's where it stops. That's what happened in Germany and Brazil, and probably several other places I'm too lazy to research right now.
"It's a lynch mob, it's the trolls now, who's next?" No one. Certainly not you, I.T. person blander than mashed potatoes (this might be more a reflection of myself than of the community, but whatever). The free-speech absolutists get all worried anytime something like this happens, and yet nothing apocalyptic follows. Trump was banned from Twitter, but Bolsonaro and a thousand other lying politicians are still there.
So, maybe it’s a good thing that companies sometimes fold to public pressure, and maybe it doesn’t spell the end of free speech and democracy. In fact, maybe it’s not happening often enough.
As a leftist I find the slippery slope argument hilarious. “Oh no, you’re saying they’ll come for my speech next? I’ll be the victim of ‘mob violence?’” That’s been the state of affairs for centuries.
I think it’s also informative that the only people actually moved by the argument are believers in some naive rules based liberal world. Reactionaries and fascists gleefully rejoice in the slippery slope because they know when the chips are down they’ll be the ones pushing people down it.
❤️
To thine own self be true, Polonius. To thine own self be true.
A bit of humanity goes a long way, but sometimes that is hard to maintain focus on when we’re arguing about whether bomb threats are a valid form of protected free speech, and what responsibilities companies have for responding to their services being used for the same.