It would be interesting to see an Azure comparison. A lot more Azure customers are doing lift and shift from on-prem and so Azure has a lot more focus on IaaS (AWS is much better at attracting the ‘cloud native’ PaaS workloads). This makes it a bit easier to do direct comparisons (e.g. direct attached SSDs are a thing), though I’d still expect it to be expensive.
That said, there are two caveats:
Big customers at cloud providers rarely pay full price. You could probably get a discount that makes AWS a lot more competitive.
The entire point of the cloud is the ability to scale down. On-prem hardware scales up very easily: buy more and you now have more capacity. It doesn't scale down at all easily; you have to decommission the hardware and sell it (at a loss) or pay for recycling. If you are fully loading the machines and adding more as your workload gradually grows, a cloud deployment is likely to be a terrible idea. If you're buying those massive multi-core machines with 2 TiB of RAM and not using 80+% of them all of the time, then your actual cloud usage (and bill) might come out lower.
I am not sure how often you need to scale down by that much in a year after buying your own hardware.
The immediate 1/3 cut would be to move servers into storage, so that you avoid paying rent, power and connectivity. But usually you would just purchase fewer servers for the next iteration and scale down organically.
The problem is, you've already paid for the servers. That's a sunk cost and you're never getting it back. And you need to buy enough servers to handle your peak workloads.
AWS began because of Cyber Monday. Amazon had a couple of days a year when their load was an order of magnitude greater than their average, so they had to buy ten times as many machines as they needed the other 363 days of the year. They built their infrastructure so that they could scale up and down dynamically, which let them first rent out the unused capacity and then grow it until AWS's capacity (covering spikes for everyone else) was far more than Amazon needed for their shop.
If your workload completely uses all of your servers, then you won’t see benefits (cloud providers probably get better economies of scale, but their profit margins are pretty huge at the moment and so you don’t see any benefit from that). The value of the cloud is that you can handle peaks in demand but then scale back down to what you’re using a few hours, minutes, or even seconds after the peak ends.
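A back-of-the-envelope sketch of that peak-versus-average tradeoff; every number here is made up purely for illustration, nothing comes from the thread:

```typescript
// Entirely made-up numbers: on-prem you pay for peak capacity all year,
// while the cloud bills roughly for what you actually run.
const peakServers = 100;                 // needed ~2 days a year (assumed)
const baselineServers = 10;              // needed the other 363 days (assumed)
const onPremCostPerServerYear = 2_000;   // fully loaded yearly cost, made up
const cloudCostPerServerYear = 6_000;    // on-demand premium, made up

// On-prem: provision for the peak and pay for it all year.
const onPremTotal = peakServers * onPremCostPerServerYear;

// Cloud: pay for the baseline all year plus the extra peak capacity for ~2 days.
const cloudTotal =
  baselineServers * cloudCostPerServerYear +
  (peakServers - baselineServers) * cloudCostPerServerYear * (2 / 365);

console.log({ onPremTotal, cloudTotal }); // { onPremTotal: 200000, cloudTotal: ~62959 }
```

Run the peak load all year instead and the on-prem number wins, which is exactly the caveat above about machines that are fully loaded all the time.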
That's not what happened at all!
So, what happened?
By this logic, wouldn't it make more sense to cover your typical performance requirements with owned HW and only scale out to a public cloud when needed?
But I don't think this idea really holds much water. If the software is under active development, you are typically increasing performance demands. The demands also tend to oscillate: an under-optimized query here and there, and so on. So you actually end up hitting the limits of your own hardware from time to time even if it's not peak season.
I believe that exactly this hitting of limits, combined with the slightly longer lead time for hardware acquisition, acts as a negative feedback loop. Developers control performance, but not the rate at which computing resources are acquired, so they pay a little more attention to performance. They get exposed to iostat and so on.
with owned HW and only scale out to a public cloud when needed
This means a lot of work, extra latency, a likely redesign, managing parallel deployments to different environments, decisions about what to over-spec or burst, and… Sure, it's possible, but you need very specific requirements and a system built for it for that to make sense.
That's the strategy that a few very successful companies I'm aware of follow. They have their own hardware for normal demand and scale out to the cloud for peaks.
An extra caveat - they're comparing this in terms of "we shift exactly what we're doing right now". That is not what you'd do with a migration like that, though. For example, it's unlikely they actually use all that storage - could most of it go to cold S3 storage? Do they need to handle as much traffic if they can pregenerate data into S3 or cache aggressively in CloudFront? How much of their compute is there only for the peak traffic and could go away otherwise?
The “same thing in two very different environments” comparison is not great.
The entire post is the equivalent of trying to make a political case to a leopard for why it should not eat your face. No one who reads this post is controlling hiring at big tech cos, nor are they seeing the world in terms comparable to this post, and the people who are doing hiring at not-big-tech cos aren't doing the things in this post.
It's just fancy blog spam with a dusting of self-satisfied smugness.
Why are you sure no BigCo hiring managers will read this? I mean, it’s a random blog post so the odds of any given person reading it are low, but there are a lot of hiring managers out there…
RFC 1178 [1] echoes the blog post on the ephemerality of responsibilities:
Don't choose a name after a project unique to that machine.
A manufacturing project had named a machine "shop" since it was going to be used to control a number of machines on a shop floor. […] It is simply impossible to choose generic names that remain appropriate for very long.
While computers aren't quite analogous to people, their names are. Nobody expects to learn much about a person by their name. […] In reality, names are just arbitrary tags.
and has suggestions for choosing arbitrary but memorable names, by using themes:
use colors, such as "red" and "blue". Personality can be injected by choices such as "aqua" and "crimson".
[1] Choosing a name for your computer: https://www.rfc-editor.org/rfc/rfc1178
Nobody expects to learn much about a person by their name. […] In reality, names are just arbitrary tags.
Filing under "Falsehoods (some) programmers believe about names".
Names have significant meaning at many different levels.
Worth mentioning https://without.boats/blog/signing-commits-without-gpg/ and https://github.com/withoutboats/bpb
In short: PGP-signed commits, using a small Rust binary in place of gpg.
More of a passing thought than a nuanced response, but I find it disheartening that pieces like this completely fail to take into account, or even consider, the underlying power structures (basically politics, at various levels) that have massively influenced what "cloud" we have today, where it can realistically go tomorrow, and what the implications of any predicted routes might be (e.g. is this further centralising control? how is the balance of power being changed between the various cloud providers, device manufacturers, network owners, etc.?).
It’s not so much conscious political decisions as blind corporate seeking of efficiencies.
That may be, but I would hope that, even in the most boring way possible (i.e. who is the big fish in my pond), at least some corporate entities are aware of the wider political implications of where cloud computing might go and what it might do to them. Just chasing efficiency by itself is not a viable long-term strategy.
Yes, but you can also substitute “externalities” for “efficiencies” most of the time.
Oh wow… a website decided to use a single 1.3 MB JSON file (that compresses to 187 KB on the wire, and is only fetched on first use) rather than spin up an entire web service for a single-column lookup. God only knows what the internal processes around adding things to the public-facing side of a consumer website are at Barclays, but there are plenty of likely scenarios where what they've done is perfectly fine, avoids complexity and is a reasonable tradeoff.
Just to drive this nail in further, at no point does the article attempt to flesh out what a sensible web service would be, what it would entail in development and ongoing operational costs, or even consider who might be asking for this functionality and what tools/teams they might have available to them.
Blog posts like these demonstrate the complete detachment from "getting something useful done" that afflicts far too many "celeb" developers.
Yeah, using a single JSON blob for this is totally appropriate. It means you can host the site on S3 or whatever. The author says "why not use a regex", but that's a really fragile solution that assumes the numbers will follow a definite pattern. It sounds like they botched the client-side cleanup for the lookup, though. Oh well. There are plenty of other things to complain about in life.
I wrote a site that tracks some data, and I realized that in ten years we only had 4,000 entries, which came out to 700 KB of JSON (180 KB with gzip), so I just ship the whole dataset down to the client. It's a much better way to do it: no expensive DB queries, the JSON is always warm in the cache, subsequent data filtering on the client side is instant, etc.
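A minimal sketch of the "ship the whole dataset and look it up client-side" approach both comments describe; the file name, record shape and normalisation rule are illustrative assumptions, not anything from Barclays' site or the parent poster's code:

```typescript
// Hypothetical record shape; real datasets will differ.
type Branch = { sortCode: string; bankName: string };

let cache: Promise<Branch[]> | undefined;

// Fetch the JSON blob once; gzip on the wire is handled by the server/CDN.
function loadBranches(): Promise<Branch[]> {
  cache ??= fetch("/data/branches.json").then((r) => r.json());
  return cache;
}

// Exact lookup against the in-memory dataset: no web service round-trip,
// and no regex guessing at what a "valid" sort code looks like.
export async function findBranch(input: string): Promise<Branch | undefined> {
  const normalised = input.replace(/\D/g, ""); // "12-34-56" -> "123456"
  const branches = await loadBranches();
  return branches.find((b) => b.sortCode === normalised);
}
```

After the first fetch, every subsequent lookup or filter is an in-memory operation, which is the tradeoff both comments are defending.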
I get phone calls from banks and other financial services people periodically that start by asking me to prove who I am. I always reply by saying you called me, so please prove that you are from Barclays before I say anything else. I was pleasantly surprised by my most recent call from Barclays: They are the first company to call me and actually have a procedure for doing this. The people that are authorised to cold-call me now have access to a thing that can send me a message via the mobile banking app, so they could send me something saying ‘On the phone with {person name}’ to confirm that this person actually was supposed to be talking to me. It’s not completely secure. Anyone who can compromise the app can now impersonate Barclays, but in general someone pretending to be Barclays on the phone can do far less damage than someone who can compromise the app and is more likely to be detected, so it’s probably fine.
Agree. To me this looks like something that had to be put in place hurriedly to counter a recent spate of cold calls from people pretending to be from Barclays. Knowing a little bit about how fast banks move (for both fiduciary and cultural reasons) this setup looks typical.
In the exact words of Mitchell (Vault cofounder) over on the orange hellscape: https://news.ycombinator.com/item?id=23032499
I'm one of the creators of Vault. I read this back when it was posted and I'd be happy to share my thoughts. I'll note it's worth reading through to the last paragraph and into the comments; the title is a bit bait-y and the article does a better job than the title gives it credit for.
Broadly speaking, if you're looking at Vault to solve a specific problem X for a specific consumption type Y on a specific platform Z, then it probably is overkill (I wouldn't say "overhyped" :)), i.e. "encrypted key/value via env vars on AWS Lambda". "X via Y on Z." The power of Vault is: multiple use cases, multiple consumption modes, and multiple platforms supported with a single consistent way to do access control, audit logging, operations, etc.
I can't stress that "single consistent way to do access control, audit logging, operations, etc." enough. Having multiple security use cases dangling off that consistency becomes really important as soon as you hit N=2 or N=3 security use cases.
If you need say… encrypted KV and encryption-as-a-service and dynamic just-in-time credentials and a PKI (certificate) system, and you need this as files and as env vars, and you need this on Kubernetes and maybe also on EC2, then Vault is – in my totally biased opinion – going to blow any other option out of the water.
That's a somewhat complex use case, but it's something Vault excels at. For simpler use cases, Vault is making more and more sense as we continue to make Vault easier to use. For example, we now provide a Helm chart and official K8S integration so you can run Vault on K8S very easily. And in this mode, developers don't even need to know Vault is there, because their secrets show up as env vars and files just like normal K8S secrets would.
Also, this article is from June 2019 and in 10 short months we've made a ton of progress on simplifying Vault so it gets closer to that "X via Y on Z" use case. Here are some highlights I can think of off the top of my head, but there are definitely more; this is just from memory:
We have integrated storage as an option now, so you don't need separate storage mechanisms.
Our learn guides went from basically zero to lots of content, which makes it much easier to learn how to use Vault: https://learn.hashicorp.com/vault
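To make the "X via Y on Z" framing concrete, here is a minimal sketch of reading one secret over Vault's HTTP API, assuming a KV version 2 engine mounted at secret/; the address, token and secret path are placeholders, not anything from the quoted comment:

```typescript
// Minimal sketch: read a KV v2 secret from Vault over its HTTP API.
// VAULT_ADDR, VAULT_TOKEN and the secret path below are assumptions.
const VAULT_ADDR = process.env.VAULT_ADDR ?? "http://127.0.0.1:8200";
const VAULT_TOKEN = process.env.VAULT_TOKEN ?? "";

async function readSecret(path: string): Promise<Record<string, string>> {
  const res = await fetch(`${VAULT_ADDR}/v1/secret/data/${path}`, {
    headers: { "X-Vault-Token": VAULT_TOKEN },
  });
  if (!res.ok) throw new Error(`Vault returned ${res.status}`);
  const body = await res.json();
  // The KV v2 engine nests the actual key/value pairs under data.data.
  return body.data.data;
}

// Usage: const config = await readSecret("myapp/config");
```

In the Kubernetes mode the comment mentions, application code wouldn't even do this much: the secrets are injected as files or env vars and Vault stays invisible to the developer.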
This list seems to be based on a super Frankenstein’d, incompletely applied threat model.
There is a very real privacy concern in giving Google access to every detail of your life. Addressing that threat does not necessitate making choices based on whether the global intelligence community can gain access to your data — and, applied less than skillfully, that probably makes your overall security posture worse.
I agree that mentioning the 5/9/14/however-many Eyes is unnecessary, and also not helpful. It's not like storing your data on a server in a non-participating country somehow makes you more secure. All of that data still ends up traveling through the same routers on its way to you.
If you’re going to put a whole lot of effort into switching away from Google, you might as well do it properly and move to actually secure services.
In a long list of ways, Google is the most secure service. For some things (e.g. privacy) they're not ideal, but moving to other services almost certainly involves security compromises (to gain something you lose something).
Again, it all goes back to what your threat model is.
Google is only the most secure service if you are fully on board with their business model. Their business model is privacy-violating at the most fundamental level, in terms of behavioral surplus futures. Whatever your specific threat model, it then becomes subject to the opacity of Google's auction process.
which happily hands over data to the NSA when asked.
Emphasis mine.
As someone who doesn't like Google anymore, I still think this is plain wrong, and I'll give reasons why:
Google is known to have put serious effort into countermeasures against wiretaps.
Google is known to be challenging the NSA and others where possible,
and for the best reason that exists in a capitalist society: it is bad for their business if people think they happily hand over data to the NSA.
(and FWIW I guess a number of Googlers took offense to the smiley in the leaked NSA slides)
Also, for most people running their own services isn't more secure, and can in many cases be even less secure, even against the NSA. I'll explain that as well:
Things you get for free with Google and other big cloud providers:
physical security
patching
monitoring
legal (yep, for the selfish business reasons mentioned above they actually challenge requests for data)
“Security” is not an absolute value; it is meaningless without a threat model.
You have demonstrated that you are well out of your league here. Quiet down, listen and learn.
Wow, that seems an incredibly uncalled for level of incivility, even for lobsters.
Yeah, that was definitely going off the deep end.
There’s an appropriate level of criticism here, and this ain’t it.
/u/friendly - I apologise unreservedly for that comment.
Thankfully this attitude is not common here.
If I protect my house by getting the biggest, strongest door out there, but the burglars turn up with a brick they throw through my window, then my "security" was useless, as my threat model was way off. The concept of threat modelling is most certainly not a "meme".
Lots of people get hacked when they self-host, because it requires quite some knowledge not everyone has, and even if you do, it's easy to make mistakes. Just self-hosting does not make anything automatically secure, and it also won't protect you from "the NSA": you'll still be obliged to follow laws etc. Besides, the distributed nature of email/SMTP makes it hard to protect against this anyway: chances are most of your emails will still be routed through a US server.
All services “read my emails” to some degree as that’s pretty much a requirement for processing them. This doesn’t necessarily say anything about security or privacy.
It’s not like your comment was especially detailed or overflowing with nuance. Short abrupt one-line comments with blanket statements tend to elicit the same kind of replies.
yeah, but what does “more secure” mean? When people say threat model, they are just talking about what “more secure” means in a certain context. It’s not exactly infosec dogma…. There is no singular axis of more/less secure
"Secure" is a vague term in this context. Giving Google and their partners access to your e-mails is not a security issue; I would expect that all to be written down in their ToS and similar documents. It is bad for your privacy and anonymity, definitely.
But I suspect Google would be better prepared for a third party attempting to hack their servers and forcefully obtain your e-mails than you or any other single individual are. I think that's also what @ec and others are referring to. Moving away from Google is definitely a good decision to get back (some of) your privacy. Security-wise, it really depends on where you are moving to.
Google hands over its users' data to the American government, and through the Five Eyes agreement and similar agreements to many of the governments of the western world. That is not a 'privacy' issue; it's a security issue.
Running my own email is not more secure against data loss (unless you also have multi-point off-site backups, encrypted, with the crypto keys stored securely).
It’s also not more secure against email delivery failures causing you to lose business (a much bigger issue for me than google reading them).
Neither is it more secure against your abusive spouse accessing your emails (or destroying the hardware).
Finally, anytime you communicate with a Gmail user, Google is reading your emails anyway - so to improve your security you also need your mail client to check whether the recipient's MX records resolve to a Google-controlled IP range.
That’s what “irrelevant without a threat model” means.
I’ve nearly finished de-googling everything in my life. Doing it in a way that preserves the security properties I care about is very hard work.
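A rough sketch of the MX check described above, using Node's resolver. Matching MX hostnames against google.com/googlemail.com is an assumption, and cruder than checking actual Google-controlled IP ranges:

```typescript
import { resolveMx } from "node:dns/promises";

// Heuristic: does this address's domain route its mail through Google?
// (Checks MX hostnames rather than resolving them to IP ranges.)
async function mailGoesToGoogle(address: string): Promise<boolean> {
  const domain = address.split("@").pop() ?? "";
  const records = await resolveMx(domain); // [{ exchange, priority }, ...]
  return records.some(({ exchange }) =>
    /(^|\.)(google|googlemail)\.com\.?$/i.test(exchange)
  );
}

// Usage: await mailGoesToGoogle("someone@example.org");
```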
It's also not more secure against email delivery failures causing you to lose business
Eh, I ran my own email and Gmail side by side for years. I lost more legitimate emails to Google's spam-filter false positives than to my server being down.
Remember that email was designed to be resilient against delivery failures, back in the days of intermittent dial-up connections. If a server is down, the sending side just queues the message and tries again later. If it still doesn't work, it notifies the sender that the message failed. Not everyone will try to contact you another way when that happens… but surely more will than when their message just disappears silently into a spam filter.
I’ve been on Fastmail for years now. I regularly check my spam folder; in almost four years I have had one false positive.
When I briefly tried running my own, google randomly stopped accepting my mail after a little while (hence briefly).
I’m glad you have had a good experience with it; I haven’t found it as good a use of my recreational sysadmin time as other things (plex, vscode-over-http, youtube-dl automation, repo hosting etc).
Not sure what a misunderstanding of Marie Kondo has to do with anything, but sure, go off on one about charlatans… This just feels like reactionary bullshit (rather than anything revolutionary), with a total failure to understand what has changed about the world outside the computer and what people's relationship with computers now is.
What, in your opinion, has changed about people’s relationship to computers since 1990 that justifies the complexity of, say, web browsers?
The fact that on the web you can buy almost any product you want, transfer your money around, plan and book your next vacation, and (in some countries) register a car, your children and even yourself. In other words, the "world wide web" has become far more integral, and its requirements have grown from those of a markup protocol to those of one of the main communication media of our global society.
I’m not sure if you would say it “justifies” it, but I do think that it explains it.
Absolutely. We’re using the web for a wide variety of things that it’s not meant to do, and this required sticking a bunch of extra crap on top of it, and the people who stuck the extra crap on top weren’t any more careful about managing technical debt than the people who designed the thing in the first place.
There’s a need for secure computerized mechanisms for commerce, exchange, and identification management. Using web technologies to produce stopgap solutions has lowered the perceived urgency of supplying reliable solutions in that domain. It’s put us back twenty-five years, because (at great cost) we’ve made a lot of stuff that’s just barely good enough that it’s hard to justify throwing it all out, even when the stuff we’ve built inevitably breaks down because it’s formed around a set of ugly hacks that can’t be abstracted away.
But that's the thing – the web you talk of is little more than ruins that people like me or you visit and try to imitate. It has become irrelevant that the web "wasn't meant" for most of the things it is being used for today; it has been revamped and pushed towards a direction where it retroactively has been made to seem as though it had been made for them, its history rewritten.
It’s not clean, but with the prospect of a browser engine monopoly by groups like Google, and their interests, this might get improved on for better or worse.
The point still stands that I have made elsewhere: the web has been the most fertile space for the real needs of computer users, both "consumers" as well as marketing and advertisement firms – one can't just forget the last two, and their crucial role in building up the modern web into something more or less workable.
If we had wanted the web to be "pretty" and "clean", I think it would have had to grow a lot more organically, more slowly and under more control. Instead of waiting for AOL, Yahoo and Gmail to offer free email, it should have been locally organised; instead of having investors thrown into a space chasing the quickest win, technologies and protocols should have had time to ripen; instead of millions of people buying magical computer boxes that get better with every update, they should have had to learn what it is they are using – having said that, I don't believe this would have been in any way practical. Beauty and elegance don't seem to scale to this level.
The lack of cleanliness isn’t merely a headache for pedants & an annoyance for practitioners: it makes most tasks substantially more difficult (like 100x or 1000x developer effort), especially maintenance-related tasks, and because the foundations we’re building upon are shoddily designed with unnecessary abstraction leakages & inappropriate fixations, certain things we need to do simply can’t be done reliably. We’re plugging the holes in the boat with newspaper, and that’s the best we can do.
Scaling is hard, and I don’t expect a large system to have zero ugly spots. But, complication tends to compound upon itself. For the first 15-20 years of the web, extremely little effort was put into trying to make new features or idioms play nice with the kinds of things that would come along later – in other words, everything was treated like a hack. Tiny differences in design make for huge differences in complexity after a couple iterations, and everything about the web was made with the focus on immediate gains, so as a result, subsequent gains come slower and slower. Had the foundational technologies been made with care, even with the kind of wild live-fast-die-young-commit-before-testing dev habits that the web generation is prone to, the collapse we are currently seeing could have been postponed by 20 or 30 years (or even avoided entirely, because without the canonization of postel’s law and “rough consensus and running code” we could have avoided ossifying ugly hacks as standards).
It confuses me that you need to switch registries to make this work. Will GitHub proxy all packages from npm? Otherwise, how would you use some from npmjs.com and others from GH in your projects?
Using scoped packages, each "scope" can have its own URL/registry: https://docs.npmjs.com/misc/scope#associating-a-scope-with-a-registry
Wooooo, neo-nazis on the blockchain
If you really want it to be that you can pretend. But I don’t know why you would feel better doing so.
I mean, his entire schtick is "Neo-Reaction", he's defended owning slaves, and the list absolutely goes on from there. I'm not sure why that's controversial. If you want to sift through his oeuvre for more tidbits on what he believes, by all means do, but denying that he believes in all different kinds of hierarchy, and especially racial hierarchy, is going to be a problem when you're done.
If you can give a pithy explanation of what urbit is really about, you’ll be the first in my experience.
There are some cool concepts but it seems like they are melded together to create a solution to some problem which is never clearly stated, unless the problem is “there should be a digital asset akin to land in that it is impossible to create more of it,” which isn’t a problem most people, even most people posting on various programming-oriented messageboards, would find compelling.
“a digital asset akin to land in that it is impossible to create more of it” is actually a really interesting solution to the problem of making digital identities expensive enough to discourage spam, and also to the problem of funding the development of a social network before the social network has gotten large enough to be obviously valuable. Certainly not the only such solution, but a solution. I’d actually like to see other projects that have nothing to do with Urbit try out their own spins on the idea of cryptographically-verified digital land ownership.
This post almost makes zero sense… apart from "buy my product" it's pretty much a series of unconnected dots and buzzwords masquerading as a cohesive argument for… something?!
tens of millions of requests every month ~ 4 requests per second…
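For the arithmetic behind that shrug, taking "tens of millions" to mean roughly ten million a month (an assumption):

```typescript
// "Tens of millions of requests every month", expressed per second.
const requestsPerMonth = 10_000_000;        // assumed lower bound
const secondsPerMonth = 30 * 24 * 60 * 60;  // ~2.59 million
console.log(requestsPerMonth / secondsPerMonth); // ~3.9 req/s
```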
Or just … don’t register them in the first place because they’re emblematic of everything that’s wrong with western colonialism: https://gigaom.com/2014/06/30/the-dark-side-of-io-how-the-u-k-is-making-web-domain-profits-from-a-shady-cold-war-land-deal/
So what's the problem here, that somehow the internet top-level domain .io is some god-given right of some people who were born on an island? That if you were standing around on a piece of land then that land is yours from today till the end of days, and that also extends to a particular two-character string that somebody else, who invented and implemented a network, had decided would be assigned some level of relevance to said piece of land? Is it also an injustice if I somehow implemented my own domain resolution system and don't give up 'io' to these islanders? Do you have a list of strings that I need to give to people so I don't oppress them?
The amount of desperation that people have to fall for the 'poor-innocent-uncivilized-islanders-slash-villagers-being-oppressed-by-evil-white-people' narrative continues to boggle the mind.
how fucking stupid or wilfully racist do you have to be to not see that ‘poor-innocent-uncivilized-islanders-slash-villagers-being-oppressed-by-evil-white-people’ is quite literally the exact thing that happened here?
Colonialism, you say?
The rights for selling .io domains are held by a British company called Internet Computer Bureau (ICB) [..] The .io domains each cost £60 ($102) before taxes, or twice that if you're outside the EU.
The British government granted these rights to ICB chief Paul Kane back in the 1990s.
Don’t you think something like “everything that’s wrong with governments deciding who gets to profit from what” would have been more accurate?
All domain names are sold through some kind of state-enforced monopoly or cartel, which is also what enables the scourge that is domain squatting.
What’s colonialism got to do with this, besides that the islands were colonized by a .. well, there’s that word again.
It’s really 2 companies in terms of analytics…
Optimizely is A/B testing as well.
It is a shame that this paper seems not to address the privacy implications of what information could be deduced by the external service providers (and their apps) through the use of 'procrastinator', and/or what private information 'procrastinator' would need in order to function as intended.
I'm not really sure what this article is trying to say or do other than just be a marketing puff piece?
The title is funny, though. I give it the win on mocking the “serverless” term.
Here’s a script to help with the backwards incompatible configuration changes: https://github.com/fgsch/varnish3to4
Note to self: never ever disrespect your open source projects' users like that. Either provide a stable public API or the tools to automate the migrations.
Isn’t that what major versions are for?
In my ideal world major versions are for major features, not major breakage.
And stay stuck with mistakes forever? Is the only solution a fork? Majors aren’t meant to be backwards-compatible.
Bug fixing and API stability are orthogonal concepts, aren’t they? Unless you mean mistakes in the API itself in which case I’ll remind you that no one really suffers from the typo in “HTTP referer”. And for more serious API design mistakes the proper punishment would be to keep maintaining the old version forever in addition to the new stuff you just added.
Well, semver would disagree with you about never breaking backwards compat on an API:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
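A small sketch of the rule being quoted, deciding which component to bump for a given kind of change (the change-type labels are illustrative assumptions):

```typescript
type Change = "breaking" | "feature" | "fix";

// Bump a MAJOR.MINOR.PATCH version according to the semver rules quoted above.
function bump(version: string, change: Change): string {
  const [major, minor, patch] = version.split(".").map(Number);
  switch (change) {
    case "breaking": return `${major + 1}.0.0`;               // incompatible API change
    case "feature":  return `${major}.${minor + 1}.0`;        // backwards-compatible addition
    case "fix":      return `${major}.${minor}.${patch + 1}`; // backwards-compatible bug fix
  }
}

// bump("1.4.2", "feature") -> "1.5.0"; bump("1.4.2", "breaking") -> "2.0.0"
```

Under these rules breaking changes are allowed; they just have to show up as a MAJOR bump, which is the disagreement in this sub-thread.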
http://man7.org/tlpi/api_changes/#Linux-3.0
That was the exception and not the rule. Torvalds just thought there were enough features to warrant a new release and 2.6.40 seemed to be pushing it. http://thread.gmane.org/gmane.linux.kernel/1147415
Well, in my ideal world (which someone apparently thought was "-1 incorrect" :-) ), major versions for major features is the rule, not the exception. The Linux kernel is a good example of (public) API stability and we should learn from it.
Hi, I'm James Butler, an engineer who occasionally writes appalling JavaScript and does operations stuff when no one more grown-up is looking.