I really feel there are still a lot of open questions and practices that need to evolve around release management and deployment in general. I haven’t yet found anybody who tells me their deployment workflow is rock solid, simple enough for newcomers to plug into, and easy to manage. I think there’s still a long road ahead in this area.
Really really cool. I also like the linked 4K picture suitable for use as background image. Awesome stuff.
I’m not sure I’m sold on this. I get why people building infrastructure software like redis might want this. Yes, it helps them keep the “Foo as a Service” market as a captive income stream without competition from AWS, et al. At the same time it seems like for any service of much worth, it’s going to get cloned by the big providers anyway, and then you have a proliferation of similar but incompatible closed-source versions. I’m not convinced that is necessarily good for the community at large.
I think it’s just protection against a Redis-as-a-Service offering launched with plain Redis and a few bits here and there to make the offering work. Big players can obviously clone it and build their own, but at least most small-to-mid-size players are eliminated. (From what I understand.)
You can still start Redis as a Service companies. I was shocked at first because I thought this concerned Redis and their aim was to kill all of the Redis as a Service providers which already exist. But it turns out Redis Core is unaffected by this, only some modules are.
I don’t really know what they intend to achieve with this, except having people avoid using their modules…
Which doesn’t seem worthwhile, as the big players are the ones most likely to be able to market and monetise a service based on core Redis plus their own proprietary add-ons. It’s pretty difficult to compete with AWS on any front at this stage, given their massive resources and the “nobody ever gets fired for buying X” safety of big brands.
Boxing out only the small players doesn’t really feel like it’s going to preserve a whole bunch of market or mindshare for the Redis company.
I’m not much into business, so I can’t evaluate whether this move is worth it; I would just assume they’re going for the long tail, which could be a sufficient number of clients to generate decent revenue and keep the Redis company working.
In reality I don’t get the feeling that a “long tail” actually exists for a lot of these types of services. I base this on the Firebase/Parse era, when there were loads of “backend as a service” companies around that have all withered away (as I understand it), with only Google/Firebase remaining. I was personally surprised by this.
I don’t understand why this matters. Both Windows and Mac versions can still be downloaded from the docker website without logging in:
I found those by googling “docker for $OS”. The Mac page was the top result and the windows was the third.
I searched docker for windows and it took me to this page. Which asks for a login to download. I think the big deal is how dishonest the reply from the docker team is.
“we’ve made this change to make sure we can improve the Docker for Mac and Windows experience for users moving forward.”
This is such obvious marketing BS, and it’s insulting that they think the average developer doesn’t realize this is so they can send more marketing emails, not “improve experiences”.
In their defense, it takes money to improve the experience, and marketing yields money. So indirectly, marketing allows them to improve the experience. I entirely agree that they should just come out and say that, however.
I love this reasoning! I wonder where else they could improve.
I think funneling more docker users into Enterprise plans would be big $$$, maybe they could cap the number of images downloaded from the store for free, and then sell licences for more downloads.
Wikimedia Foundation, the non-profit organization behind Wikipedia (Alexa top 5) as well as sister projects such as Wiktionary and Wikiquote, is hiring Site Reliability Engineers, Application Security Engineers, and more. All positions are in San Francisco or remote.
We (Amazon Web Services Elastic File System) are!
https://www.amazon.jobs/en/jobs/703035/software-development-engineer-ii
Don’t believe the hype. Working for Amazon has been a literal life changer for me. Nothing is ever perfect, and this place is no exception, but there’s plenty of awesome around here and we work at a scale that few can match. The job is full of challenges and it’s a VERY different day to day experience from any company I’ve ever worked, but I love it.
Most of our work is Java or C/C++ and a bunch of Python on the infrastructure side.
Feel free to list me (cpatti at amazon dot com) as a referral if you apply, and let me know so I can connect the dots internally :)
Hey @feoh! I’ve tried several times to apply for an SRE role where I live but NEVER got any answer back. My profile is probably still a bit junior (4 years of experience), but I’m looking for great environments and teams to learn from. Would you have any idea what kind of profile matches this sort of job at Amazon?
It depends very much on the level of the job in question.
Also I don’t exactly know what “SRE” maps to in Amazon-ese :) My job title is “System Development Engineer” and that’s a good guess, but I’m not sure.
If it’s a SysDE role, things we look for generally are:
Solid coding ability: You need to be able to implement simple algorithms and solve common systems problems in code. In practice this means you should know an actual programming language, not just bash, and be able to demonstrate that with a simple collaborative coding task.
System design at scale
A functional understanding of networking
And then there are the less technical areas like our Leadership Principles. Definitely do some thinking on those and how each might apply to various situations in your career.
As to finding a way in - network! Amazon has a sizable presence on LinkedIn. Reach out and politely ask questions of people, and don’t be afraid to be persistent. People are busy and may not get to you right away. Just be respectful of the fact that you’re asking for a leg up and you’ll be surprised at the response you might get.
Good luck!
[Note - I’m not speaking for my employer, just giving you my impressions of what we tend to look for in this one particular area.]
Thank you so much for this comprehensive answer! That’s super helpful and I’ll definitely give it a try!
Every once in a while I get poked at by an Amazon recruiter on LinkedIn. Usually, I say it sounds awesome but I’m not willing to relocate, and I never hear back. :P
I hear you. It was like that here for a long time too, and then around 5-6 years ago our director pitched a Boston office to the Seattle management chain and it worked. Now we’re booming.
It’s kind of frustrating how cavalier some recruiters are about relocation. My answer usually shuts them up: “My wife is a VP at a bank, makes more than me, and has held the same job for 15 years. There is NO way we’re gonna give that up.”
Recruiters seem to believe, and in the aggregate they’re correct if only because it’s a self-fulfilling prophecy, that anyone who would answer their unsolicited emails can’t afford to be picky.
I’ve wondered about that. Like, as in, what is their ACTUAL success rate? I get the impression that tech recruiting is one of those fields like real estate: there WAS mad money to be made for a while, so a lot of people got into it. But these days, with the web and with much better professional networking all around, I wonder if that’s still true.
It’s hard to tell. I expect that some of the larger “hiring” websites have some data on it for their own purposes, but for the rest of us, I don’t see any way to find out.
That is a very reductionist view of what people use the web for. And I am saying this as someone whose personal site pretty much matches everything prescribed except comments (which I still have).
Btw, Medium, given as a positive example, is not in any way minimal and certainly not by metrics given in this article.
Chickenshit minimalism: https://medium.com/@mceglowski/chickenshit-minimalism-846fc1412524
I wouldn’t say Medium even gives the illusion of simplicity (for example, on the page you linked, try counting the visual elements that aren’t the blog post). Medium seems to take a rather contrary approach to blogs, including all the random cruft you never even imagined existed while leaving out simple essentials like RSS feeds. I honestly have no idea how the author of the article came to suggest Medium as an example of minimalism.
I agree with your overall point, but Medium does provide RSS feeds. They are linked in the <head> and always have the same URL structure. Any medium.com/@user has an RSS feed at medium.com/feed/@user. For Medium blogs hosted at custom URLs, the feed is available at /feed.
I’m not affiliated with Medium. I have a lot of experience bugging webmasters of minimal websites to add feeds: https://github.com/issues?q=is:issue+author:tfausak+feed.
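The feed URL pattern described above is simple enough to sketch as a tiny helper (the function name is made up; the URL pattern is as stated in the parent comment):

```python
def medium_feed_url(profile_url: str) -> str:
    """Map a Medium profile or custom-domain blog URL to its RSS feed URL.

    medium.com/@user     -> medium.com/feed/@user
    custom-domain blog   -> https://blog.example.com/feed
    """
    if "medium.com/@" in profile_url:
        return profile_url.replace("medium.com/@", "medium.com/feed/@")
    return profile_url.rstrip("/") + "/feed"


print(medium_feed_url("https://medium.com/@user"))   # https://medium.com/feed/@user
print(medium_feed_url("https://blog.example.com"))   # https://blog.example.com/feed
```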
That is a very reductionist view of what people use the web for.
I wonder what Youtube, Google docs, Slack, and stuff would be in a minimal web.
YouTube, while not as good as it could be, is pretty minimalist if you disable all the advertising.
I find google apps to be amazingly minimal, especially compared to Microsoft Office and LibreOffice.
Minimalist Slack has been around for decades, it’s called IRC.
It is still super slow even then! At some point I was able to disable JS, install the Firefox “html5-video-everywhere” extension, and watch videos that way. That was awesomely fast and minimal. Tried it again a few days ago, but it didn’t seem to work anymore.
Edit: now I just run “youtube-dl -f43 ” on the URL directly without going to YouTube and start watching immediately with VLC.
The YouTube interface might look minimalist, but under the hood it is anything but. Besides, I shouldn’t have to go to great lengths to disable all the useless stuff on it. It shouldn’t be the consumer’s job to strip away all the crap.
In a minimal web, locally-running applications in browser sandboxes would be locally-running applications in non-browser sandboxes. There’s no particular reason any of these applications is in a browser at all, other than myopia.
Distribution is dead easy for websites. In theory, you could have non-browser-sandboxed apps with equally easy distribution, but then what’s the point?
Non-web-based locally-running client applications are also usually made downloadable via HTTP these days.
The point is that when an application is made with the appropriate tools for the job it’s doing, there’s less of a cognitive load on developers and less of a resource load on users. When you use a UI toolkit instead of creating a self-modifying rich text document, you have a lighter-weight, more reliable, more maintainable application.
The power of “here’s a URL, you now have an app running without going through installation or whatnot” cannot be overstated. I can give someone a copy of pseudo-Excel to edit a document we’re working on together, all through the magic of Google Sheets’ share links. Instantly.
Granted, this is less of an advantage if you’re using something all the time, but without the web it would be harder to allow for multiple tools to co-exist in the same space. And am I supposed to have people download the Doodle application just to figure out when our group of 15 can go bowling?
They are, in fact, downloading an application and running it locally.
That application can still be javascript; I just don’t see the point in making it perform DOM manipulation.
As one who knows JavaScript pretty well, I don’t see the point of writing it in JavaScript, however.
A lot of newer devs have a (probably unfounded) fear of picking up a new language, and a lot of those devs have only been trained in a handful (including JS). Even so, moving away from JS isn’t actually a big deal: JS (as distinct from the browser ecosystem, to which it isn’t really tied) is not fundamentally that much worse than any other scripting language. You can do whatever you do in JS in Python or Lua or Perl or Ruby and it’ll come out looking almost the same, unless you go out of your way to use particular facilities.
The thing that makes JS code look weird is all the markup manipulation, which looks strange in any language.
JS (as distinct from the browser ecosystem, to which it isn’t really totally tied) is not fundamentally that much worse than any other scripting language
(a == b) !== (a === b)
but only some times…
Javascript has gotchas, just like any other organic scripting languages. It’s less consistent than python and lua but probably has fewer of these than perl or php.
(And, just take a look at c++ if you want a faceful of gotchas & inconsistencies!)
Not to say that, from a language design perspective, we shouldn’t prize consistency. Just to say that javascript is well within the normal range of goofiness for popular languages, and probably above average if you weigh by popularity and include C, C++, FORTRAN, and COBOL (all of which see a lot of underreported development).
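For what it’s worth, the “every popular language has gotchas” point is easy to illustrate outside JS too; Python’s classic mutable-default-argument surprise, for example:

```python
def append_item(item, bucket=[]):
    # Gotcha: the default list is created once at definition time and
    # shared across calls, not rebuilt on every invocation.
    bucket.append(item)
    return bucket


print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- surprise: not [2]
```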
Web applications are expected to load progressively. And because they are sandboxed, they are allowed to start instantly without asking you for permissions.
The same could be true of sandboxed desktop applications that you could stream from a website straight into some sort of sandboxed local VM that isn’t the web. Click a link, and the application immediately starts running on your desktop.
I can’t argue with using the right tool for the job. People use Electron because there isn’t a flexible, good-looking, easy-to-use cross-platform UI kit. Damn the 500 MB of RAM usage for a chat app.
There are several good-looking flexible easy to use cross-platform UI kits. GTK, WX, and QT come to mind.
If you remove the ‘good-looking’ constraint, then you also get TK, which is substantially easier to use for certain problem sets, substantially smaller, and substantially more cross-platform (in that it will run on fringe or legacy platforms that are no longer or were never supported by GTK or QT).
All of these have well-maintained bindings to all popular scripting languages.
QT apps can look reasonably good. I think webapps can look better, but I haven’t done extensive QT customization.
The bigger issue is 1) hiring - easier to get JS devs than QT devs 2) there’s little financial incentive to reduce memory usage. Using other people’s RAM is “free” for a company, so they do it. If their customers are in US/EU/Japan, they can expect reasonably new machines so they don’t see it as an issue. They aren’t chasing the market in Nigeria, however large in population.
Webapps are sort of the equivalent of doing something in QT but using nothing but the canvas widget (except a little more awkward because you also don’t have pixel positioning). Whatever can be done in a webapp can be done in a UI toolkit, but the most extreme experimental stuff involves not using actual widgets (just like doing it as a webapp would).
Using QT doesn’t prevent you from writing in javascript. Just use NPM QT bindings. It means not using the DOM, but that’s a net win: it is faster to learn how to do something with a UI toolkit than to figure out how to do it through DOM manipulation, unless the thing that you’re doing is (at a fundamental level) literally displaying HTML.
I don’t think memory use is really going to be the main factor in convincing corporations to leave Electron. It’s not something that’s limited to the third world: most people in the first world (even folks who are in the top half of income) don’t have computers that can run Electron apps very well – but for a lot of folks, there’s the sense that computers just run slow & there’s nothing that can be done about it.
Instead, I think the main thing that’ll drive corporations toward more sustainable solutions is maintenance costs. It’s one thing to hire cheap web developers & have them build something, but over time keeping a hairball running is simply more difficult than keeping something that’s more modular running – particularly as the behavior of browsers with respect to the corner cases that web apps depend upon to continue acting like apps is prone to sudden (and difficult to model) change. Building on the back of HTML rendering means a red queen’s race against 3 major browsers, all of whom are changing their behaviors ahead of standards bodies; on the other hand, building on a UI library means you can specify a particular version as a dependency & also expect reasonable backwards-compatibility and gradual deprecation.
(But, I don’t actually have a lot of confidence that corporations will be convinced to do the thing that, in the long run, will save them money. They need to be seen to have saved money in the much shorter term, & saying that you need to rearchitect something so that it costs less in maintenance over the course of the next six years isn’t very convincing to non-technical folks – or to technical folks who haven’t had the experience of trying to change the behavior of a hairball written and designed by somebody who left the company years ago.)
I understand that these tools are maintained in a certain sense. But from an outsider’s perspective, they are absolutely not appealing compared to what you see in their competitors.
I want to be extremely nice, because I think the work done by these teams and projects is very laudable. But compare the wxPython docs with the Bootstrap documentation. I also spent a lot of time trying to figure out how to use Tk, and almost all the resources felt outdated and incompatible with whatever toolset I had available.
I think Qt is really good at this stuff, though you do have to marry its toolset for a lot of it (perhaps this has gotten better).
The elephant in the room is that no native UI toolset (save maybe Apple’s stack?) is anywhere near as good as the diversity of options and breadth of tooling available in DOM-based solutions. Chrome dev tools is amazing, and even simple stuff like CSS animations gives a lot of options that would be a pain in most UI toolkits. Out of the box it has so much functionality, even if you’re working purely vanilla/“no library”. Though on this point things might have changed; jQuery is basically the optimal low-level UI library and I haven’t encountered native stuff that gives me the same sort of productivity.
I dunno. How much of that is just familiarity? I find the bootstrap documentation so incomprehensible that I roll my own DOM manipulations rather than using it.
TK is easy to use, but the documentation is tcl-centric and pretty unclear. QT is a bad example because it’s quite heavyweight and slow (and you generally have to use QT’s versions of built-in types and do all sorts of similar stuff). I’m not trying to claim that existing cross-platform UI toolkits are great: I actually have a lot of complaints about all of them; it’s just that, in terms of ease of use, performance, and consistency of behavior, they’re all far ahead of web tech.
When it comes down to it, web tech means simulating a UI toolkit inside a complicated document rendering system inside a UI toolkit, with no pass-throughs, and even web tech toolkits intended for making UIs are really about manipulating markup and not actually oriented around placing widgets or orienting shapes in 2d space. Because determining how a piece of markup will look when rendered is complex and subject to a lot of variables not under the programmer’s control, any markup-manipulation-oriented system will make creating UIs intractably awkward and fragile – and while Google & others have thrown a great deal of code and effort at this problem (by exhaustively checking for corner cases, performing polyfills, and so on) and hidden most of that code from developers (who would have had to do all of that themselves ten years ago), it’s a battle that can’t be won.
It annoys me greatly because it feels like nobody really cares about the conceptual damage incurred by simulating a UI toolkit inside a document renderer inside a UI toolkit, instead preferring to chant “open web!” And then this broken conceptual basis propagates to other mediums (VR) simply because it’s familiar. I’d also argue the web as a medium is primarily intended for commerce and consumption, rather than creation.
It feels like people care less about the intrinsic quality of what they’re doing and more about following whatever fad is around, especially if it involves tools pushed by megacorporations.
Everything (down to the transistor level) is layers of crap hiding other layers of different crap, but web tech is up there with autotools in terms of having abstraction layers that are full of important holes that developers must be mindful of – to the point that, in my mind, rolling your own thing is almost always less work than learning and using the ‘correct’ tool.
If consumer-grade CPUs were still doubling their clock speeds and cache sizes every 18 months at a stable price point and these toolkits properly hid the markup then it’d be a matter of whether or not you consider waste to be wrong on principle or if you’re balancing it with other domains, but neither of those things are true & so choosing web tech means you lose across the board in the short term and lose big across the board in the long term.
Youtube would be a website where you click on a video and it plays. But it wouldn’t have ads and comments and thumbs up and share buttons and view counts and subscription buttons and notification buttons and autoplay and add-to-playlist.
Google docs would be a desktop program.
Slack would be IRC.
What you’re describing is the HTML5 video tag, not a video sharing platform. Minimalism is good, I agree, but don’t confuse it with having no features at all.
Google docs would be a desktop program.
This is a different debate, about why the web is used for these kinds of tasks at all, not about whether the result is minimalist.
The problem turns out to be some obscure FUSE mounts that the author had lying around in a broken state, which subsequently broke the kernel namespace system. Meanwhile, I have been running systemd on every computer I’ve owned for many years and have never had a problem with it.
Does this not seem a bit melodramatic?
From the twitter thread:
Systemd does not of course log any sort of failure message when it gives up on setting up the DynamicUser private namespace; it just goes ahead and silently runs the service in the regular filesystem, even though it knows that is guaranteed to fail.
It sounds like the system had an opportunity to point out an anomaly that would guide the operator in the right direction, but instead decided to power through anyways.
It’s a lot like how continuing to run in a degraded state is a plague that affects distributed systems. Everybody thinks “some service is surely better than no service” is a good idea until it happens to them.
At $work we prefer degraded mode for critical systems. If they go down we make no money, while if they kind of sludge on we make less but still some money while we firefight whatever went wrong this time.
My belief is that inevitably you could be making $100 per day, would notice if you made $0, but are instead making $10 and won’t notice this for six months. So be careful.
We have monitoring and alerting around how much money is coming in, that we compare with historical data and predictions. It’s actually a very reliable canary for when things go wrong, and for when they are right again, on the scale of seconds to a few days. But you are right that things getting a little suckier slowly over a long time would only show up as real growth not being in line with predictions.
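A baseline-comparison canary like the one described might be sketched like this (the threshold and dollar figures are made up for illustration):

```python
def revenue_alert(current: float, historical: list[float], tolerance: float = 0.5) -> bool:
    """Alert when current revenue drops below a fraction of the
    historical average; a slow, small decline stays invisible."""
    baseline = sum(historical) / len(historical)
    return current < baseline * tolerance


# A crash from ~$100/day to $10/day trips the alert immediately...
print(revenue_alert(10, [100, 95, 105]))   # True
# ...but a slide to $90/day does not, which is the long-term blind spot.
print(revenue_alert(90, [100, 95, 105]))   # False
```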
I tend to agree that hard failures are nicer in general (especially to make sure things work), but I’ve also been in scenarios where buggy logging code has caused an entire service to go down, which… well that sucked.
There is a justification for partial service functionality in some cases (especially when uptime is important), but, as with many things, I think the judgment calls involved are wrong often enough that I prefer hard failures in almost all cases.
Everybody thinks it’s a good idea “some service is surely better than no service” until it happens to them.
So if the server is over capacity, kill it and don’t serve anyone?
Router can’t open and forward a port, so cut all traffic?
I guess that sounds a little too hyperbolic.
But there’s a continuum there. At $work, I’ve got a project that tries to keep going even if something is wrong. Honest, I’m not sure I like how all the errors are handled. But then again, the software is supposed to operate rather autonomously after initial configuration. Remote configuration is a part of the service; if something breaks, it’d be really nice if the remote access and logs and all were still reachable. And you certainly don’t want to give up over a problem that may turn out to be temporary or something that could be routed around… reliability is paramount.
And you certainly don’t want to give up over a problem that may turn out to be temporary
I think that’s close to the core of the problem. Temporary problems recur, worsen, etc. I’m not saying it’s always wrong to retry, but I think one should have some idea of why the root problem will disappear before retrying. Computers are pretty deterministic. Transient errors indicate incomplete understanding. But people think a try-catch in a loop is “defensive”. :(
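As a sketch of the difference: a retry loop that is bounded and ultimately re-raises keeps the root problem visible, unlike an unbounded try-catch that masks it forever (the names and retry policy here are hypothetical):

```python
import time


def retry(op, attempts=3, delay=1.0):
    """Retry a bounded number of times, then re-raise so a persistent
    failure surfaces instead of being silently swallowed."""
    for i in range(attempts):
        try:
            return op()
        except Exception as exc:
            print(f"attempt {i + 1} failed: {exc}")
            if i == attempts - 1:
                raise  # make the root problem visible
            time.sleep(delay)
```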
So you never had legacy systems (or configurations) to support? I read Chris’ blog regularly, and he works at a university on a heterogeneous network (some Linux, some other Unix systems) that has been running Unix for a long time. I think he started working there before systemd was even created.
Why do you say that the FUSE mounts were broken? As far as we can see, they were just set up in an uncommon way: https://twitter.com/thatcks/status/1027259924835954689
It does look brittle that broken FUSE mounts prevent ntpd from running. IMO the most annoying part is the debuggability of the issue.
Yes, it seems melodramatic, even to my anti-systemd ears. It’s a documentation and error reporting problem, not a technical problem, IMO. Olivier Lacan gave a great talk last year about good errors and bad errors (https://olivierlacan.com/talks/human-errors/). I think it’s high time we start thinking about how to improve error reporting in software everywhere – and maybe one day human-centric error reporting will be as ubiquitous as unit testing is today.
In my view (as the original post’s author) there are two problems in view. That systemd doesn’t report useful errors (or even notice errors) when it encounters internal failures is the lesser issue; the greater issue is that it’s guaranteed to fail to restart some services under certain circumstances due to internal implementation decisions. Fixing systemd to log good errors would not cause timesyncd to be restartable, which is the real goal. It would at least make the overall system more debuggable, though, especially if it provided enough detail.
The optimistic take on ‘add a focus on error reporting’ is that considering how to report errors would also lead to a greater consideration of what errors can actually happen, how likely they are, and perhaps what can be done about them by the program itself. Thinking about errors makes you actively confront them, in much the same way that writing documentation about your program or system can confront you with its awkward bits and get you to do something about them.
In Canada most home routers (well, from bell at least, which is one of two dominant ISPs) come with a long randomly generated wifi password stamped on them.
Specifically, 8 characters long, and for no apparent reason limited to hex ([0-9A-F]{8}), creating about 4 billion possible passwords. It takes about a day on my GTX 970M to try every single one against a captured handshake.
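The arithmetic checks out: eight hex characters give 16^8 ≈ 4.3 billion candidates, and at an assumed rate of ~50,000 WPA guesses per second (a rough figure for a mobile GPU of that vintage, not a measured one), exhausting the keyspace takes about a day:

```python
keyspace = 16 ** 8                 # [0-9A-F]{8}
print(keyspace)                    # 4294967296, ~4.3 billion

hash_rate = 50_000                 # guesses/second, assumed for illustration
days = keyspace / hash_rate / 86_400
print(f"{days:.2f} days")          # about one day
```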
The default ESSIDs (wifi network names) are of the form BELL###, so there are a thousand extremely common ESSIDs. Apparently WPA only salts the password with the ESSID before hashing it and publicly broadcasting it as part of the handshake. In a few years of computation time on a decent laptop (far less if I rented some modern GPUs from Google…) I could build rainbow tables for every one of those IDs covering every possible default password.
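The ESSID-as-salt detail matches the WPA2 spec: the pairwise master key is derived as PBKDF2-HMAC-SHA1(passphrase, ESSID, 4096 iterations, 32 bytes), which is why a rainbow table is only reusable per ESSID. A sketch with the Python standard library:

```python
import hashlib


def wpa_psk(passphrase: str, essid: str) -> bytes:
    # PMK = PBKDF2-HMAC-SHA1(passphrase, ESSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), essid.encode(), 4096, 32)


# The same default-style password hashes differently under BELL123 vs BELL124,
# so the attack needs one precomputed table per default ESSID (only ~1000 of them).
print(wpa_psk("DEADBEEF", "BELL123") != wpa_psk("DEADBEEF", "BELL124"))  # True
```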
On the bright side, it looks like this new method extracts a hash that includes the MAC addresses acting as a unique salt, so at least the rainbow-table method will still require capturing a handshake.
I never had this realization. Now my head has exploded.
What tool do you use to try these combinations? And is it heavily parallelized? To me 4 billion should not take a whole day…
I experimented with pyrit (24h runtime; it builds some form of rainbow table, and I wrote a short program to pipe all the passwords to it) and hashcat (20h runtime; no support for rainbow tables, but it supports generating the password combinations by itself via command-line flags). They are both heavily parallelized, with 100% utilization of my GPU.
My GPU is relatively old and sits in a laptop with shitty cooling, which may contribute to the runtime.
Running on a CPU it said it would take the better part of a month.
Interesting. While waiting for a reply, I thought to myself: I wonder how much it would cost to run it on Google Compute with the best hardware. Could be worth it to those who want wifi for a week or longer without paying anything. Spooky.
In Luxembourg every (Fritz)box comes with a password printed only in the documentation (not on the box itself) that is 20 hex characters long (5×4 chars). It’s a pain to type at first, but it seems like a good one.
Every time I see a post about Nim I hope for a Golang competitor that can actually bring something new to the table. But then I look at the library support and community and walk back disappointed. I am still hoping for Nim to take off and attract Python enthusiasts like me to a really fast compiled language.
But then I look at the library support and community and walk back disappointed.
It’s very hard to get the same momentum that Go achieved, just by the sheer fact that it is supported and marketed by Google. All I can say is: please consider helping Nim grow its community and library support, if everyone sees a language like Nim and gives up because the community is small then all new mainstream languages will be owned by large corporations like Google and Apple. Do you really want to live in a world like that? :)
How about https://crystal-lang.org
I have tried it; the GC is way too optimistic, so under high load you see memory being wasted. I love the syntax and power of the language, but it still falls short when you can’t compile to a single binary (like Golang can) and you end up with weird cross-compilation issues. Nim is much more efficient in terms of memory and GC overhead.
Let me rephrase: the binary is not standalone with everything statically linked (LibSSL and some dependencies). I had to recompile my binaries on the server to satisfy the dynamically linked libraries at particular versions.
I think that’s more a result of Go having the manpower to develop and maintain an SSL library written in Go. As far as I understand, if you were to write an SSL library in 100% Crystal you wouldn’t have this problem.
By the way, Nim goes a step further. Because it compiles to C you can actually statically embed C libraries in your binary. Neither Go nor Crystal can do this as far as I know and it’s an awesome feature.
Is there a distinction between “statically embed C libraries in your binary” and “statically link with C libraries”? Go absolutely can statically link with C libraries. IIRC, Go will still want to link with libc on Linux if you’re using cgo, but it’s possible to coerce Go into producing a full static executable—while statically linking with C code—using something like go install -ldflags "-linkmode external -extldflags -static".
There is a difference. Statically linking with C libraries requires a specially built version of that library: usually in the form of a .a or .lib file.
In my experience, there are many libraries out there which are incredibly difficult to statically link with; this is especially the case on Windows. In most cases it’s difficult to find a version of the library that is statically linkable.
What I mean by “statically embed C libraries in your binary” is: you simply compile your program’s C sources together with the C sources of all the libraries you depend on.
As far as Go is concerned, I was under the impression that when you’re creating a wrapper for a C library in Go, you are effectively dynamically linking with that library. It seems to me that what you propose as a workaround for this is pretty much how you would statically compile a C program, i.e. just a case of specifying the right flags and making sure all the static libs are installed and configured properly.
You have to jump through quite a few hoops to get dynamic linking in Go.
By default it statically links everything, doesn’t have a libc, etc.
It’s not uncommon or difficult in Go to compile a webapp binary that bakes all assets (templates, images, etc.) into the binary along with a webserver, an HTTPS implementation (including provisioning its own certs via ACME / Let’s Encrypt), etc.
There are different approaches, https://github.com/GeertJohan/go.rice for example supports 3 of them (see “tool usage”)
I think he means the ability to statically build [1] binaries in Golang. I’d note that this feature is not so common and is hard to achieve. You can do this with C/C++ (maybe Rust), but it has some limits, and it’s hard to achieve with big libraries. Not having statically built binaries often means that you need a strong sense of what you need and to what extent, or good packaging/distribution workflows (fpm/docker/…).
It’s a super nice feature when distributing software (for example, tooling) to the public, as it feels like: “here’s your binary, you just have to use it”.
The “programming by duct taping 30 pip packages together” method of development is pretty new, and it isn’t the only way to program. Instead, you grow the dependencies you need as you build your app, and contribute them back once they’re mature enough.
More time consuming, but you have total control.
Despite having the console open in another tab, having used it recently, this was the hardest quiz I’ve taken in a while.
I guess that many people don’t even use the console (they use Terraform or Amazon’s equivalent instead); that would explain the results.
If it had been Google, would people have reacted the same way? I feel there’s a constant anger around Microsoft, generally localized around its OS, but the Azure teams and this acquisition show great advances for Microsoft and hopefully a better future for everyone!
I think people would be much more upset if it had been Google, but really both companies have historically short attention spans.
The problem is that products which would be great successes if run as independent entities are frequently seen as distractions or failures inside large corporations like Google or Microsoft. And if they do decide to maintain an acquired product, the amount of value they need to juice from it is dramatically higher than would be needed by an independent org.
What would you prefer it to use as the underlying storage? (I am trying to understand what people actually want.)
I was thinking of storing everything, including the comments in a git instance, which would work independently of what git frontend you are using, but then I would have to speak git protocol from the browser which sucks. I may have a look at git.js
Looking at git.js documentation :(
“I’ve been asking Github to enable CORS headers to their HTTPS git servers, but they’ve refused to do it. This means that a browser can never clone from github because the browser will disallow XHR requests to the domain.”
Anything self-hosted would be viable, but everything on git would be even better, although probably more complicated. We use gerrit at work (which sucks at several levels), and mostly anything third-party is very much disallowed. Maybe you could create an abstraction that would speak Github API to github and git protocol to other servers where this would work?
The other possibility could be a sort of optional backend/proxy: if the git server doesn’t have CORS, you could spin up that optional server.
After thinking about it some more, there’s a lot that GitHub offers that I would have to reimplement myself. Authentication, for one thing. If it were used in a stand-alone mode in an enterprise, some kind of authentication would still be needed. People would probably want SSO. Then there are notifications: GitHub sends you an email when you are mentioned in a bug. I would have to somehow interact with the company’s mail server. And so on. This is my hobby project and I don’t really have time to take on that amount of complexity.
The only issue that I have with it is sharing my organization details. Although you could do it manually, I’m always a bit annoyed about this.
?? GitLab is moving from Azure (Microsoft) to Google Cloud, and they’re announcing the unavailability in these places as “because Google cloud”. What’s the difference between Azure and Google?
Azure was available there, even though Microsoft is also a US company?? How?
Microsoft and Google are government-level companies; they work with and for governments. That means that sometimes they will have some advantages somewhere and sometimes have to give up something else, which probably explains the difference between them.
As a Cuban (not living there right now), I’m not really excited by this news. I used GitLab in the past, and I still use it for personal projects. I experienced something similar with Bitbucket a few years ago when they went public; at least GitLab has posted some news about it, whereas Bitbucket closed off access without any warning.
The amount of pearl-clutching in this Twitter thread is barely short of astounding. People have some quaint assumptions about how software services collect and use usage data that originates inside the platform. What service wouldn’t want to measure how people use it? What company Spotify’s size got there without measuring things?
Spotify’s discovery weekly playlist, trending charts, song/album-based radio, daily genre-based playlists, “popular in this location” charts, artist suggestions, playlist auto-extend - all of these would not be possible without data collection. And these are actually things I use and help me find music from around the world I wouldn’t otherwise. The main thing is that there be enforcement inside Spotify that none of the employees are peeking at the data of a particular user. We need regular inspections and certifications for this sort of a thing. I’m fine with anonymized data analysis. (And de-anonymization falls under “not being able to access the data of a particular user”.)
You must be a creator! Please don’t disdain the consumers’ responses. Just… observe. This is important.
To my eye, a good outcome of these GDPR Data Extracts will be that consumers/users demand control of their devices!
I’ve been trying to tell people for years what it means, what actually happens, when they use ‘free’ services every waking moment. They use them casually, and excitedly, and while in mourning, and on that day they fell in love.
Spotify knows when you’re drunk. (Well, they might, if they figured out how to extract that.)
Google knows when you’re aroused (Well, they might, if they figured out how to extract that)–and if you use their communication services, or Android, they might even put two and two together and figure out if you have a crush on any specific person in your contacts list…
Don’t get me started on politics!
I remember, about exactly twenty years ago, I got in the habit of running tcpdump constantly, and through that I identified which programs would talk on the network. In those days, my system didn’t generate a constant stream of traffic when it was idle, so it was easy. I was offended to see some apps “phone home” when there was clearly no benefit to me, the user. Since I had full control of my PC in 1998, I could block it, intercept it, delete the app, hexedit the app, whatever I liked.
You know what I mean? It’s one thing for Spotify to tell their app to tell your phone to send all that data. It’s a whole different thing that your phone DID it and does not offer you an audit mechanism!
On the other hand some consumers give their data knowing that it’s an exchange for features they want.
Many people like FB’s targeted ads!
No mention of how to re-encrypt an existing key with the better -o scheme; seems like a strange omission. The invocation is ssh-keygen -p -o -f keyfile if you were curious.
I guess the point is more about unsafe defaults than how to rekey. But nonetheless, it’s present:
You can upgrade existing keys with ssh-keygen -p -o -f PRIVATEKEY.
Maybe an edit?
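For anyone who wants to see the conversion in action, here’s a quick sketch (the key path and parameters are just for the demo, and the empty passphrases are obviously not something you’d use for a real key):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d)

# Generate a throwaway key in the legacy PEM format.
ssh-keygen -q -t rsa -b 2048 -m PEM -N '' -f "$dir/demo_key"
head -1 "$dir/demo_key"   # -----BEGIN RSA PRIVATE KEY-----

# Re-encrypt it in place with the new format (-o);
# -P is the old passphrase, -N the new one.
ssh-keygen -q -p -o -P '' -N '' -f "$dir/demo_key"
head -1 "$dir/demo_key"   # -----BEGIN OPENSSH PRIVATE KEY-----
```

The header change is an easy way to confirm the key is actually in the new format.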
aws kms create-key
I think that costs a little bit of money? IIRC I saw a dollar or so from KMS on my bill… Damn Bezos :D
Anyway, fetching from a service that always reveals the secrets to your machine is not that different from a local file on that machine. It’s good against random people trying disk recovery on a volume (though that’s not a problem on EC2, they do wipe all data) and accidental backups/snapshots (“your VM is cast into an AMI” as you mentioned), but fundamentally it’s still secrets-on-your-machine.
I think it might be reasonable for these services to support additional protection — not just based on secrets, but also, say, IP address whitelisting. That way, if someone got your secrets from an AMI you accidentally made public, they couldn’t access your accounts, because they wouldn’t be connecting from your machines.
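Something like that can already be approximated with an IAM policy condition. A sketch (the key ARN and CIDR range are placeholders) that denies kms:Decrypt unless the request comes from a known address range:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyDecryptFromUnknownIPs",
      "Effect": "Deny",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
      "Condition": {
        "NotIpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

One caveat I know of: requests that arrive via other AWS services acting on your behalf won’t come from your CIDR, so a real policy usually needs to account for that too.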
costs a little bit of money
Unfortunately, yes. As said below, it’s $1/mo. It’s a shame they charge for it, but most of the AWS accounts I’m privy to are spending a couple hundred a month, so it goes unnoticed.
but also, say, IP address whitelisting
Absolutely! I use an IAM instance role to allow retrieving and decrypting credentials, so the machine/container has to be launched in AWS and under the right role to allow retrieving credentials.
With clouds being so dynamic, IP address whitelisting is harder – containers and instances come online at different places by default. Elastic IPs, etc, make some room for lockdown. Fetching secrets at container start, process start, etc is a great way to keep the secrets off the file system, but it’s only one component of the broader picture of protecting your database credentials.
to allow retrieving credentials
I mean, require the right IP/machine to connect to the service the credential is for. I guess you already have that for AWS services :)
Kinda surprised that reddit - a site which hosts some of the rougher parts of the internet - did not have a Head of Security until 2.5 months ago?
Their headcount has always been kinda small I think? You need to hit a certain size before carving out a specific position.
“Kinda small” is ~250 people. They have data on 330 million users.
I wouldn’t tie the position to headcount directly; the question is how much of a security need you have.
That’s probably not a good way to measure it either; maybe the number of posts like this one? But that’s true.
My point is, they could have been regularly infiltrated for years, and they only noticed now thanks to new talent in house. There’s only so much a jack-of-all-trades team can do while firefighting all the other needs.
I’ll add to mulander’s hypothetical that this has happened in all kinds of big companies with significant investments in security. They were breached for years without knowing they were compromised. People started calling them “APTs” as a PR move to reduce the humiliation. It was often vanilla attacks, or combos of those with some methods to bypass monitoring that companies either didn’t have or really under-invested in. Reddit could be one of these if they had little invested in security, especially intrusion detection/response.
Because reddit does not host financial data or (for the most part) deeply personal data that is not already out in the open, I would assume that they are not that interesting a target for hackers looking for financial gain, but more interesting for script kiddies who are looking to dox or harass other users.
Many subreddits host content and discussions that people don’t want to be attached to. The post even appreciates that and recommends deletion of those posts.
I find it telling that you go out of your way to push people interested in gaining personal data into the script-kiddie corner. Yes, SMS-based attacks are in the range of “a script kiddie could do that”, which makes it even worse.
Criminals are using this type of information for targeted extortion and other activities. The general view that this is mostly the realm of “script kiddies” detracts from the seriousness and provides good cover for their activities.
I made an assumption, but reading your reply and that of @skade you are right that there are lots of uses for the data from a criminal perspective, especially for a site the size of reddit.
[Comment removed by author]
I doubt your comment helps anyone in the community… A little “why” would probably be enough to help others understand your point of view!
Why use CloudWatch for monitoring alerts but not for log aggregation too? ELK seems like a big thing to set up and maintain. I’d add that most of these things can be deployed and maintained with Terraform and a bunch of Ansible scripts (which should only have to care about your application).