Given how many times over the years I’ve had journald completely hose itself and freeze apps running on production systems [1], I don’t find his arguments especially compelling. I’ve had far more problems with journald/journalctl than I ever did with various syslog implementations. Yes, you can still install syslog, but journald still gets the logs first and then forwards/duplicates the data to syslog.
Maybe journald is better now? Been a couple of years since I had to deal with it on high volume log systems. At the time we ended up using a program wrapper (something similar to logexec) that sent the logs directly to syslog, and avoided systemd/journald log handling entirely.
[1]: app outputting some log data, journald stops accepting app output, app stdout buffer fills, app freezes blocking on write to stdout
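For illustration, the failure mode in [1] can be reproduced with any pipe, no journald required. This is a generic sketch of the mechanism, not journald’s actual code path:

```python
import os
import fcntl

# A writer blocks as soon as the pipe buffer fills and nobody reads -- the same
# failure mode as an app whose stdout is connected to a stalled log collector.
# O_NONBLOCK makes this demo error out instead of hanging, so we can see the
# exact point where a blocking writer would freeze.
r, w = os.pipe()
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)

written = 0
try:
    while True:
        written += os.write(w, b"x" * 4096)
except BlockingIOError:
    # With a normal blocking stdout, the app would be stuck inside write() here.
    print(f"pipe buffer full after {written} bytes; a blocking writer would hang")
```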
For me it’s quite the opposite, I never had any issues with journald, neither in production nor in development environments.
Seconded, I actually quite like that I can see all my logs the same way without setting up stuff on my side. With syslog I’d have to tell every program where to log and the systemd combo just takes away that manual burden.
I had this experience too, but that was because journald was hanging due to my disks being slow as molasses (I had deeper problems). I’m honestly not sure whether to blame journald for that.
Writing a bunch of Terraform modules to provision a secure Nomad + Consul cluster on Hetzner cloud. Turns out that Terraform is fairly limited and cumbersome in some places. Really cool idea, though!
Is there a comprehensive and/or up-to-date set of recommendations for simple, static HTTP servers anywhere?
After years of trying to lock down Apache, PHP, CMSs, etc. and keep up to date on vulnerabilities and patches, I opted to switch to a static site and a simple HTTP server to reduce my attack surface and the possibility of misconfiguration.
thttpd seems to be the classic option, but I’m a little wary of it due to past security issues and an apparent lack of maintenance (which would be fine if it were “done”, but the security issues make that less credible). I’m currently using darkhttpd after seeing it recommended on http://suckless.org/rocks
Edit: I upvoted the third-party hosting suggestions (S3, CloudFlare, etc.) since that’s clearly the most practical; for personal stuff I still prefer self-hosted FOSS though :)
If all you need is static HTTP you don’t have to host it yourself. I host my blog on Amazon S3 (because I wanted to add SSL and GitHub didn’t support that last year) and for the last 13 months it’s cost me about $0.91/month, and about two thirds of that is Route 53 :-)
AWS gives you free SSL certificates, which was one of the main drivers for me to go with that approach.
I use S3 / CloudFront for static HTTP content. It’s idiot proof (important for idiots like me!), highly reliable, and I spend less every year on it than I spend on a cup of coffee.
The only real security risk I worried about was that someone could DDoS the site and run up my bill, but I deployed a CloudWatch alarm tied to a Lambda to monitor this. It’s never fired. I think at my worst month I used 3% of my outbound budget :)
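For anyone wanting to set up something similar, the alarm half can be created along these lines. This is a hedged sketch, not the parent’s actual setup: the alarm name, threshold, and SNS topic ARN are placeholders, AWS billing metrics live only in us-east-1, and they must first be enabled in the account’s billing preferences.

```shell
# Sketch: alarm when the month's estimated charges cross a threshold.
# Replace the threshold and the SNS topic ARN with your own values.
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name monthly-spend-over-10-usd \
  --namespace AWS/Billing \
  --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum \
  --period 21600 \
  --evaluation-periods 1 \
  --threshold 10 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-alerts
```

The SNS topic can then fan out to email, or to a Lambda as the parent describes.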
I’ve always wondered why AWS doesn’t provide a spending limit feature… it can’t be due to a technical reason, right? I know their service is supposed to be more complex, but even the cheapest VPS provider gives you this option, often enabled by default. I can only conclude they decided they don’t want that kind of customer.
I also worried about the risk of “DDoS causing unexpected cost” when I was looking for a place to host my private DNS zones. To me it appeared that the free Cloudflare plan (https://www.cloudflare.com/plans/) was the best fit (basically free unmetered service).
Would using that same free plan be a safer choice than CloudFront from a cost perspective?
You’d be hard pressed to go wrong with httpd from the OpenBSD project. It’s quite stable and has been in OpenBSD base for a while now. Its lack of features definitely keeps it in the simple category. :)
There is also the NGINX stable branch. It’s not as simple as OpenBSD’s option, but it is stable, maintained, and well hardened by virtue of being very popular.
In hurricane architecture, they used Nginx (dynamic caching) -> Varnish (static caching) -> HAProxy (crypto) -> optional Cloudflare for acceleration/DDoS. Looked like a nice default for something that needed a balance of flexibility, security, and performance. Depending on one’s needs, Nginx might get swapped for a simpler server, but it gets lots of security review.
I’ll also note for OP this list of web servers.
Check out this.
Yeah, I also like this similar list, but neither provide value judgements about e.g. whether it’s sane to leave such things exposed to the Internet unattended for many years (except for OS security updates).
I had been vaguely aware of Copperhead OS but never looked into it or used it (I used Cyanogenmod before they imploded, and Lineage OS thereafter). I don’t know anything about the context for this other than the reddit and hacker news links here. Everything I’ve seen so far makes me feel inclined to be sympathetic to this Daniel Micay fellow, so I can’t help but wonder if there’s any information from his former business partner’s side of the story that would make me feel less sympathetic.
I also chill in a few old IRC channels with strncat from back in my major Arch days; he has a lot of people in the open source community who respect his contributions. My bet is he’ll come out ahead of this if he can get untangled from the CopperheadOS company.
Daniel Micay was a prolific Rust contributor. (In fact, he is still in the top 20 even though he has been inactive since 2015.) In his Rust work, I found him to be a straightforward person.
I have a good impression of Daniel Micay after talking with him on IRC. He’s also an unusually knowledgeable programmer.
I’m continuing to develop genact, a useless activity generator that does something between pretending to do actual work and looking like a Hollywood computer terminal. It’s written in Rust and has Travis set up so that one push automatically builds it for Linux, Mac, Windows and the Web.
There’s a bunch of weird red flags here:
I’m messing around with it now and it seems really cool, but there’s a lot they could do to make that setup process better and more secure.
Edit:
It also replaced the atom command on my system with one that launched Luna Studio
Those things you mention really seem quite bad. Hopefully, they’ll fix that soon. Especially the one with atom is baffling!
It’s important to note that this article refers to the Docker company being dead as opposed to the software. It would be quite the overstatement to say that the software is dead given how many people use Docker for deployment, CI and development.
Personally, I’m not worried. Even if the Docker company went under there is too much momentum behind the software. If the company stopped supporting the software, there’d be a hard-fork, a new Dockerhub and things would go on.
It’s important to note that this article refers to the Docker company being dead as opposed to the software.
The blog title is Docker, Inc is Dead, which is accurate. However, I think the OP did not do the same when submitting to Lobsters. I’ve suggested a title change.
The blog post’s title definitely was just “Docker is Dead” when this was first submitted, I checked when I was debating suggesting a different title here earlier.
Yeah. Stuff like rubygems.org lives on sponsorship from many companies, rather than a backing company. It’ll be fine. I can’t see how Docker the company matters anymore either.
There’s no mention of what will happen with all the OSS code and docker hub. I realize this isn’t an official press release, but I wish it was still mentioned. Do the current Docker owners plan on moving all their public/oss assets into a non-profit like Mozilla and keep it community driven? Do various industries plan on funding it as an open source initiative? What does this mean for Open Container and what working group will agree on the future of docker/containers?
I would love to see something like this for “Serverless” and lambda-style architecture. I believe these are hyped but have some serious downsides.
I’ve found that the criticisms levied at the Actor Model apply to the serverless architecture as well. AWS’s and Google’s offerings link their existing services together to feed a mailbox that your function gets invoked by.
It’s been converted to a single-page app.
I was curious myself since I haven’t looked at it in a while. It was kind of hard to get results since searching for its issues actually brought up more pro-serverless articles than anti-serverless. My brief Googling found a Not Ready for Primetime article in February 2016, these issues in June 2016, a Challenges of Serverless article in March 2017, a few drawbacks in May 2017, and a Quora answer with a few more gripes.
I wonder: Do you not customize the rest of your system nowadays? Just out of interest, what is the rest of your setup like now?
MacOS using default terminal.app using a very minimal bash setup. I use different computers too much to make relying on customizations a habit.
Do we actually need ref?
I’ve been wondering about this myself. Can anyone provide an intuitive explanation that goes beyond the official docs?
Yes, we still need ref. In the context of patterns, ref adds a reference, while & matches a reference.
let x = Some(42);
if let Some(ref y) = x {} // y is now &42
// if let Some(&y) = x {} <- fails to compile, because x does not contain a reference

let value = 42;
let a = Some(&value);
if let Some(ref b) = a {} // b is now &&42
if let Some(&c) = a {} // c is now 42
(note: the examples use if let rather than a plain let, because rustc rejects a refutable pattern like Some(...) in a let binding: as far as the compiler is concerned, x and a might be None)
Since * is the dereference operator, you might expect * in a pattern to create a reference, but * is also the sigil for raw pointers (in the same way that & is the sigil for references) so that wouldn’t work.
Why are workstations getting so seemingly rare? I sometimes get to check out offices in young startups and everyone seems to be working on laptops. I strongly prefer working at my workstation as opposed to using my laptop because of screen space, and overall hardware performance. I probably have a really skewed perspective on this as most of my close friends and colleagues share my sentiments.
So, why prefer a laptop to a proper workstation as your general workhorse? I imagine most developers work either at home or at their place of work, not mostly mobile, so I’m not sure why so many people prefer a laptop at all times.
I’ve noticed. I think there’s a few things, order may vary:
and, way at the end of priorities:
What amuses me is that nearly everywhere I see that, people attach widescreen monitors or even travel with portable screens (like this https://www.asus.com/us/Monitors/MB16AC/).
I engage in this myself: my work computer is a MBP, and unless I’m traveling it’s on my desk attached to a 24” LCD, a keyboard, and ethernet (though I mostly interact with it via ssh and Synergy keyboard/mouse sharing).
I’m speaking from a different perspective here, but I think this is related: If you look at students and workers at universities and colleges, most of them (at least here) spend their day on the go and have to take their computing with them - the laptop is the obvious, classical solution, not many replace it with tablets or some hybrid device. I for one spend my entire day that way, so I’m used to the limitations, but I enjoy the benefits as well. I guess it’s a thing of habit. Most drawbacks (as usual) can be worked around, too: computationally expensive tasks can be run on a server you connect to remotely, and screen estate can be used very efficiently and with great comfort once one finds a good keyboard interface to a window manager of one’s choice. The mobility aspect can’t be recreated another way, however.
We are a dying breed. I’m the only one in my office with a workstation. I built it myself (on the company’s dime) when I joined three years ago and it has served me well.
I’ve observed that the people with laptops like to bring them and work on them everywhere they go. They clearly like the mobility. While I also have a laptop, I hate using it, especially when I’m mobile. Not only do I dislike the limited screen real estate (my workstation has three 1920x1200 24” monitors), but I psychologically hate working when I’m mobile. It’s uncomfortable and hard for me to focus. I would much rather sit quietly with my thoughts, pay attention to the other humans that are with me or read something on my phone.
I suppose I’m talking about the high end of the spectrum, which is fair since we’re also likely talking about high-end laptops in comparison to these workstations.
Mainstream midrange (like i5, since Sandy Bridge) hardware got good enough, and gaming took over HEDT for most high end applications.
I don’t actually like the part with the squashing. I don’t want to delete information that might make it easier to git bisect or that shows how a work item was created. I think explicit merges are good. Trivial merges (such as local history merges) should be removed with a rebase though, they don’t add any information.
[Comment removed by author]
Well you can always use a compiler’s -S output? Or do you mean something that is pretty printed like here?
It’s like someone looked deeply into my past.
One thing I disagree with: Why not YAML? I like using that for gamedev, plenty of libraries to handle the parsing for you and minimal bullshit. After all, you need something to define your entities in and chances are that you are using something where it is easier to reload a text file than dynamically recompile your code.
Lua! Reloading a text file and dynamically recompiling your code are equally easy.
json.decode('stuff.json')
loadfile('stuff.lua')
The only caveat is that the Lua file can technically do whatever it wants to global scope. On the flip side, who cares? This is way easier.
The only caveat is that the Lua file can technically do whatever it wants to global scope.
Eh; it takes about three additional lines to sandbox it so it can’t touch global scope. <3 Lua for this kind of thing.
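For the record, the “three lines” look roughly like this in Lua 5.2+ (in 5.1 you’d reach for setfenv instead; the chunk string here is just a stand-in for loadfile on an untrusted file):

```lua
-- Minimal sandbox sketch: give the chunk its own environment table.
-- Anything it assigns "globally" lands in env, not in the real globals.
local env = {}                                   -- empty: chunk sees no globals
local chunk = assert(load("score = 42", "cfg", "t", env))
chunk()
print(env.score)  -- the chunk's "global" ended up in env
print(score)      -- real global scope untouched (nil)
```

With loadfile the same fourth argument applies: `loadfile("stuff.lua", "t", env)`. Whitelist whatever the config actually needs (e.g. `env.math = math`) and nothing else.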
From experience integrating Lua into two different projects, getting the VM 100% watertight is easy to get wrong. When you get a void* from the VM it can really be any pointer you ever passed to it, so if you have two different kinds of structs you expose to Lua, you’ll have to add magic numbers to catch abuse. The default module loader uses the file system, so you’ll want to replace that with your own (with little documentation and example code on how to do that). More troubles I remember: having multiple wrapper objects for the same native object; having to maintain lists of Lua objects within the VM but hidden from usercode, to loop over all X to inject data from the native side / run callbacks.
It was very nice in the end. I even had non-transitive imports (A imports B, B imports C, B can use all objects of C in its scope but only the globals originating from B are visible from A). But it took many, many hours, and I’m not quite sure it saved time overall.
The takeaway from the OP is probably: “Just use Unity/C#”. Personally I like following up on all these interesting problems, like how to integrate Lua, despite it not being the best thing to do right now, because my actual goal is exploration of technology, not making a game.
When you get a void* from the VM it can really be any pointer you ever passed to it, so if you have two different kinds of structs you expose to Lua, you’ll have to add magic numbers to catch abuse.
I just make sure all my [light]userdata have metatables. Before I cast them in C, I check that the metafield __name is what I gave to luaL_newmetatable. You could thwart this by changing the metatable name to some other valid name, but there isn’t really a way to do that by accident.
I even had non-transitive imports (A imports B, B imports C, B can use all objects of C in its scope but only the globals originating from B are visible from A)
Pretty sure this is standard practice. It’s described in PIL chapter 15.
PIL suggests achieving this by explicitly listing every single symbol one more time at the end. I didn’t want that.
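A sketch of what I assume the environment-based alternative looks like (Lua 5.2+; module names and sources are illustrative, and a real version would use loadfile and a proper search path instead of inline strings):

```lua
-- Each module chunk runs in its own environment table, and import() hands back
-- only that table -- so A sees B's globals, but not the globals of B's imports.
local sources = {
  C = "c_value = 3",
  B = "local C = import('C'); b_value = C.c_value * 2",
  A = "local B = import('B'); a_value = B.b_value + 1",
}

local loaded = {}
local function import(name)
  if not loaded[name] then
    -- __index = _G exposes the stdlib; the module's own globals land in env
    local env = setmetatable({ import = import }, { __index = _G })
    assert(load(sources[name], name, "t", env))()
    loaded[name] = env
  end
  return loaded[name]
end

local A = import("A")
print(A.a_value)  -- 7
print(A.c_value)  -- nil: C's globals are not visible through A
```

No symbol has to be listed twice; whatever a module assigns globally is its public interface, and imports stay non-transitive for free.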
(also, read the footnotes…probably the best bits of the article are scattered through there)
(also also, when he bandies about Baumol’s Effect, what he’s referring to is more aptly named Baumol’s Cost Disease. Note the different tone.)
(also also also, note that Graham was born in 1964–this helps place his subjective timeline a bit better)
To elaborate on my point, let’s look at the final paragraph or two and see what Graham’s getting at:
Make no mistake: Graham very much has an intended audience here.
There are several things I think Graham gets very incorrect.
First, some rhetorical flourishes he engages in are cute but waste time:
The bits about his childhood range from irrelevant to disingenuous. Who hasn’t in their teenage years felt like the system is rigged, that the world is flawed and disingenuous? That’s virtue signalling on his part for techies with a libertarian or faux-elitist viewpoint. “Mmm yes the common man, unaware of how fake everything is </tips fedora.>”. Playing to a crowd.
The age of recollection for him in this is 13–so, 1977. His comments about cars and tailfins? Totally wrong: if you look at cars from that era, you don’t see a lot of spurious chrome. He was probably thinking of cars from the ’50s. ’70s cars were, if anything, mainly US companies figuring out how the hell to deal with the energy crisis.
He also makes a slick little backhand against unions by pointing out their inability to have antitrust actions taken against them–which he then slyly references in a wink to services like Uber and other sharing-economy companies by suggesting that maybe they count, sorta, as unions.
Second, he outright ignores several historical events that had massive effects on the economy:
There is no mention of the Vietnam War. This was huuuuuge, and had a notable effect in both college enrollment and in civil rights. If Graham’s argument is about increasing wealth inequality being inevitable and a result of a return to the status quo, he should probably at least mention a conflict that helped fund education, incentivize education, and which massively strengthened the political (and hence economic power) of minorities.
There is no mention of the Cold War, especially of the Space Race. The amount of money flowing around for small job shops doing contracting work on military equipment was nontrivial. The entire development of ARPA and the increasing access to government contracts for contractors and subcontractors is a direct counterpoint to his “only a few national companies” narrative. The entire rise of standardized parts from this political climate (MIL-STDs) made a lot of market opportunities possible that weren’t before. Silicon Valley was built on little companies, not national companies, doing semiconductor work and specialized IC design.
There is no mention of standardized shipping containers, and the attendant rise in global trade. It’s this exact trade that made possible both the outsourcing he mentions and also the disruption of existing national companies–and most importantly, the leveling of the playing field and reduction of variance in productivity.
There is no mention of the 70s energy crisis and resulting stagflation and high unemployment. This recession had a nontrivial effect on both the starting of new businesses and the continued existence of older national corporations in the face of newcomers.
There are a lot of other things that I too am leaving out, but hopefully my point is clear: there were waaay too many factors to support the simple narrative of “consolidation -> WW2 -> oligopoly -> 2000s fragmentation” that Graham is trying to sell us.
Third, and perhaps most annoying to me, is that Graham discounts both small businesses and rising-all-tides phenomena.
Apparently, all of the little contractors and job shops and mom/pop stores and restaurants didn’t exist in the 20th century–Graham kinda handwaves this by claiming that people were only told to be “executives” and not to found their own companies. I think that’s patently false, and if we were to look at the number of businesses started every year in the 20th century (excluding shell corporations, presumably) we’d see a long tail of local businesses. Hell, look at the backs of old issues of Popular Mechanics or Popular Electronics or even old newspapers to see that there was a vibrant ecosystem of mail-order businesses starting up and flourishing and floundering and dying off. People clearly were still entrepreneurs.
(And honestly? That’s my biggest problem with Graham, both here and in general. He has to push an agenda that entrepreneurship is special (at least the modern type is!), that it has never existed before (by rewriting history or selectively recalling it as he does here), and that only the businesses that fit the YC-style model (rapid growth and slash-and-burn) are real companies worth considering.)
Fourth, Graham does a really poor job describing wealth–which is unfortunate, because it’s kinda at the core of his ramble here. Without a clear definition, his sidelong equating of wealth creation with permanent increasing inequality can’t be considered in any rational way. We can consider wealth under many definitions–and I’ll give an incomplete listing here:
Each of these types of wealth differs in how it is deployed. Most people probably think in terms of currency, slightly fewer in terms of possession of goods or perhaps tools. Fewer still think in terms of the capital required to fabricate or make those other forms of wealth. Perhaps fewest will think in terms of jobs created (for wealth redistribution).
And that last category? The “wealth” of being the most sought-after investment vehicle. Notice that out of all of the rest, this form is both the most variable and also the most rewarding (economically)–and yet, it is also the only one that can’t be shared without destroying its value.
When Graham writes this essay, he uses “wealth creators” and signals everything but that last category–when the core of his business success post-Viaweb is primarily kick-the-can startup reselling based on hype.
Don’t mistake him for supporting wealth creation in any traditional sense, because that isn’t what he’s trying to sell you on. He’s trying to say, in a hamfisted shy-libertarian way, that we should ignore economic inequality because that’s the way it’s always been, that’s the way it’s supposed to be, and that we’ll endanger new wealth if we try to fight it–while using “wealth creation” in a way that is tortured enough to be logically consistent to him while tricking people into ignoring what he’s about.
One last thing.
It should be obvious at this point why it’s funny that he should mention Baumol’s Cost Disease (or, to Graham, Baumol’s Effect):
He’s saying that the market inefficiency created by firms attempting to retain labor (e.g., raising wages to prevent firm-hopping) is actually the creation of wealth…because to somebody whose business is literally buying and selling talent and market opportunity, it is.
For the rest of us, for small businesses trying to make payroll or employees trying to avoid getting screwed over on salary because they aren’t the anointed assets, it’s pretty obvious that it’s just a distortion that is hurting our industry.
tl;dr: don’t learn about economic history from Paul Graham, who is neither objective nor thorough.
Great critical write-up. Do you think Paul Graham might be interested in replying to your counter? I guess not, but it would certainly be interesting to read his reaction.
The funny thing about that is he references DHH as a good example in this essay:
http://paulgraham.com/vcsqueeze.html
That and a Twitter debate were the only two results I got out of Googling both names. Funny to see him held up as an example at the beginning of an essay, and then see people blocked just for mentioning him.
@paulg blocked me too. No idea why, as it was long before his goons started their little war that involved me. Consider it an honor.
There is no mention of the Vietnam War.
I agree with you, but to be absolutely fair, he does mention it here (though admittedly in passing):
In a way mid-century TV culture was good. The view it gave of the world was like you’d find in a children’s book, and it probably had something of the effect that (parents hope) children’s books have in making people behave better. But, like children’s books, TV was also misleading. Dangerously misleading, for adults. In his autobiography, Robert MacNeil talks of seeing gruesome images that had just come in from Vietnam and thinking, we can’t show these to families while they’re having dinner.
Don’t mistake him for supporting wealth creation in any traditional sense, because that isn’t what he’s trying to sell you on. He’s trying to say, in a hamfisted shy-libertarian way, that we should ignore economic inequality because that’s the way it’s always been, that’s the way it’s supposed to be, and that we’ll endanger new wealth if we try to fight it–while using “wealth creation” in a way that is tortured enough to be logically consistent to him while tricking people into ignoring what he’s about.
Paul Graham, in 2004, came out swinging as a Valley cheerleader when everyone thought that “startups” were this silly ‘90s thing that would never come back, like disco. He was right, insofar as he pointed out (during post-tech-bubble Bush-era malaise) that not everything about the late 1990s was bullshit. It’s the reason why he has the rep that enabled him to found Y Combinator. If it weren’t for that, he’d be a nobody just like the rest of us. However, I have to give him credit for (a) being right about something and (b) standing up for it.
Paul Graham is a Silicon Valley Exceptionalist. He believes that the robber barons of the past were One Type Of People, and that people who made money in computer businesses after 1995 are A Different Type Of People: it used to be that rich people were clever winners of zero-sum games, but This Time It’s Different, and these new rich people (because they’re poorly dressed, therefore they must be sincere) are The Good Kind and you should all worship those Wealth Creators (“Job Creators” didn’t test well; too blatantly patronizing and Republican).
In other words, I don’t think that he was maliciously pushing for more economic inequality so much as he really believed that the Silicon Valley elite was a fundamentally different (and morally better) kind of economic elite. Of course, one could argue that this was self-serving given that he was boosting the elite he managed (through dumb luck) to become a part of, while trashing all the other elites (historical and extant) that he wasn’t a part of…
I give them credit for creating methods that crank out good startups like Ford’s processes cranked out Model T’s. Past that, they’re only a little better than prior elites, while still not actually being elites yet. They don’t understand what it means to be one. The elites write laws and get tax dollars as contracts. Microsoft, Oracle, and eBay founders are already there, with Google, Amazon, and PayPal going in that direction. Most just think wealth and VC = elite. Nope.
Putting the finishing touches on my Rust cloc (count lines of code) alternative (https://github.com/cgag/loc) before I try posting it around HN/Reddit tomorrow. Readme improvements, removing calls to unwrap(), things like that.
Someone else published their own rust implementation of the same thing before I finished mine (I can’t believe two people were working on that) 0, but I figured I should keep going, and mine turned out a good bit faster. I’m currently trying to get rust-everywhere 1 to build artifacts for platforms other than linux, but I’m running into errors getting it to actually push the artifacts to github.
edit: those numbered links were supposed to be footnotes but they got turned into inlined links :\
Coming from a Cherry MX board with Cherry Brown switches and now using a Ducky Shine 5 with Cherry Blue switches. It’s very pretty. :)
Still looking for a job. If anyone has any need of a Ruby or Python developer from now until October, especially in the Berlin area (remote also works), please let me know. Brief summary — full CV/resume
Besides that I’m making the final arrangements for a group I’ve organized to walk in Berlin’s LGBT Pride parade on Saturday.
If you’re ok with doing fairly non-exciting work with an ERP system in Python then write me at svenstaro@gmail.com.