Some more resources for those into this:
What a strange article. Author lists every common benefit of REST and just disagrees with each one while providing no new information. “No it isn’t… no it doesn’t… that’s not true… nope, not that either.”
If you want to design an RPC API and stick all your semantics into POST bodies talking to a single “/do” endpoint, fine. Personally, I much prefer to interact with a well-designed REST API like Github’s with its RFC-compliant URL templates, header-driven paging, cache semantics, and more.
Have you used Github’s GraphQL API yet? Personally, I’ve found using it to be a breath of fresh air in comparison to their REST API, which was easily one of the best REST APIs I’ve used and leagues ahead of most implementations. Give it a go, the RPC-like mutations are particularly wonderful.
I can’t publish actual results for obvious reasons, but it does find a few servers in a short time (~15-30 minutes maybe, I wasn’t paying attention to the terminal)
Cool, that’s all I meant really. Can you say how many IPs you had to hit before finding those few? Or the average IPs per second? Thanks.
Scanning the internet randomly in that way is not gonna lead to a lot of results, at least not in any reasonable time frame. If you instead look at sites that crawl the internet for a living, you get about 17,000 results. Not all are actually Redis nodes, and not all Redis nodes are completely open.
Attack vectors on Redis that can compromise the whole system have been known for quite some time, and Redis now ships with better defaults and protected mode enabled by default. But people tend not to update it. We still regularly have users coming into the IRC channel asking for help with cleaned-out/exploited Redis nodes.
I keep reminding people to not open up each and every service to the whole wide internet.
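For reference, the safer defaults mentioned above amount to something like this in a stock redis.conf (a sketch from memory; check the copy your distro actually ships):

```
# redis.conf: defaults shipped since Redis 3.2
bind 127.0.0.1        # listen on loopback only
protected-mode yes    # refuse remote connections unless auth or an explicit bind is configured
```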
Yeah indeed, that’s exactly why I asked for the results - I’m curious to see if they found a single one with this technique.
Is this truly what the world has come to?
Productivity gains by writing Dockerfiles, config and shell scripts, when any sane framework and/or standard library gives you the tools to mock things.
Like why would a local developer want redis?
Leave the hall.
I think you should use a virtual machine for this, though. For instance, in the project I’m working on, I’m on Linux and the other two developers are on Macs so my understanding is that Docker won’t help.
My understanding is that software packaged by Docker still uses the host operating system’s libraries/etc. under the hood. Is this correct? If so, then it doesn’t seem like a solution as we’d be still running on different operating systems.
Not at all. You can run a different district. Everything is duplicated and you only have the exact versions specified.
I’m sorry, I mentioned libraries but this is not what I really had in mind. (I also assume that autocorrect changed “distro” to “district” in your reply). I’m concerned about platform-specific issues like path lengths, characters allowed in file names, kernel APIs, low-level system stuff (e.g. OOM killer).
If the problem statement is: developers (and production) use different environments then the solution would be to make them use the same environment. I use VMs to achieve that and my impression is that virtualized environments are closer to “identical” than containers.
characters allowed in file names
Funny you should mention that. I ran into exactly this sort of problem last week, where Docker (which uses the host’s FS) couldn’t differentiate between files that differ only by case on macOS’s case-insensitive-by-default filesystem. As a result, the state of my system running on Docker was significantly different from that on an Ubuntu machine, even with everything else (libraries, etc.) being exactly the same.
That being said, I still find Docker very useful for quickly spinning up an instance of something on my Mac. But in the future, I’ll think more carefully about where the abstraction ends and the host starts mattering.
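One way to see where that abstraction boundary sits is to probe the filesystem directly. This is a small hypothetical check (not anything Docker provides) for whether the directory you’re bind-mounting lives on a case-insensitive filesystem:

```python
import os
import tempfile

def fs_case_insensitive(path="."):
    """Create a mixed-case temp file, then check whether the
    opposite-case name resolves to the same file."""
    with tempfile.NamedTemporaryFile(prefix="CaseProbe", dir=path) as f:
        dirname, base = os.path.split(f.name)
        return os.path.exists(os.path.join(dirname, base.swapcase()))

# Typically False on ext4 (Linux), True on default APFS/HFS+ (macOS)
print(fs_case_insensitive())
```

Running this both on the Mac host and inside the container makes the mismatch visible before it bites you.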
Simply put: if environments don’t match, mistakes get made.
I’ve seen it time and time again - to give some examples: local dev using a mocked out in memory cache, prod using memcached/redis; or local dev using SQLite and prod using Postgres. Mistakes get made due to overlooking the differences between implementations.
No mock is as perfect as the real thing and it’s much nicer finding issues in development than at deploy time.
local dev using SQLite and prod using Postgres
I was in a similar situation (but with MySQL instead of Postgres). We were getting failures in production that we couldn’t reproduce in development. It seemed as if the program was trying to insert data into a column of the wrong type. This is how I learnt that SQLite does dynamic typing, which is code for “we don’t care about column types”.
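That behaviour is easy to reproduce with nothing but the stdlib. SQLite uses “type affinity”: a value that can’t be coerced to the declared column type is simply stored as-is, with no error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, age INTEGER)")
# SQLite happily accepts a non-numeric string for an INTEGER column
conn.execute("INSERT INTO users VALUES (1, 'not a number')")
row = conn.execute("SELECT age FROM users").fetchone()
print(repr(row[0]))  # 'not a number': stored as TEXT, no error raised
```

The same insert against MySQL or Postgres fails (or at least warns) at write time, which is exactly the class of bug that only surfaced in production here.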
Yep, it’s the same for testing. There are plenty of companies out there that use in-memory SQLite databases with their test suite “for speed”, without realising they’re basically throwing type safety out of the window and making their tests effectively useless.
redis is not only a good replacement for memcached, it is also a great pub/sub server, a leaderboard, and has a geoip API. I need redis in development because I use redis to implement features that use the APIs redis provides.
On the other side, even if I am not a huge fan of docker, docker is useful in many cases. For example, I use Void Linux and OpenBSD as my desktop operating systems. For one of our clients we store data in riak. riak is not available on Void Linux, nor on OpenBSD. Thanks to docker I can easily run riak regardless.
Heroku is fantastic. In a previous startup, we used to be hosted on Heroku but we moved to Rackspace because of the costs. The app ended up being about 10% faster and the hosting costs a little lower; however, we rapidly found ourselves in a position where we basically needed a full-time devops person. That decision was premature and it slowed us down.
Right now, I work for another YC startup and we’ve been hosted on Heroku for over 2 years now. The ability to focus on development instead of devops is truly liberating. Our business isn’t maintaining updated linux boxes, or dealing with firewalls, but building features for our customers.
As they say, Heroku is only too expensive if you don’t value your own time. Amen to that.
we basically needed a full time devops person.
This is exactly it, and why they can charge so much - I very much doubt a Heroku bill for a typical small company would ever grow larger than the annual salary of a decent ops engineer - and if it does, you know full well it’s time to move away! It would however be nice to have a little more comparable competition in the PaaS space. Heroku is simply king at the moment.
Tag suggestion posts are rare. There’s been a bunch in the past day or two, but I can’t imagine the pace continuing.
On IRC, spinda rued the fact that there was no ‘party pooper’ downvote option for your comment. I however appreciate your answering this post straightforwardly.
Is that a lobste.rs-specific IRC channel that I’m unaware of, or a general comment that it happened on IRC?
EDIT: Well I never, completely missed chat in the bottom right.
[Comment removed by author]
As far as I’m concerned GenericForeignKey should have been deprecated long ago; I’ve dealt with so many projects that leant on them early in their lifetime only to realise it was a terrible idea. They then have to struggle to migrate large swathes of data without the help of actual constraints to keep them sane. Nightmare fuel.
Lovely little product that I would pay for; the trackreddit interface is abysmal.
Tech wise though - is the reason you don’t support searching the entirety of Reddit a scaling issue? I ask because the global comment firehose seems to be freely available, so I’m wondering if there is a reason you don’t use that over selecting specific subreddits. Thanks!
Thanks for the feedback!
I built F5Bot in only a couple hours, so I didn’t do a lot of research or planning. It appears that the ‘comment firehose’ you posted doesn’t even go back one second in time. Is that right? I’m not sure how I could realistically use that without hitting Reddit several times a second, and even then I would miss a lot when Reddit goes down (which is often, I now know). Also, when I posted to Hacker News, a couple commenters mentioned Reddit API limits.
So I just took the lazy/easy way by monitoring only the few subreddits that I actually cared about anyway. In this case, I can check back every couple of minutes, and if something goes wrong I can check back even later without missing anything.
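That check-back-later property comes from polling a listing and deduplicating on comment id, rather than tailing a stream. A hypothetical sketch (names are mine, not F5Bot’s actual code):

```python
# Poll a subreddit listing every couple of minutes; overlapping batches
# are fine because we only process ids we haven't seen before.
seen = set()

def process_new(comments):
    """Return only the comments whose ids we haven't handled yet."""
    fresh = [c for c in comments if c["id"] not in seen]
    seen.update(c["id"] for c in fresh)
    return fresh

batch1 = [{"id": "a"}, {"id": "b"}]
batch2 = [{"id": "b"}, {"id": "c"}]  # overlap with batch1 gets deduped
print([c["id"] for c in process_new(batch1)])  # ['a', 'b']
print([c["id"] for c in process_new(batch2)])  # ['c']
```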
Would this be substantially more useful to you if it pulled all of Reddit, instead of the current subset?
It’s nice of you to say I could charge money for this. I think I’ll leave the current feature-set up for free. Realistically, it only took me a couple hours to build, so I don’t think it makes sense to monetize that. In fact, I was thinking about going the opposite way and open-sourcing it. I might add premium features in the future, but I’m not sure yet.
Hey, we’ve documented some API rules here.
You should use a unique user-agent as mentioned in the previous link. Also using OAuth will increase the rate limit to 1 qps. If you’re using PRAW, it’ll automatically handle rate limiting for you. If you have any more questions, feel free to reply here, post on r/redditdev, or PM me.
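If you’re not using PRAW, staying under that 1 qps budget is only a few lines of client-side throttling. A hypothetical sketch (PRAW’s own handling is more involved):

```python
import time

class RateLimiter:
    """Sleep between calls so requests are never closer than min_interval apart."""
    def __init__(self, min_interval=1.0):  # ~1 qps for OAuth clients
        self.min_interval = min_interval
        self.last = 0.0

    def wait(self):
        elapsed = time.monotonic() - self.last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last = time.monotonic()

limiter = RateLimiter()
# call limiter.wait() before each API request
```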
Would this be substantially more useful to you if it pulled all of Reddit, instead of the current subset?
Yes. Social Media monitoring is quite a large market: Mention, Hootsuite, etc but they all come with complex UI, reporting and cost quite a bit. There are a few in the cheaper “just notify me” space but the ones I have used have been fairly awful UI wise.
As to hitting reddit every second, I had made the assumption you’d be doing that anyway - but yes, it would be worth reading around to find out what they deem acceptable.
Just checked out trackreddit. You weren’t kidding about the interface! I guess maybe they were thinking to make it more powerful, but man, they did not optimize for the common use case at all.
It actually works, but the email notifications send you to their mobile feed, which has an even worse UI than the main app. Also, when it came to cancelling, you can’t do it within the app, and they ignored my request for 2 months, leaving me with no choice but to raise a PayPal complaint.
Felt like a hobby project gone wrong.
I felt this needed a new topic as it is time sensitive and would get missed in the comments of the existing thread but mods, please do as you see fit.
Is this also your first time seeing something like this? They’re even doing a live Q&A.
Seems quite useful for teams collaborating on Github, although I’m not touching it with a 6-foot stick until someone knowledgeable in security says this is a good idea.
The implementation is fairly small; it effectively adds the Github user’s public keys to the box’s authorized_keys file on startup and removes the keys on program termination. It relies on the fact that all Github users’ public keys are publicly available at github.com/<username>.keys (e.g. mine).
To use it you’d have to trust the individual, Github, your connection with Github and also that their key is theirs and theirs only.
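The mechanism is small enough to sketch. This hypothetical helper (not the tool’s actual code) turns the plain-text body served at github.com/<username>.keys into authorized_keys lines:

```python
def authorized_keys_entries(keys_text, username):
    """Format the raw .keys response as authorized_keys lines,
    tagging each key with the GitHub username it came from."""
    return [
        f"{line.strip()} github:{username}"
        for line in keys_text.splitlines()
        if line.strip()
    ]

# In the real tool, these lines get appended to authorized_keys on
# startup and removed again on termination.
sample = "ssh-ed25519 AAAAC3NzExampleKey\n"
print(authorized_keys_entries(sample, "octocat"))
# ['ssh-ed25519 AAAAC3NzExampleKey github:octocat']
```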
That was my first thought, then I realised it’s no more dangerous than the rest of the unsigned binaries I install off the internet
From the comments:
Part of what we’re trying to do is get the benefits of AMP without publishers having to “give away all of the control” of their content by putting it under Google’s domain. AMP, if it only works on Google’s infrastructure, poses a challenge to an open Internet. If we can help ensure that it’s truly an open standard then it can help cleanup a lot of the mess that is the current state of the mobile web. We have a lot more to come on this front. Stay tuned.
I get the need, and I think it’s a noble effort but I still feel like AMP is slowly breaking the link-ability of the web, URL by URL. So we now have 2 silos instead of one, great, but we used to have many orders of magnitude more than that.
Oh boy. “Phoenix on Elixir”. As much as I love Elixir/Erlang/BEAM, it seems inevitable for the language and ecosystem to get stampeded by hyped-up Rails/web developers who think of both as the same thing. Oh well, more users means more chances of getting an Elixir job.
The article never defined what a “modern app” is and how Phoenix makes it easier compared to Rails (which has web sockets). That said, Phoenix has the BEAM behind it and is not a majestic monolith—with umbrella applications, it can just act as your web server and have little to no business logic, so it can make large web applications easier to manage.
This is valid criticism. My main focus was trying to find the parts in Rails that can be easily “translated” into Phoenix and explain those parts. I didn’t write much about BEAM and umbrella applications because there really isn’t an equivalent in Rails.
Sorry if I was too hard on you—the rest of the article is excellent. It just happens that those two things really bug me :-) I definitely think umbrella applications deserve a mention, it’s a major advantage to Rails (or many other languages).
No worries. I enjoy honest criticism, it helps me improve. This is my first blog post of this size, so I have plenty to learn still.
For people that used both Elixir and Django: how do the Umbrella Projects relate to what django calls apps?
Not really at all. I’m trying to think of any similarities but I’m currently failing; they’re closer to a package of packages. I’ll go through some reasoning:
A Django app is a framework construct; Elixir umbrellas are more core to the language itself. Every build of Elixir ships with Mix, a build tool capable of creating new Elixir projects. An umbrella project is simply a root-level Elixir project that contains one or more Elixir projects “under its wings”. Each Elixir project keeps its own dependencies, version, docs, start-up list, build, test suite, etc. Basically it lets you have a decoupled, per-app isolated layout while keeping the advantages that a single repo can give. Each app within the umbrella is individually deployable.
In Django, it’s quite awkward to create a 3rd-party package that runs alongside your actual project while you’re developing it. Sure it’s possible, but with Python path and module-importing issues it tends not to happen, and you end up syncing with some git remote. This problem is solved with umbrella projects: you simply create a new Elixir project within your umbrella and list the new project as an “in umbrella” dependency of your existing project. This means that when you’re finished with development and ready to open source on Hex, all you have to do is publish it and change the dependency of your original application to point to Hex instead. And if you never finish it? It can just stay there running as its own self-contained app. It removes the friction completely.
A Django app tends to be tightly related to a set of models and specific Django patterns. An umbrella project is not really constrained by anything other than the standard layout of an Elixir project, which is defined by the Mix tool in every project anyway. There is no linking it with a set of data, and it doesn’t take on any special extras when used with, say, Phoenix: it’s just a Mix project at the end of the day, and any Elixir project can be one.
To end with an example: a common pattern I take when using Phoenix is to start with an Elixir Umbrella app, and then immediately create 2 sub apps: one that contains my business logic and data stores, and the other that deals with interfacing with the web with Phoenix as a dependency. Having the separate apps helps me enforce the separation by providing a clear isolation line, and also means that I’m not glued to Phoenix. Also, while they do have separate things, if I run commands from the root Umbrella, they tend to get run on both - the integration is very nice there.
Edit: I deliberately stayed away from OTP because I think the acronym can be off-putting to newcomers, but for simplicity’s sake: 1 Elixir project == 1 OTP app.
An “umbrella project” is just multiple OTP applications in one project. They are full OTP applications, so more like a generic python package.
I haven’t used Rails Engines in a while so I completely forgot about them. I know they can be used for similar purposes but if I recall correctly they almost become a part of the Rails monolith instead of being treated as a separate application as in Elixir. I could be wrong though.
I just noticed that the title kind of suggests that the name of the framework is “Phoenix on Elixir”, which was not the intention. Sorry about that.
I just started a project using webpack 1.X, so I had a look around to find out if they are dropping development for the 1.X versions and just focusing on 2.X but I couldn’t see anything about this.
Anyone know what the roadmap/plan is? Only thing I could find was this but it doesn’t give much information regarding previous versions.
I don’t know the answer to your actual question, but I did try migrating a v1 project to Webpack 2 the other week and the ecosystem surrounding it just wasn’t there yet: lots of the loaders and plugins I was using had odd quirks under Webpack 2 or simply didn’t work at all with the new loader configuration format. It’ll take time for the ecosystem surrounding Webpack to play catch-up; hopefully the official release will step that up a gear.
The attacks don’t target all MongoDB databases, but only those left accessible via the Internet and without a password on the administrator account.
So this isn’t a Mongo-specific problem, this is a deployment issue.
It is very hard to believe that after this highly-mediatized rash of ransom attacks any database administrator won’t double-check to see if his MongoDB server is available online and if the admin account doesn’t use a strong password.
Ditto to this.
In a couple of weeks, it is reasonable to expect that all MongoDB servers exposed to the Internet will lose their data and have their content replaced with a ransom demand.
These “hackers” aren’t doing anything particularly intelligent, only targeting unsecured Mongo instances, so I don’t see where this statement is coming from. If everyone who publicly exposes Mongo to the Internet set the admin password, it sounds like the problem would be solved.
This is focusing on Mongo mostly because “no password, bind all” is the default setting for any mongo deployment as it assumes you have a working firewall.
I have a web service (forum) that uses a “no password” mongo, but I make it bind only to a loopback address, which I can probe from the outside using SSH tunnels if needed (and if anything happens, I still have daily backups). Even if I hadn’t changed the settings to make it bind only to loopback, my firewall would have stopped the server from being publicly accessible.
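For anyone wanting the same setup, this is roughly what it looks like in MongoDB’s YAML config format (a sketch; exact keys depend on your MongoDB version):

```yaml
# mongod.conf: listen on loopback only and require authentication
net:
  bindIp: 127.0.0.1
security:
  authorization: enabled
```

Remote admin access then goes over an SSH tunnel (e.g. ssh -L 27017:127.0.0.1:27017 user@host) instead of an open port.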
This is a deployment issue.
I thought this, so I read their Python Getting Started Guide - not a single mention of authentication.
You either have great documentation with big warning signs or safe secure defaults. Mongo currently has neither.
Personally, I would always advocate safe secure defaults, not everything can be solved with education.
I agree with you, I was mostly commenting on the sensationalism of the prose. This sort of “hack” is not all that advanced, it comes from misconfiguration. The default configuration should of course be more secure by default, there have been a number of articles written to this effect, cf. this one from Shodan ~1 year ago warning about this very issue with Mongo, and how easy it is to exfil/destroy publicly accessible instances.
Absolutely, it’s the equivalent of a port scan. Some “hack”.
It is very hard to believe that after this highly-mediatized rash of ransom attacks any database administrator won’t double-check
I do take issue with this though, as it makes some assumptions which in my experience have never been true: a) that any given deployment will have a database administrator, and b) that said database administrator will be competent.
To expect developers that have picked a database based on how easy it is to dump JSON in to have any clue about secure database deployment is asking way too much. And the only way to solve that is, as you say, sane defaults.
To expect developers that have picked a database based on how easy it is to dump JSON in to have any clue about secure database deployment is asking way too much.
maybe i’m missing sarcasm here. imho, one of the first things one has to do when using new software which is reachable from the network is to check how access can be restricted. regardless if developer or admin. if you use a new power tool which has the capability to maim yourself, you are also expected to take the common precautions.
No sarcasm, it was aimed at new developers though. Expecting new developers to know the world == trouble. To be clear: the “imho” line is your (good) view, which you’ve probably garnered from years of experience and mistakes: a new developer would not have that world view yet.
It causes very little extra pain to have some form of authentication by default. Then the user of said software has to learn about authentication from the get go, and expects that they have to handle it post deployment. It’s about creating the right intentions.
Many of my younger colleagues simply don’t know how packets get from point A to point B, as well. So the idea that it could be insecure is surprising or something they just don’t consider.
i’d like sane defaults for authentication too. it just feels wrong that the expectations for the knowledge of developers are that low :/
I wouldn’t say it’s a deployment problem per se. I believe it’s more of a consequence of the industry valuing products that are “easy” above all else. Defense In Depth is a pretty standard security perspective and popular solutions such as Cassandra, Riak, MongoDB, and redis all prioritize making the default configuration very simple at the cost of security. But that’s what the people want.
I’m not saying it’s ok to open your database up to the world but just that this is expected if you look at the incentives users are giving authors of databases these days.
So this isn’t a Mongo-specific problem, this is a deployment issue.
It is Mongo specific in that the default settings of MongoDB are brain dead and stupid with respect to security.
It is very hard to believe that after this highly-mediatized rash of ransom attacks any database administrator won’t double-check to see if his MongoDB server is available online and if the admin account doesn’t use a strong password.
Ditto to this.
Again, this is somewhat Mongo specific because (a big IME here) MongoDB administrators are not usually at the same level as traditional DBAs. That’s why we’re seeing thousands of MongoDB instances compromised, and no mention of PostgreSQL, Oracle, MySQL, DB2, etc. Sure, this attack is possible with those databases, but they have more sane defaults, and their admins (again, IME) have a better idea of what they’re doing, so it’s not so much an issue there. Yes, now and then you’ll see some idiot leave his Oracle DB exposed, but you don’t see thousands and thousands of Oracle DBs exposed all at once for the same reason.
These “hackers” aren’t doing anything particularly intelligent, only targeting unsecured Mongo instances, so I don’t see where this statement is coming from. If everyone who publicly exposes Mongo to the Internet set the admin password, it sounds like the problem would be solved.
From the context it’s clear that “exposed to the Internet” in that statement is referring specifically to the MongoDB instances using the default setting of no password and no firewall.
I agree with you that it’s really low hanging fruit as far as “hacking” goes. The MongoDB community should be really embarrassed about this.
It seems it’s easy to hyper-inflate the impact or skill of the particular culprit behind acts like this. E.g., the Podesta phishing scandal and related events had the momentum of a US Presidential election behind their news cycle, and even so the coverage was highly overexcited in its attempt to paint the hacker as a Mr. Robot Dark Army type.
The problem isn’t “oh shit hackers are dangerous” it’s “people should learn fundamental cybersecurity concepts before deploying anything with even remotely identifiable or important information.”
Count me as a cynic, but if you don’t put a password on your internet-connected database administration account… then you can eat a plate of crow and stfu. Wake up tomorrow and start using better security practices. This is natural selection. We must expect and prepare for the worst, not just hope for the best.
My favorite part about Elixir is that it doesn’t try to reinvent the wheel. It’s not an untested, trendy new paradigm or tool. It’s based on decades of Erlang/BEAM/OTP experience and Ruby/Rails/last 10~ years of programming ergonomics. The only things that are added are those that actually add to the experience.
An example: a large part of the Erlang standard library is not re-wrapped in Elixir. Instead, you call the Erlang functions directly through seamless interop. José & co. understand that there’s no need to reinvent the wheel and no benefit in another abstraction. I like that.
My least favorite part about Elixir is that it has become a “trendy” language among the web crowd. While the community is overall great, there’s always a loud minority of “beginner experts” both claiming it’s the best thing ever and deriding those for using other tools. I’ve seen a lot of random, unwarranted Rails bashing and Elixir shilling (never from a team member or community leader though!).
As a long time Erlang user, one of the things that’s subtly nice about Elixir is that they fix some of Erlang’s warts. It’s not just the syntax. The string handling is better, for instance.
My least favorite part about Elixir is that it has become a “trendy” language among the web crowd.
Yup agreed, Rails bashing will not get the community anywhere. But becoming popular is not solely a negative thing - for a start, more jobs working with the language have started appearing at long last. It cannot be a successful language without a market for it and it’s very well placed now to take a large percentage of that web market.
That’s a fair point. I’m not opposed to Elixir “marketing”, I just want it to be respectful of other developers and based on facts, not hype.
Absolutely, any community that goes a step further and actively shuns that behaviour is one I want to be a part of.
The conclusion here is actually the crux of Kathy Sierra’s brilliant Badass - Making Users Awesome book and as Dave mentioned her as a precursor to his post I expect that they actually hold the same views, said slightly differently.
i.e. the focus should go on making the user a better person/programmer.
As an aside, it’s so nice to be reading properly chained blog posts rather than trying to read conversational tweetstorms.
As an aside, it’s so nice to be reading properly chained blog posts rather than trying to read conversational tweetstorms.
I’m kinda sick of the 4 slots the same conversation about Rust is taking up on the front page, especially when none of the articles is really that detailed or informative (barring the original post). Of course, everybody who’s ever heard of Rust is upvoting them (because of course they are, because marketing), but the lack of useful information is kinda a bummer.
I feel long-form conversation using blog posts is more beneficial for the community in both the short and long term than tweetstorms. That is all I meant; the Lobste.rs interface did not come into my reasoning. Filling up 4 slots is obviously not ideal, though. I wonder if that could be fixed in the interface?
Does it really need to be fixed?
If there were a pattern of Rust occupying the top 4 slots, then OK, I could understand the complaints. It happened once. Whoop. Dee. Do.
You have a gas line to your clothes dryer?!?
That was great, it really turned out quite steampunk at the end.
It’s a pity we live in such a mass-produced, cheap plastic world where price is put above even basic functionality. It’s as if they said: this is the bare minimum, which will be fine for 30% of cats; if we add metal, extra weights, or guards, we’ll break the budget.
What even is this question about? :)
Yeah a lot of dryers use a gas flame to provide the high heat needed for drying. That’s why it’s important to clean the lint trap, otherwise the pile of lint can catch fire. It’s freaky, but it is what it is.
Original commenter is probably not in America - I’m in the UK and I’ve never seen that let alone heard of it.
All ours are electric.
I think it was this line:
She might retaliate by chewing through the gas line on the dryer or something.
I was wondering too!
Is it possible to read this without a LinkedIn account?
(No, I have no LinkedIn account to show you.)
gist of the post by Ezekiel Buchheit.
For others having the redirect issue.
2bluesc on HN made a pricing/“spec” provider comparison gist.
… and someone else expanded it to include DigitalOcean, Vultr, Linode, OVH, and Scaleway:
https://gist.github.com/justjanne/205cc548148829078d4bf2fd394f50ae
I have been using this forever in SASS but I have heard criticisms about it. Are there any reasons this feature would be bad?
The largest user-facing downside of nesting is over-nesting, which falls under “feature misuse”; it is best avoided by sticking to a BEM-like selector pattern and restricting yourself to sensible levels of nesting (1 or 2 levels in most cases).
This lets you avoid writing overly-specific CSS, which in turn means you avoid unnecessary CSS bloat. The bloat happens for a few reasons: overly-specific CSS is hard to reuse, which leads to repetition; and, to a lesser extent, selectors can get extremely long, causing a lot of unnecessary work for the browser (extra time to download, extra parse time, and more complicated application).
In terms of browsers’ implementation of this, the most worrying issue seems to be the possibility of combinatorial explosion, which is already being discussed.
The reason this is being treated as an issue is largely down to the number of selectors generated at post-processing time.
With native nesting, I’d be surprised if BEM stays as popular as it is today.
I may be biased, though. I think that BEM is bad in that it seems like someone wanted to make things flat because they didn’t understand how to make specificity work for them instead of against them.
Well, the nesting present in SASS is more ambiguous than this proposal, so I believe there are people who dislike that ambiguity; with native nesting we’ll have a chance for it to be designed in a much better way.
Common criticisms of selector nesting are:
Having a big change like this to CSS is going to break/change a lot of tooling that wasn’t designed against CSS’s actual grammar, but was instead built with an idea of what CSS was based on what it used to look like.
I’m not a personal fan of preprocessors or nesting, but having it come to CSS natively is a big change, and will totally change the way most people will write CSS. Exciting times!