Note: you may get the impression this is the proxy’s fault, but it isn’t. Host: registry.npmjs.org:443 is, although unusual, a valid HTTP request. (Host: registry.npmjs.org is the usual form.) It’s the NPM registry that is in violation of the HTTP standard here.
Huh, interesting, I definitely would have assumed that it would only include the “host”, but you’re totally right:
Host = "Host" ":" host [ ":" port ] ; Section 3.2.2
That is kinda weird/interesting. If browsers did report the port, I could see a mismatched Host header vs. the real port when behind a load balancer (typically handled by the load balancer adding X-Forwarded-For headers). But most browsers probably don’t do this, unless it’s a totally non-standard port?
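The quoted grammar (host with an optional `:port`) is simple enough to sketch as a small parser. This is a hypothetical helper for illustration, not code from any of the tools discussed; it also handles bracketed IPv6 literals, assuming that is the only place a colon may legitimately appear in the host part:

```python
import re

# Parse an HTTP/1.1 Host header value per the
# Host = host [ ":" port ] grammar quoted above.
# Handles bracketed IPv6 literals like "[::1]:8080".
HOST_RE = re.compile(r'^(?P<host>\[[^\]]*\]|[^:]*)(?::(?P<port>\d+))?$')

def parse_host(value):
    m = HOST_RE.match(value.strip())
    if not m:
        raise ValueError(f"malformed Host header: {value!r}")
    port = m.group("port")
    return m.group("host"), int(port) if port else None

print(parse_host("registry.npmjs.org"))      # ('registry.npmjs.org', None)
print(parse_host("registry.npmjs.org:443"))  # ('registry.npmjs.org', 443)
print(parse_host("[::1]:8080"))              # ('[::1]', 8080)
```

So `registry.npmjs.org:443` parses fine under the grammar; whether a server then compares that value against the port it is actually serving on is a separate question.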
You can also mismatch the TLS Server Name Indication and the Host header. For example, nginx treats TLS and HTTP separately, so it does not care if they mismatch.
I’m not a fan of full-JS websites, when most pages can be cached server-side and will be (about) the same on each reload.
I’m using some VPS, not (yet) a server at home.
I use Debian or Arch Linux on these.
Just use singular table names.
You can also consider rate limits. You probably should.
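A common way to implement rate limiting is a token bucket; here is a minimal sketch (hypothetical, not tied to any particular web framework), where each client would get its own bucket:

```python
import time

# Minimal token-bucket rate limiter: tokens refill at a fixed
# rate up to a burst capacity; each request consumes one token.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # refill proportionally to the time elapsed since the last call
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
```

In practice you would key buckets by IP or session and store them in something shared (e.g. Redis) rather than in-process.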
Also, forms must have CSRF tokens.
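The core of a CSRF token scheme is small; a sketch in Python (the function and session names are hypothetical, not from any specific framework): issue a random token into the session, embed it in each form, and compare on submit.

```python
import hmac
import secrets

def issue_csrf_token(session):
    # one unguessable token per session, embedded in each form
    session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def check_csrf_token(session, submitted):
    expected = session.get("csrf_token", "")
    # constant-time comparison avoids timing side channels
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
print(check_csrf_token(session, token))           # True
print(check_csrf_token(session, "forged-value"))  # False
```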
I like the checkbox hack for showing a dark theme (even when JS is disabled).
Thank you. I use JS only to save the selected mode for the whole session. Without JS it goes back to the default state on page reload.
Don’t forget to redirect back to the old locations if you ever migrate back to the old view.
And… you did forget that!
This article promotes proprietary software, it does not really say anything else.
Finish a system which makes RSS feeds from release versions, with Wikipedia as the source. This gives me a single place to keep track of things I need to keep track of for work, without getting too much noise. Works pretty well for things like Git, Kibana, or Ruby. Not so much for very small projects which don’t have a Wikipedia page :)
Mostly done, needs a few things still: https://verssion.one and/or my github.
After that the next side project to keep me distracted from real work…
I plan to do something like this (track software updates), but I would directly scrape software websites or VCS web interfaces (e.g. fetch git tags on cgit) instead of relying on Wikipedia, to get instant updates. I wonder how package maintainers (or Wikipedia contributors) are dealing with this, because few pieces of software have an RSS feed for updates.
By the way, a user-friendly way to privately scrape websites (elements of pages, text files, etc…) automatically for all kinds of updates would be awesome. I know someone who uses a piece of proprietary software for that, which is not so nice.
I’ve been mentally designing a ‘good’ way to do this for some time.
It’s very hard if you care about edge cases, but not so bad if you don’t. I’ll do a show-post here if I ever get it going nicely.
Going to the sites directly makes it really hard to filter out betas and other unstable releases. Even the GitHub “releases” page doesn’t help much here; it also happily lists the -pre1 releases. Regexp-ing version numbers is also less easy than it looks (192.168.1.1 looks like a perfectly fine version number), so you would need to twiddle for each and every software project, defining a source, what a version looks like, and how unstable versions work, and keep all of that up to date. On top of that there is Vim, which releases a new stable(!) version every 4 hours or so. :)
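The IP-address trap is easy to demonstrate: a naive “digits and dots” pattern happily matches 192.168.1.1, and even a stricter sketch still needs per-project tuning. A hypothetical illustration:

```python
import re

# Naive version matching: also matches IP addresses like
# 192.168.1.1, which is exactly the trap mentioned above.
naive = re.compile(r'\d+(\.\d+)+')
print(bool(naive.fullmatch("192.168.1.1")))  # True -- false positive

# A (still simplistic) refinement: optional leading "v",
# 2-4 numeric components, and reject pre-release suffixes
# like "-pre1" or "-rc2".
def looks_stable(tag):
    m = re.fullmatch(r'v?(\d+(?:\.\d+){1,3})(-[0-9A-Za-z.]+)?', tag)
    return bool(m) and m.group(2) is None

print(looks_stable("v2.41.0"))       # True
print(looks_stable("v2.41.0-pre1"))  # False
print(looks_stable("192.168.1.1"))   # True -- still a false positive
```

The last line is the point: no generic regexp distinguishes a version from an IP address; you need per-project context about what a source and a version look like.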
I had a look at all that, and at what is available on Wikipedia. I chose to use Wikipedia, since it’s Good Enough IMHO. And as a bonus it helps keep Wikipedia up to date!
You might have luck with scraping gem repositories for version numbers. Could be a bit more straightforward too.
You can also consider using a static website generator, such as Jekyll. The idea is to not expose any dynamic content, such as an administration panel, that could lead to a potential vulnerability.
You may not have the time to update a CMS in the future, and a static website won’t expose such vulnerabilities. Also, you are free to design your website according to your needs, unlike with a CMS.
I’m on my own personal instance, @Exagone313@share.elouworld.org. I’m using Twidere on Android, available on F-Droid.
Are you aware of a “relational” database that can store entities that may be “incomplete” (e.g. you get only part of an entity from one API call, then every part from a second API call) or that have optional fields (like you can do with documents), while still having relations between entities (foreign keys and 1..1, 1..n, n..n relations), and that can run on a distributed system? I was thinking of storing the optional fields somewhere else, like in a PostgreSQL array or on the filesystem, but obviously a lot of keys and string values would be repeated.
What would be very useful would be to detect entities in child nodes of a json/bson document (potentially recursively) and create new entries for them (hence the incomplete entities).
If I understand you correctly, you can do this in any SQL database. Just select only the fields you actually want on the first call. As far as allowing relationships to be optional, that’s what nullable foreign keys are.
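A minimal sketch of this in plain SQL (using SQLite here just because it is self-contained; the table and column names are hypothetical): nullable columns hold fields a first API call didn’t return, a nullable foreign key holds a relation that isn’t known yet, and a later call fills both in.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite needs this opt-in
db.execute("CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""CREATE TABLE post (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    body TEXT,                               -- optional, may arrive later
    author_id INTEGER REFERENCES author(id)  -- nullable foreign key
)""")

# First API call: only the title is known; the rest stays NULL.
db.execute("INSERT INTO post (id, title) VALUES (1, 'Hello')")

# Second API call: fill in the optional field and the relation.
db.execute("INSERT INTO author (id, name) VALUES (7, 'someone')")
db.execute("UPDATE post SET body = 'full text', author_id = 7 WHERE id = 1")

row = db.execute("SELECT title, author_id FROM post WHERE id = 1").fetchone()
print(row)  # ('Hello', 7)
```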
Running on a distributed system is harder, especially if your workload is online transaction processing. But that would be challenging regardless of the data store you choose. If you really need something that high-end, you can always pay for Spanner…
But the odds are that you don’t need that. When people frame their requirements as “it needs to be a distributed system”, I take this to mean that they don’t know what their requirements actually are. Do they need redundant copies to prevent data loss? Do they need high availability? Those are mutually exclusive (in practice, with existing solutions, other than Spanner; I’m not trying to argue about the CAP theorem), so… maybe they haven’t thought it through.
I think I’ll make an article later to describe exactly what I would like.
Sure, go for it. The response is probably going to be some form of: you can meet the basic needs you’re thinking of, but you might have to change the schema a little to make it happen. SQL definitely isn’t like JSON-based storage systems, where you can always fit the existing data in without rearranging it at all.
This is a known issue, see https://github.com/lobsters/lobsters-ansible/issues/2
The question is why it’s taking this long to just generate a new cert with the extra SAN…
No one is paid to work on Lobsters. If you know Ansible and Let’s Encrypt, you should be able to help out.
Well, I don’t really know how the current Let’s Encrypt cert was generated, but it’s literally just another argument. I did ask about it when it came up on IRC three weeks ago, but didn’t get a reply, and figured it would probably be fixed pretty quickly, so I completely forgot about it.
It was manually created with certbot but, as noted in the bug, should probably be replaced with use of acmeclient to have much fewer moving parts, if nothing else.
It’d be great to have someone who knows the topic well to help the issue along in any capacity, if you have the spare attention.
I’ve done entirely too much work with acmeclient to automate certs for http://conj.io and some other properties I run. Will try and find time this weekend to take a run at this.
That, or use dehydrated: in a text file, one certificate per line, with each domain separated by a space.
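For illustration, dehydrated’s domains.txt looks like this (the domains below are placeholders, not the actual Lobsters setup): the first name on a line becomes the certificate’s primary name, and the rest become SANs.

```text
# domains.txt -- one certificate per line
example.org www.example.org
example.net www.example.net mail.example.net
```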