I don’t see why this progress bar should be obnoxiously put at the top of the page. It’s cool if you wanna do a donation drive, but don’t push it in the face of everybody who comes here. Honestly, at first I thought this was a bar for site expenses. Then I realised it’s to ‘adopt’ an emoji.
Lobsters isn’t a daily visit for most readers, probably even for most users. They can’t see it to join in if there isn’t anything visible for it, and it has an id for adblocking if you prefer not to see it.
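If you go the adblock route, a cosmetic filter along these lines works in uBlock Origin. Note the element id here is a guess for illustration — use the element picker to grab the real one:

```
! Hide the donation progress bar on lobste.rs
! (the id "donation-progress" is hypothetical; confirm with uBlock's element picker)
lobste.rs###donation-progress
```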
Personally I check this site quite regularly on my mobile device… which doesn’t have an ad-blocker.
That sounds awful. If you’re an android user, normal uBlock Origin works on Firefox for Android just like it does on desktop. :)
Or use Block This!, which blocks ads in all apps.
Oh, that’s a cool little tool. Using a local VPN to intercept DNS is a neat trick. Unfortunately it doesn’t help in this case, because it blocks requests to domains rather than hiding elements on a page via CSS selectors.
That does make me want to actually figure out VPN-to-home for my phone and set up a Pi-hole, though.
Ohh! Good to know, thanks.
Firefox 57+ has an integrated ad blocker nowadays, on both desktop and mobile; plus, there’s also Brave.
It’s still annoying that I need to set up my ad blocker to fix lobste.rs. So much for all the rant articles about bad UX/UI on here.
Maybe one could just add a dismiss button or something like that? I don’t find it that annoying, but I guess it would be a pretty simple solution.
I concur, either a client side cookie or session variable.
Well, yeah… that’s how you could implement it, and I guess that would be the cleanest and simplest way?
It’d be great to see data about that! Personally I visit daily or at least 3 times a week. Lack of clutter and noise is one of the biggest advantages of Lobsters. And specifically, I looked at the link, and I have no idea who this Unicode organization is, or their charitable performance, or even if they need the money. I’d imagine they are mostly funded by the rich tech megacorps?
 ;-)
Adopting an emoji isn’t the end goal: the money goes to Unicode, which is a non-profit organization that’s very important to the Internet.
If this bar actually significantly annoys you, I’m surprised you haven’t literally died from browsing the rest of the internet.
Glad you’re enjoying the language :)
Be sure to pop into #nim on Freenode (or Gitter or Discord) if you’ve got any questions.
It sounds like they’re on EC2, but haven’t migrated their thinking away from what you might do with physical servers. A different thing they could have done is build a new AMI, launch a new VM based on it, unmount & detach the EBS data volume, reattach the data volume on the new VM, and move the EIP. Basically the “pets vs cattle” idea.
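A sketch of that pets-vs-cattle swap with the AWS CLI — all IDs, the instance type, and the device name below are placeholders, and this assumes the data volume and Elastic IP already exist:

```shell
# Launch a replacement instance from the freshly built AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.medium

# Move the EBS data volume: detach from the old instance, attach to the new one
# (unmount it inside the old VM first)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0newinstance0000000 --device /dev/sdf

# Re-point the Elastic IP at the new instance
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 \
    --instance-id i-0newinstance0000000
```

The old instance then just gets terminated rather than patched in place.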
I was thinking that myself. They’re on disposable cloud systems, why are they doing anything except throwing them away?
I got the impression they are on multiple cloud providers, not all of which support moving IP addresses.
I agree that separating the egress address from the app server would simplify things though.
FYI, on Linux you can use setcap to grant specific binaries permission to bind to low-numbered ports. Gone are the days of needing to run daemons as root. I once saw a team use iptables port forwarding to get around this, and it used a huge amount of CPU.
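For example (the binary path here is illustrative):

```shell
# Allow this specific binary to bind ports below 1024 without running as root
sudo setcap 'cap_net_bind_service=+ep' /usr/local/bin/mydaemon

# Verify the capability was applied
getcap /usr/local/bin/mydaemon
```

Note the capability is attached to the file itself, so it has to be re-applied whenever the binary is replaced (e.g. on upgrade).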
This solution sucks though when the binary you need to permit is java or python or something anyone could write code with. The FreeBSD MAC solution is better because you’re permitting a user.
This is disappointing.
With an automated, zero-cost CA, there are very few legitimate cases for wildcard certificates, and the risks increase with their use.
I don’t understand why LE couldn’t simply allow for higher thresholds on certificate issuance, and instead support certificates that are actually a worthwhile goal: free S/MIME that doesn’t involve suckling at the Comodo teat.
The biggest use case for wildcard certs is SaaS. If I have 10,000 SaaS customers with hosted domains like customer.example.com, LE wouldn’t want to issue (and renew!) that many certs. It also may exceed their rate limiter.
Yes, this is exactly why I can’t use LE for my business right now.
LE creates SAN certificates, which let you group together multiple domains under one certificate. So you can use LE for a SaaS product like this if you’re clever about automatically grouping domains together. See: https://letsencrypt.org/docs/rate-limits/
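With certbot, grouping a batch of hosted domains into one SAN certificate looks something like this (domain names are examples; your automation would generate the -d flags per batch):

```shell
# One SAN certificate covering several customer subdomains at once
certbot certonly --nginx \
    -d customer1.example.com \
    -d customer2.example.com \
    -d customer3.example.com
```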
I know that LE can support up to 100 domains in the same certificate with SAN certificates. But I feel like the complexity implied by grouping domains together is not worth the few hundred bucks of a wildcard certificate.
I’ve not known many companies that want to publish their full customer list so publicly :)
What are the risks for wildcard certificates?
I do like the option when it’s there. For example when SNI is not available and you are running low on IPs.
The main concern is phishing.
If you look at your URL bar and see a green lock next to https://www.paypal.com.mysite.biz/login.php, you’re a lot more likely to log in.
[Comment removed by author]
I agree. If you can prove you own the domain, shouldn’t you be able to call your domain whatever you want and get a certificate for it?
So the real risk, it seems to me, is in the way you show that proof. If the CA asks for this proof in a way that’s not secure, that to me would be a problem.
You may be interested to know that browsers limit wildcard certs to one level deep, for this reason.
What does this risk have to do with phishing?
In any event, the CAs aren’t the right place to solve phishing, services like SafeBrowsing are.
I like supporting wildcards but I do wish they’d dramatically increase the rate limits and decrease the suspension time. Getting banned for a week after a fuckup or bug is nuts.
To see what a remarkably diverse speaker line-up looks like for a very technical conference, check out Syntaxcon in Charleston, SC. And 100% of these talks were top-notch.
This is a cool writeup, but honestly this is not “Docker on OpenBSD” – it’s Docker running on a Linux VM on OpenBSD, the same as a Linux VM running on any other hypervisor.
This is also how Docker runs on Mac OS. So I’d consider it a valid solution to interoperating with other platforms that run Docker. Sure it’s nothing magical, but it’s nice to know that virtualization on OpenBSD has reached a point where, if Docker is a requirement for your job (or whatever), it no longer means you can’t run OpenBSD as your host OS.
They use xhyve to do the lifting, which is based on FreeBSD’s bhyve!
That’s why I tagged it Linux, OpenBSD and virtualization. Editorializing titles is against lobste.rs rules.
Still, I find the story interesting, as it shows what level vmm virtualization is at right now. Unlike Linux VM hypervisors, vmm is a very young codebase.
I was kind of interested in running an IPFS Wikipedia mirror. I got through the install and 10GB download. But then the instructions said “now tell people the URL of your server”. But I don’t know anyone who wants this – that’s the point, to “help society”. I kind of assumed this would be like running a Tor node where some kind of discovery protocol would send traffic to my mirror. Did I miss something?
You only need to tell people the URL of your server if you want to provide a public gateway. By mirroring the content and pinning it your node will distribute the content automatically to people who request it within the IPFS network, or via other public proxies.
Why make one for $3500 instead of hundreds for $100 or thousands for $20?
What happens when you make thousands of CDs and don’t sell them all?
I’m guessing the organizational costs and stress of handling a thousand sales weren’t worth it compared to one big sale; the bidding also gives it value and a story.
increase the size of the volume and expand the file system to match, with no downtime
Are there Linux filesystems that can be safely expanded without unmounting the volume?
not a linux expert at all – but my understanding is ext3 (and thus ext4 as well) can be resized without unmounting first.
xfs_growfs supports online growth of XFS filesystems (although it doesn’t support shrinkage).
Like pyvpx said, ext3+4 supports online resize.
It also looks like Btrfs supports online resize.
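Assuming the underlying block device has already been grown, the online-resize commands mentioned above look like this (device paths and mount points are examples):

```shell
# ext3/ext4: grow to fill the device while mounted
sudo resize2fs /dev/xvdf1

# XFS: takes the mount point, not the device; grow-only
sudo xfs_growfs /mnt/data

# Btrfs: also operates on the mount point
sudo btrfs filesystem resize max /mnt/data
```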
Is there any Linux filesystem that can’t be safely expanded without unmounting the volume? :)
Indeed, there are a number - ext3, ext4 and XFS, along with btrfs and ZFS too.
Tldr: building EC2 machine images (AMIs) with the needed Docker images pre-pulled reduces instance launch time.
Except then I have another step in my deploy process. A Docker image shouldn’t be so large that this would matter anyway, especially if pulled from a source close to the VM (e.g. S3).
Using tools to create Docker images and tools to create AMIs seems like overhead. Maybe they swap in new Docker containers once the AMI is running, but it still feels like I would just deploy onto the AMI directly and spare myself some Docker-related pain.
Sounds like 10% of the functionality of ZeroMQ?
I’ve been a Linode user since 2005 and can’t recommend them highly enough. Excellent support, great performance and good prices (not to mention continuous free hardware and virtualisation upgrades over the years). There have been a few security hiccups over the past few years (mostly because of their previous web platform) as well as DDOSes, so bear that in mind. Neither have affected me though.
I also use AWS and Azure but they’re not really in the realm of “VPS provider”. FWIW, two providers I’ve not used but who I’ve heard good things about are Vultr and Bitfolk (the latter have a UK-only presence though).
There have been a few security hiccups over the past few years (mostly because of their previous web platform)
I wouldn’t say that. The causes of some of their security hiccups have included an admin panel publicly exposed on the internet, user data stored on an internet-accessible machine that wasn’t monitored by their security team, not changing admin credentials that were compromised in a previous hack, and using ColdFusion (I kid, but apparently their ColdFusion stack had major obvious misconfigurations).
Stuff happens, but also their response in some cases has been pretty poor. They’ve avoided/buried disclosure, kept a potential compromise under wraps for months before disclosing, glossed over that they couldn’t figure out how one of the compromises happened and usually downplay as much as they can when they do disclose.
Good points - I’d forgotten just how badly they’d handled some of the security issues. It was the previous ColdFusion-based frontend I was thinking of (as that was the source of a number of problems in the days before the more serious security problems).
Same here. For straightforward VPS duty, I can’t think of a reason to go elsewhere. Currently I just have a small FreeBSD instance serving static webpages but I ran my YC startup on Linode as well and never had a reason to complain.
Agree, I’ve loved Linode for years for my small “tinker box”. They are one of the few providers to include native IPv6, and they will even route you a /64, useful if you want to run your own v6-in-v4 tunnel (a la tunnelbroker.net). Fast and knowledgeable support. And they just did a major WAN network upgrade which cut my VM’s ping latency in half. You can usually find a $10 credit coupon from conferences or certain web sites.
“We demonstrated this with a real war-driving experiment in which we drove around our university campus and took full control of all the Hue smart lights installed in buildings along the car’s path.” – Probably a good idea not to confess to what might be a felony in the middle of a research paper.
Maybe better to link to the original article rather than Slashdot: http://news.softpedia.com/news/cryptocurrency-mining-malware-discovered-targeting-seagate-nas-hard-drives-508119.shtml
Article says this attack is mitigated by “relying on memory with error-correcting codes”. Are there cloud hosts, or even “private cloud” deployments that are not using ECC memory?
Link gives an error.
It’s a marc.info problem. They have two servers which are out of sync with each other so it’s luck as to whether you get the one with recent messages available or not.
Here’s a pastebin copy of the text.
FYI, you can normally view broken marc.info links using archive.is or google’s cache. Lobsters' ‘cached’ button is very useful for this
This kind of issue is why we have RE2. It lacks a few features that are sometimes handy in complex scenarios, but is designed to be not subject to pathological cases causing exponential CPU time usage. https://github.com/google/re2
Headline is misleadingly truncated. I was concerned Linode DNS is shutting down, but really they’re moving the public servers to Cloudflare and a small number of people who have custom glue records need to update them.
As an aside, I’ve moved DNS off an IP address before. The set-up is simple enough: stand up a DNS server at the new IP address, update DNS, wait two weeks, shut down old IP address. The wait two weeks is a conservative period for propagating that change across the internet.
What I found surprised me: The traffic drop-off on my old DNS server decayed almost perfectly over that period until it was receiving nothing but scan queries (e.g., shadowserver.net) at the end of two weeks. I seriously expected much worse behavior out of the internet.
Instead I got exactly what you’d predict. Nearly the opposite of what has happened to me moving MX records. Legitimate mail from a tiny minority of providers can still arrive at your old IP address for a stupid long time.
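A way to sanity-check that kind of migration (example domain; assumes dig is installed):

```shell
# Before the move: check the delegation's TTL so you know the worst-case cache time
dig +noall +answer NS example.com

# After updating the delegation: confirm a public resolver sees the new server
dig +short NS example.com @8.8.8.8
```

Dropping the TTL well in advance of the cutover shortens the window in which the old IP still gets legitimate traffic.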
Interesting experience. I assume this is because the following of NS delegations is performed by a recursive DNS server, which tends to have a sane implementation, whereas the MX -> A -> IP resolution is performed by, and cached within, the MTA software, which in many cases has questionable/naive implementations or configs.