At home: Filco Majestouch 2 with MX Blues
At work: Filco Majestouch Ninja with MX Browns (and O-Ring dampers), because I prefer my colleagues not to hate me.
I haven’t bought anything shiny in a little while though, so I’m eyeing up something a bit different like a Pok3r or an Ergodox-alike.
I have a Filco Majestouch 2 (also with MX Blues) and after a few years of daily use it developed extreme key bounce. It’s unusable now due to chatter that affects nearly all the common keys. I’ve given up on it, but I might disassemble it and see if a deep cleaning with distilled water helps.
Mine is coming up on 4 years old I think, and I’ve had no issues at all. That said, it sounds like a fault and I’d try contacting the reseller or manufacturer to see if they have any suggestions. Mechanical keyboards should last pretty much until the switch mechanisms give out, if not longer.
This was ordered (had to dig out the email archive) in mid-2010 and was used daily until late-2013 or 2014, based on other purchase dates. Meh. I’ll contact the original seller, elitekeyboards.com, but frankly I don’t have expectations because it’s nearly eight years old and they don’t carry that line any more.
I too have two Filco Majestouch keyboards - one at work and one at home, both Ninjas, with different switches! At home, the keyboard’s plugged into a Mac and has a problem with dropping keystrokes / being slow as I type in a browser (and sometimes elsewhere). I can’t find a fix so I’m likely to plug an Apple keyboard in instead :(
The only remappings I do are:
The post mentions ‘programming’ - I’m assuming this is referring to remapping. I did look for keyboards where I could record macros, but those that exist are very expensive.
Windows Modern Standby (S0i3) is the one and only reason I’m glad I was impatient and bought an X1C5 instead of waiting for the 6. (Well, that and I saved about $300.)
A sleep mode that doesn’t actually sleep, and requires proprietary ACPI extensions, where do I sign up?
Just another example of Windows (and I’m quite sure OS X is guilty of this too) thinking it knows better than me what I want my machine to do.
Your description is more or less on point.
That said, what really gets me is how Lenovo completely ripped out default S3 support. The hardware is completely capable of supporting both, but they took it out as a default.
Luckily the Arch community created a great patch for fixing this issue across all Linux machines (and BSD, IIRC). Once I got S3, my standby time went from maybe 8 hours to somewhere greater than 2.5 days, going by the drain rate between uses.
Looks like it got deleted, I archived it here: https://hastebin.com/boniyocefe.apl
Uh oh, I thought that dpaste was supposed to keep it for a year… sorry!
The “cached” link above has a copy https://archive.is/http%3A%2F%2Fdpaste.com%2F3Z4K4B0
I’m always looking for a cross-compiling system for building macOS executables from Linux, either as a single static executable, or as a self-contained relocatable bundle of (interpreter + libraries + user code entrypoint), because getting legal Mac build workers is such a pain.
The best toolkit I’ve found, by far, is Go, where you just `GOOS=darwin go build ...`. There are a variety of more-or-less hacky solutions in the JavaScript ecosystem, and a few projects for Python, but for Ruby this area is sorely lacking.
I mention this because while XAR looks like an awesome way to distribute software bundles, I still need to figure out a way to do nice cross-compiles if I’m going to use it to realistically target both macOS and Linux.
Tell me about it. I’ve tried cross compiling Rust from Linux to OSX and it was just a saga of hurt from start to finish.
For Go, did you need to jump through the hoops of downloading an out-of-date Xcode image, extracting the appropriate files and compiling a cross-linker? Or is that mysteriously handled for you by the Go distribution itself?
You literally just run `GOOS=<your target os> GOARCH=<your target architecture> go build`. No setup needed. Here are the vars go build inspects.
It’s frustrating trying to do similar in compiled languages, and then interpreted languages with native modules are even worse.
Go basically DIYs the whole toolchain and directly produces binaries. That has pros and cons, but means it can cross-compile without needing any third-party stuff like the Xcode images. For example it does its own linking, so it doesn’t need the Xcode / LLVM linker to be installed for cross-compilation to Mac.
No reason you can’t put a whole virtualenv, python interpreter and all, into your XAR. XAR can pack anything.
You still need a tool to prepare that virtualenv so that you can pack it, and that’s the sort of tool I struggle to find - cross-compiling a venv, or equivalent in other languages.
Yes, exactly. I am less interested in different formats and more in a tool to create them. The ease of doing that with Go is the target.
> The ease of doing that with Go is the target.
By this you mean, you’re looking for a solution for Python packaging that makes it as easy as Go to distribute universally?
I used this once before to take some code I wrote for Linux (simple cli with some libraries - click, fabric, etc.) and release it for Windows: http://www.py2exe.org/index.cgi/Tutorial
The Windows users on my team used the .exe file and it actually worked. It was a while back but I remember that it was straightforward.
A question for anyone who might have context – from this piece it seems like they have a cluster per restaurant, which doesn’t make much sense in terms of complexity versus payoff to my mind. The thing that would make more sense and be very interesting is if they’re having these nodes join a global or regional k8s cluster. Am I misreading this?
They seem to be using NUCs as their Kubernetes nodes, so the hardware cost isn’t going to be too great.
I imagine it’s down to a desire to not be dependent on an internet connection to run their POS and restaurant management applications, I’m sure the costs of a connection with an actual SLA are obscene compared to the average “business cable” connection you can use if it doesn’t need to be super reliable.
Still, restaurants have been using computers for decades. It looks as if they have a tech team that’s trying very hard to apply trendy tools and concepts (Kubernetes, “edge computing”) to a solved problem. I’d love to be proven wrong, though.
I’ve never been to one of these restaurants but I can’t imagine anything that needs a literal cluster to run its ordering and payments system.
Sounds like an over-engineered Rube Goldberg machine built for some resume/CV padding.
While restaurants certainly have been using computers for decades the kind of per location ordering integrations needed for today’s market are pretty diverse:
If you run a franchise like Chick-fil-A, you don’t want a downtime in the central infrastructure to prevent internet orders at each location, as it would make your franchisees upset that their business was impacted. You also want your franchisees to have easy access to all the ordering methods available in their market. This hits both as it allows them to run general compute using the franchisee’s internet, and easily deploy new integrations, updates, etc w/o an IT person at the location.
I have a strong suspicion that this is why I see so many Chick-fil-As on almost every food delivery service.
Beyond that, it’s also easier and cheaper to deploy applications onto a functional k8s/nomad/mesos stack than VMs or other solutions because of available developer interest and commodity hardware cost. Most instability I’ve seen in these setups is a function of how many new jobs or tasks are added. Typically if you have pretty stable apps you will have fewer worries than with other deployment solutions. Not saying there aren’t risks, but this definitely simplifies things.
As an aside I would say that while restaurants have been using computers for decades they haven’t necessarily been using them well and lots of the systems were proprietary all in one (hw/sw/support) ‘solutions.’ That’s changed a bit but you’ll still see lots of integrated POS systems that are just a tablet+app+accessories in a nice swivel stand. I’ve walked into places where they were tethering their POS system to someone’s cell phone because the internet was down and the POS app needed internet to checkout (even cash).
Most retail stores like this use a $400/mo T1, which is 1.5 Mbit/s (~185 KB/s) symmetrical – plenty for transaction processing but not much else. Their POS system is probably too chatty to run on such a low-bandwidth link.
It could just be a basic HA setup or load-balancing cluster on several cheap machines. I recommended these a long time ago as alternatives to AS/400s or VMS clusters, which are highly reliable but pricey. They can also handle extra apps, provide extra copies of data to combat bitrot, support rolling upgrades, and so on. Lots of possibilities.
People can certainly screw them up. You want the person doing the setup to know what they’re doing. I’m just saying there are benefits.
Yes, “Nearly all users of version control are non-developers.” is the author’s thesis for why DVCS is bad. I personally also find it frustrating when someone writes a book about something they could communicate quickly and clearly.
Their argument for why it’s bad for development is that their drive must be clean when crossing the US border as though that’s a problem that most people have.
Their argument for why it’s bad for long-lived development is “your checkouts will necessarily become too big and slow someday, if the project stays on this system long enough”, which the author really has no evidence for.
I think this is a fun idea, and I think your stated goal of a middle ground between “everyone uses GitHub” and “everyone self hosts everything” is worth pursuing.
What could be a killer feature for this though is an API to export all a user’s data in a simple format, say a .tar.gz archive with directories for email, git repos, pastes and CI logs. If you keep your feature set small it should be possible to automate this so users who are interested can just download their archive using cron.
If I could easily integrate this with my own backup regime, I’d consider using it for real work.
Easy backup/restore + custom domain would make this really usable. I just can’t see how anyone could use this without planning for when it goes down.
You’ve hit the nail right on the head.
Data portability (export/import) and social-graph/link portability (custom domains) are the two preconditions to make this kind of thing work (and why I’m uninterested in having an account on a mastodon server - no custom domain support).
All of this is on my personal roadmap: stages 1 and 2 are mentioned on the website, and I have a stage 3 in my head which includes all of this. The main roadblock is the fact that Asymptote Club is cobbled together from a bunch of third-party projects, and they don’t always have fantastic export support. Though, they are all open source, so I could add support in the future.
At least at the moment, you can rsync your home directory from the shell server, which includes mail. Pastes are meant to be ephemeral, for example to give someone a crash log, for now. I might be able to write a simple backup tool with the Gitea API to automatically clone a user’s repos.
As for custom domains, I could potentially add custom email domain support, but I’m not sure how I’d execute custom domains on the rest.
> As for custom domains, I could potentially add custom email domain support, but I’m not sure how I’d execute custom domains on the rest.
This is gonna be really hard for anything but email. Most apps are not written with this use case in mind :L
I’ve given some thought to this.
For apps which store a domain in a config file, you could launch an instance on-demand and terminate it after some inactivity.
Of course, that could start using a lot of RAM/CPU pretty quickly; much better to use software which was built with this use-case in mind.
Indeed. I’ve used ejabberd as well as Openfire in the past. Cert handling (as an example) was hell with Openfire.
Surely there is some good solution by now for the mess that is (was?) certificate trust stores on JVM based languages?
It’s supposed to be “sequel”, but I say s-q-l because sequel doesn’t make any sense to me.
I will, however, fight to the death anybody who mispronounces router as “rooter”.
> I will, however, fight to death anybody who mispronounces router as “rooter”.
What does a router do? It routes packets. It does not rout the packets, they are not fleeing the battlefield in retreat.
Do you pronounce “Route 66” as “Root” or “Rowt”?
(I realise this is an “American” vs “English” distinction, it’s just one that has always confused me.)
> they are not fleeing the battlefield in retreat.
After my networking code gets done with them, they are.
Evidently you follow Klingon coding best practices.
I say /ɹaʊt/ 66, always, but /ɹuːt/ 66 doesn’t bother me nearly as much as when talking about routers.
This looks racy to me, can someone explain where I’m going wrong?
Thread A is the first to acquire the benephore, checks and increments the atomic variable, finds it is zero and proceeds without acquiring the semaphore.
While Thread A is in the critical section Thread B arrives and acquires the benephore, finds the atomic variable to be non-zero so acquires the semaphore. Because it has the semaphore it proceeds into the critical section. Now there are two threads concurrently in the critical section.
What prevents this scenario?
I think you’re right, unless I’m missing something obvious.
Worse still, if T1 enters the critical section and is followed by T2, if T1 now makes progress it will find benaphore_atom > 1 and call release_sem() on a semaphore it doesn’t hold. Which is probably either a crash or UB, who knows.
I was missing something obvious.
The semaphore is initialized into an ‘unavailable’ state.
When Thread B attempts to acquire the newly initialized semaphore, it blocks as the semaphore is in its ‘unavailable’ state.
Thread A later finishes up in its Critical Section, and seeing that benaphore_atom > 1 it increments the semaphore, allowing Thread B to make progress.
At the end of this execution, T2 sees !(benaphore_atom > 1) and continues without marking the semaphore as available.
Semaphores don’t provide mutual exclusion.
You use them to e.g. count the number of items in a queue and wake up enough threads to process them. Then those threads use a separate mutex to synchronise any operations on the queue.
So, why isn’t Theo called on his rants more often?
We even have a nice little epithet ready-made: DeRants.
I think people who aren’t up for that particular brand of interaction just avoid the project – which is probably how he likes it!
I was using OpenBSD for pf and relayd for a few years, but I didn’t participate in the mailing list.
Because it happens much less often than you or anyone else believes.
I participate in all the project-related mailing lists daily - it’s easier to find someone completely unrelated to the project ranting on our list than it is to find Theo finally pushed into replying.
Now go through the emails from the same time period looking out for Rupert Gallagher, e.g. in the SSD TRIM thread - note you won’t be able to on marc.info, as he uses ProtonMail. That’s a person not related to the project; Theo just stands out to you as he is a known person, and people trigger him with emails like the one quoted below:
Date: Wed, 06 Dec 2017 03:15:57 -0500
From: Rupert Gallagher
To: Mike Burns
Cc:
Subject: Re: TRIM on SSD
I know well that article, because it is several years old with no updates.
Those working on ffs should do what they are supposed to do. Lack of money? Setup a stickers sale or a kickstarter, get the money and just fucking do it.
Sent from ProtonMail Mobile
edit: removed emails from the headers, no point feeding spam bot crawlers.
I don’t disagree that there are plenty of abrasive posts from others on -misc (probably more so than on any other list I’m subscribed to… well other than cypherpunks, but that’s another story…). I can’t help but think that the tone of some of Theo’s posts has encouraged others to post in a similar vein.
Yes, I know “shut up and show me the code”, but surely newbies need to start somewhere?
Is that a deliberate feature of Proton Mail or a happy accident? I fail to see how a service like Proton Mail can work for a mailing list scenario, surely the mail is sent in plain text as per normal?
Well you can read it and pass it through a base64 decoder. It’s just something the marc.info mail archive software is not able to handle and the user decided not to disable that in his protonmail settings. It’s not for security.
Out of curiosity, why? I’ve always considered it superior to have build configuration versioned directly with code. You can guarantee lockstep changes between build and code, and you only need a single repo to understand the code and how it’s built.
We use Jenkins with configuration solely in Jenkins at the moment and it’s inferior to if we used Jenkins files in our repos.
I hope I answered in my other comment, at least to some extent.
The build (and test) recipe shouldn’t be part of a CI configuration file; it should be part of the Makefile or whatever your language’s toolchain/build framework uses. You should be able to build the application without having your own CI instance. Sure, the Makefile won’t install the packages needed to build the software (dependencies should be mentioned in the README file), as that’s a distro-specific thing - that’s why CI environments need a preparation step, and it needs to be stored somewhere. I just don’t like storing such things in the project’s repository, as they relate strictly to the tooling wrapping the project, and that tooling may change at some point. It’s a CI-specific (and therefore often distro-specific) thing, so it’s a meta project-workflow thing. If you want it versioned, I think it should be versioned separately.
If you care about compatibility, you should be able to avoid lockstep changes (your older source code should build fine with newer build preparation configuration, and your newer source code should build fine with older build preparation configuration). And if that’s not possible, then it’s a good thing to have a clear indicator in the form of a build failure, because it will most likely hit users too.
README (or INSTALL) file should be enough to let user/developer know how to prepare building environment in their own distro (which may not necessarily be your distro or distro used by CI).
tl;dr (simplistic): Today you may be using one CI, but it may be other thing in future. It’s not crucial part of the project (even if very useful), thus it shouldn’t pollute its development repository. That’s my view.
This is my pet hate with Jenkins, every time we open a new longer-lived branch that deserves a CI build, it’s an exercise in copying config from one browser tab to another. I’d love to just be able to modify a .jenkins.toml file.
If you use a newer Jenkins with Pipelines, you can use a Jenkinsfile.
Just got an email internally telling us the password expiration window has been reduced from 180 days to 90 days. There’s no use sending this article to the powers that be internally, somewhere there’s a checkbox in a security window dressing form that requires 90 days expiration and that’s what it’s going to be.
It’s doubly awful when you have a couple of disjoint authentication domains, say an AD arena and a UNIX one, and they have inconsistent expiration periods, length & complexity requirements, and reuse policies.
Best of all are the systems where you can’t change your password until it expires, thus making it impossible to keep in sync with other domains.
I wonder how many of these “best practices” are simply word-of-mouth beliefs handed down from management to management with little control over exactly how well they’re sourced.
Oh well next password change here will be a passphrase. I’ve been relying on a little C program that generates nicely pronounceable passwords but it’s time to uppgrayed I think.
>While Kodi is undoubtedly the most popular media player software in the world right now
That’s news to me, I’ve never even heard of Kodi. Are we sure it isn’t VLC or iTunes or Windows Media Player? Where are usage stats to back this up?
I imagine that Kodi’s user population is difficult to measure. I have seen Android HDMI sticks that ship with a custom Kodi build enabling users to browse various streaming media repositories. It’s a fascinating ecosystem that seems to be largely invisible to people in the American Netflix/Hulu world.
It’s largely due to the American world of streaming services with reasonably large back catalogues and favourable geographic restrictions.
The less salubrious parts of the British media have been on a bit of a crusade against what they have dubbed “Fully Loaded Kodi Boxes”, FireTV clones or equivalent android boxes, with Kodi preloaded and a bunch of unofficial addon repositories enabled.
The unofficial addons of course allow the (generally less tech-savvy) user to view pirated streams of TV shows, and live channels, usually with somebody else’s advertisements superimposed or injected in the regular commercial breaks.
The usual suspects are obviously unhappy about this, as the only thing they hate more than internet pirates watching content they haven’t had their pound of flesh for, is the general public watching content they haven’t had their pound of flesh for!
Thus, you have fantastic, sponsored (allegedly) content from the likes of The Daily Mail, asserting that “Kodi Boxes” will literally kill your family [1].
It seems “Lobstered”:
The server is taking too long to respond; please wait a minute or 2 and try again.
Yeah, Pinched by the Lobsters should be the tagline for this kind of thing. I didn’t know we had enough readers to do it, though, just looking at the comments. Might be something else unless it’s a weak server setup.
Lobstered or otherwise, it’s still toast.
Here’s the IU mirror of the thread, for anyone looking: http://lkml.iu.edu/hypermail/linux/kernel/1711.1/04620.html
I agree that the concentration of data at Slack makes them a very valuable target. But I’m wondering if self-hosting is really safer than using Slack:
What is your opinion on this?
I agree, but there’s many reasons beyond those to run your own service, such as policy reasons.
My main complaint about Slack for FOSS projects, for example, is that Slack is policy-wise built around being a corporate chat (which has implications for its privacy policy, etc.).
Is there a service that works much like Slack but with a default-public design intent? Is Gitter the go-to for this kind of thing?
Gitter has become better in that regard (especially with moderation tooling).
Discord, Matrix and (almost) Zulip are options, but all with different drawbacks. Zulip has the drawback that moderation features are currently not in the hosted offering. Discord seems to lead the pack when it comes to moderation. I’m obviously not a full-time user of all of them.
As much as I dislike IRC, IRC as practiced has a very good model for FOSS: many channels, optional logging and clients geared towards being “AFK by default”.
Sadly, there’s almost no chat software built around the needs for open communities.
We’ve used Gitter and Slack for various OSS projects (both ours and others). Gitter’s great because it’s so easy for people to join. However, it doesn’t scale well as more people join a channel, because the search is really bad, no threading, scrolling through history is really cumbersome, etc. Also, the mobile app is terrible at notifications.
Slack definitely feels like a closed ecosystem. The workflow of getting an invite, then signing into the Slack client, etc. adds a lot more friction to the process. Plus, switching between Slack chats on the Mac client is SLOW.
> the search is really bad, no threading, scrolling through history is really cumbersome, etc.
It feels to me you’re looking for email, not a chat.
That’s a simple statement to make, but the difference between email and chat is none of these properties.
Temporal and conversational characteristics. Chat is built around real-time exchange of short messages, mail is built around slower discussion with larger, self-contained messages.
Not really. We want real-time conversation with people. For example, someone may download Telepresence (one of our OSS projects) and struggle to get it working. Telepresence under the hood does a bunch of networking stuff: deploys a proxy in Kubernetes, sets up a VPN tunnel via kubectl port-forward, etc. So being able to talk to someone with problems in real-time (versus filing a GitHub issue, or email) is extremely helpful to accelerate the debug process.
And then, it would be nice to search through and say oh yes, so-and-so had an issue with Mac OS X and kube 1.7 that was like this… but you can’t really.
YES this! I don’t hear this being talked about nearly enough, but I seriously dislike the whole model where individual communities are shuffled off to their own ‘teams’ or ‘rooms’ or whatever. IRC’s channels offer an invitation to collaboration and discovery - NONE of these services offer that, and I don’t understand why so many people are willing to throw the baby out with the bath water like this.
That address only the question of external attackers. But can you trust Slack to keep your data private? I think many people are also worried about this.
I also feel like we should definitely model “attackers” and “state actors” differently. Within the law of the state Slack is in, the state can just walk in, ask, and get things. No amount of anti-intrusion measures can counter that.
(In Germany, that’s the same, but at least everything happens in the territory I have my lawyers in :) )
I agree. It’s a very legitimate concern. For example, if I was working for a defense company, I would definitely not use Slack.
We treat Slack internally as open to the internet (or assume it to be). No passwords in Slack, no secrets of any kind. If the contents of our conversations leaked, it’d maybe be bad, but you have to assume someone will be reading them at some point regardless, for potential compliance reasons.
Sure, but you have to know a lot more things. WP is easy to spam because it’s pretty easy to find. Even if 60% of companies started using Mattermost, you’d have to find their instance (is it chat.example.com or is it mattermost.example.com or…?). Plus it’s not that difficult to fend off wide script-kiddie attacks like this: IP rate limiting, regular patches, etc. That’s not very difficult to do with WP or Mattermost. The upside is that Mattermost, being a Go application, has a much smaller attack surface, plus they take security fairly seriously and release often. WP started with a negative security outlook, and PHP didn’t help the problem. PHP is like the opposite of safe and sane (it’s getting better, but still).
System administration is hard, but if you aren’t investing in good people, then chat data is probably the least of your worries… see Equifax, etc.
I’m quite sure Slack also rents virtual instances, so it’s the same issue.
I agree that if a company already self-hosts some services, then it makes sense to also self-host a service like Mattermost, which is easy to install and administrate.
Good point about Slack also renting virtual instances.
I know you are not addressing me, but: when you host yourself you can (easily) add additional layers of security. For example put the services behind a VPN.
True, but I don’t believe that perimeter security is very useful with the open networks we have nowadays (see BeyondCorp).
Possibly unpopular opinion but no. The word “drama” feels extremely dismissive. I really don’t like it when people dismiss anything as “drama”.
If you don’t want to read a comment chain, don’t open the comments. It’s not like the whole comment chain is directly embedded into the front page.
The entire point is to be dismissive, right?
Drama submissions are categorized mainly by not being that important/actionable to daily stuff and more importantly by being “reality-free zones”.
The Google thing, as a great example, is drama because:
Such things need to be aggressively purged from the community, because they’re toxic.
That can easily be abused. Suddenly someone thinks React.js or C++ in general is toxic and marks everything as such.
Absolutely, I was unfortunately stuck for words to suggest a less dismissive tag.
A tag like ‘communities’ doesn’t feel specific enough to identify the precise kind of thread I’m referring to though, perhaps ‘political’?
Indeed, you’re right. Honestly I’m starting to think that not tagging these at all and just letting Flag(Off-Topic) do its work might be the best solution.
‘lobster fire’
I suggest ‘lobster boil’ as the ideal term to describe any future lobste.rs hot topics :)
I was debating between “train wreck” and “dumpster fire” when the present term was suggested on IRC.
I’ll be using this suggestion from now on though…
Adding a drama tag signals that it’s ok to post such stories here.
I would much rather see a drama or “likely to generate awful discussion” downvote. We can flag as off topic, but that leaves room for people to argue “this drama is about technology so it’s on topic”.
I was worried that the problem with flagging such stories as off-topic and/or hiding them (or just stating in some rule or guideline that they aren’t on-topic for this site) is that it could be considered “taking a position/side” on the issue. And that tends not to end well.
Right now I have three ergodox infinity (here’s the one I’m using now: https://imgur.com/a/us5PXto ) one ergodox-EZ, two kinesis contoured (ps/2 and usb) and an IBM Model M.
At the moment I only use the ergodox infinity keyboards and the online layout configurator, because I can’t for the life of me get the firmware to build. But I do regularly tune/change my layout in small ways.
Those keycaps are a work of art, care to share where you found them?
Oblotzky SA Oblivion https://www.massdrop.com/buy/massdrop-x-oblotzky-sa-oblivion-custom-keycap-set?mode=guest_open
I have both Oblivion and Hagoromo sets with colevrak and ergodox additions.