Monaco hanging storage bags. They’re what they use at pharmacies to hold all the prescriptions. The bags are transparent and hang from a rod so they don’t bunch up or get lost under each other.
I tried all the methods involving tubes and filing cabinets and so forth and hated all of them. This is the only solution I’ve found that is actually good.
Coincidentally, this approach came up on Cool Tools today https://kk.org/cooltools/organizing-cables-and-other-gear-using-hanging-storage-bags/
Just to be clear, it sent the names of files in the payload, not their contents. That’s still a breach of trust, and potentially a very serious one for some use cases, but this headline overstates the scale of the problem.
From what I can see it’s at least names, sizes, and modification(?) dates. So “names” alone understates it.
But yes, it really depends on the use case. On a backup service, I can certainly imagine a file name that mentions some activity (health related, for example) plus a date being more compromising than a piece of out-of-context content (a random x-ray scan).
Of course, in the given example that’s probably not the biggest issue, but who knows how many products have the potential to use Backblaze accounts in various contexts. The risk of tying file names important enough to be backed up professionally to Facebook profiles seems hard to overstate.
I really dislike R, but credit where it’s due.
As far as I can tell, as an outsider, a big part of the reason R’s external API is so stable is that it doesn’t actually change that much internally. I work on a project that patches R’s source code to modify its I/O behavior. It’s part of my job to reapply the patch every time there’s an R release. The early days of the project are before my time, and maybe we just picked a lucky bit of the tree to mutate, but as far as I can tell, the patch has been applying cleanly since 2013 (R 2.15.x). 8 years later, after seven minor releases and dozens of maintenance releases (3.0.x to 3.6.x), 4.0.x was the first time we’ve had to resolve a merge conflict. It’s a very stable codebase!
(Unrelated to this post, so I hope this isn’t a derail, but as it happens, the reason for the merge conflict was the decision for the 4.x series to rename the --slave argument to --no-echo.)
This set of questions doesn’t cover everything, and it may cover things you explicitly don’t care about, but I think it does a great job of illustrating a process that can allow you to put together a set of questions that are useful, fair and have robust grading rubrics: https://jacobian.org/series/unpacking-interview-questions/. (I also think a couple of the questions are really good, and plan to steal them next time I’m interviewing.)
On the more technical end, I found this rant a useful reminder that there is no perfect interview process: https://software.rajivprab.com/2019/07/27/hiring-is-broken-and-yours-is-too/. Which is not to say you shouldn’t do the best job you can, or that all processes are equally bad, but any process has limitations and weaknesses you should be aware of.
I guess the central point of this talk is: ‘The “Design Patterns” solution is to turn the programmer into a fancy macro processor’ but that is not necessary in higher level languages, where building blocks (such as iteration) are built into the language.
I wasn’t aware of this talk, but I wrote a short blog post making a similar point many years later: https://mike.place/2018/patterns/. I was coming from a different direction: I learned higher level languages first, and simply could not see the point of these patterns, which seemed like hugely over-abstracted one-liners.
If you get a chance, I would highly recommend this talk https://www.deconstructconf.com/2017/brian-marick-patterns-failed-why-should-we-care.
I’d strongly recommend removing “dumb” from the name unless you want to have to deal with that being A Thing for the foreseeable future.
Seconding. Someone’s going to see “dumb” and “down” together, think “This person has something against those with Down’s Syndrome,” and raise an avoidable stink. Dealing with such a reaction would take time away from developing this standard, regardless of its validity.
Maybe something like SimpleDown or Markless or Markwork instead.
I made a related point last time he posted stuff from this project.
Very interesting feedback from you and the others here on this thread.
I see no reason why I can’t just call the language “Scroll”, since that’s what I call it internally in the code now anyway.
I’m not 100% sure, because I do love the phrase “dumb it down”, and a lot of what I do each day is trying to “dumb down” things (“dumbdown” for short). To me there’s no connection wired in my brain to those bad connotations. I like the ethos of “dumbdown” a lot too, because I see a lot of pompousness and ladder kicking in the programming world, and I think one great way to leave the world a better place is to remove unnecessary complexity from a subject once you’ve mastered it, so that other people can move quickly through that stretch and on to better things.
I would be a little mortified if anyone affected by Down’s were offended. Definitely not something I would have been aware of without these comments. Personally, there are a few words like that which are innocuous to most people but make me cringe, b/c I know they offend people that I love. I hate to inflict that on anyone, but at the same time language is messy, so I still need to think on this one more because I don’t know what a good decision-making rubric is.
Done! Thank you and @mlw and @christianbundy for bringing this to my attention. This wasn’t a connection I had been making in my head, but given that you all brought it up, and that there was no technical reason to stick with that name, it seemed right to change it now.
Appreciate the feedback and help!
A reasonable point.
I like the word “dumb” as in, “we’ve spent a lot of time making this thing powerful but also as simple as possible, so even if you are tired and groggy—aka in a dumb mode—you will be able to use it safely.”
I use this monstrosity, which I found here https://ses4j.github.io/2020/04/01/git-alias-recent-branches/
lb = !git reflog show --pretty=format:'%gs ~ %gd' --date=relative | grep 'checkout:' | grep -oE '[^ ]+ ~ .*' | awk -F~ '!seen[$1]++' | head -n 10 | awk -F' ~ HEAD@{' '{printf(\" \\033[33m%s: \\033[37m %s\\033[0m\\n\", substr($2, 1, length($2)-1), $1)}'
It does almost the same thing as the one in the OP. The difference is it shows every branch you’ve checked out, rather than every branch you’ve actually changed. Granted, the alias is insanely ugly, but I find it more useful as a list of recent “work in progress”. That’s because my definition of “work” includes reading code as well as writing it.
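In case it helps anyone read it, here’s roughly what each stage of the pipeline does (my annotation only, not a tested rewrite; I’ve left off the final awk, which just reformats and colorizes the output):
git reflog show --pretty=format:'%gs ~ %gd' --date=relative |
  grep 'checkout:' |       # keep only "checkout: moving from X to Y" entries
  grep -oE '[^ ]+ ~ .*' |  # extract the target branch plus its relative date
  awk -F~ '!seen[$1]++' |  # drop repeats, keeping the most recent checkout of each branch
  head -n 10               # limit to the 10 most recently visited branches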
Practicing self-care in what I assume will be a difficult week by:
Thanks for sharing! Skimmed it a little and bookmarked to read later when I have time. This looks awesome.
There’s a lot of awesome work going on in federated learning / privacy preserving ML. I heard a talk by someone at Google at NeurIPS 2019 about how they use it for their keyboard prediction (among other things) and thought it was super exciting. Got me down the rabbithole of reading papers in this space, although I haven’t had a chance to apply it in more practical settings (yet).
Glad you’re interested! Google’s keyboard prediction work is perhaps the most prominent deployment of this stuff so far, and the papers are great. Exactly how that feature works as a piece of software is a little opaque IMO, and I’m a big fan of this blog post as a more accessible real world example: https://florian.github.io/federated-learning-firefox/.
Yeah I’ve read that post. It made me wonder why search engines haven’t yet tried to do local learning for recommendations, or what that might look like if implemented…
Do you know if anyone is distilling information in this space (academic / practical applications)? Writing about it is on my TODO list and I finally have time to start consolidating information / putting together resources, but thought I’d see if others already do this. I’m aware of a few yearly reviews done by prominent blogs but not sure if there are niche writers who I haven’t found yet!
The report I wrote was an attempt at that, although it’s pretty high level, and it’s now two years out of date. (I posted it here because it had just been unpaywalled.) The report was the basis for a StrangeLoop talk last year: https://www.youtube.com/watch?v=VUINeZUAlx8 (video) or https://mike.place/talks/fl/ (slides). The last slide has a bunch of references, but they are all >= 1 year old.
Other than that, the recent reviews I’ve seen are all quite academic (no blog posts that I know of, although I’m sure they exist). Probably the most useful academic review I’ve seen is this one https://arxiv.org/abs/1912.04977. But because it reviews open problems (rather than solved problems!) it necessarily doesn’t have much to say about the real world practicalities. https://arxiv.org/abs/1902.01046 may also be useful, but is a little vague and very Googly.
I have used Anki before for learning foreign languages. Spaced repetition is key to memorization.
I am, however, very surprised that there are people out there memorizing function call signatures. It never crossed my mind to do that. My intuition is that it would take away the joy of learning anything new.
But then, I know that for some people memorization isn’t torture. Maybe this is what’s going on here?
Author here. I don’t memorize function signatures (or anything else my IDE helps me with). I do memorize anything I have to look up more than a few times.
It’s a good weekend project, nothing to be embarrassed about.
I remember having the same issue and selecting Asciidoc as a format. The format gives better control over layout, such as placing the headshot on the top-right corner. And the Asciidoctor tooling outputs PDF by default. Unfortunately, the resume contains too much private information to be open-sourced right now.
I use LaTeX for mine. One word of warning:
placing the headshot on the top-right corner
This is generally a really bad idea. Companies are starting to care about implicit bias in their hiring process and if the first thing you see is a photo then that maximises the likelihood of implicit bias influencing shortlisting decisions. To avoid this, someone in HR will do a pass over the CV and strip out things like this. The hiring manager will see a mangled version of your CV.
If you want to avoid this, don’t put age, ethnicity, citizenship, or a photo anywhere on your CV. Companies may request this information (particularly for right-to-work checks) separately, but that goes through HR, not to the person making the hiring decision. In some jurisdictions, taking any of this information into account in the hiring process is illegal. The simplest way of avoiding legal risk is to just throw any CVs that include this information in the bin.
Do you know if they are also stripping the name from the CV?
With Asciidoc it’s quite easy to customize the CV with conditionals so it’s not necessarily a problem if they want the document with just the body of the content.
I’ve only rarely seen that done. There is research showing that it’s a good idea, but humans aren’t good at remembering candidate numbers, so going through a stack is quite tricky.
I wonder whether this is a US thing…
I just checked several large companies in my area (in Germany) and their application forms all have an explicit field for a photo. It wasn’t marked as required, though.
If it’s a separate field then it may only be used for identifying candidates when they come in for interview and not presented to the hiring manager.
It’s a good weekend project, nothing to be embarrassed about.
For anyone wondering what this is about, it’s in response to my description on this post, which I guess should have been a comment: “I feel kind of embarrassed submitting this trivial weekend project but it was an itch of mine that nobody else had scratched and perhaps someone other than me will find it useful.”
The format gives better control over layout, such as placing the headshot on the top-right corner.
If you’re happy with the control of asciidoc then no worries, but just in case you were interested in dropping down to Markdown source: I was surprised by how easy it was to format the plain HTML output (no classes or ids) with CSS. E.g. in the example resume.md, the line of contact details below the name is a <ul> that I style with the h1 + ul CSS selector. Not having the ability to apply classes etc. to specific elements makes styling a particular instance a little fragile, and I wouldn’t play tricks like this across an entire website, but for a single page resume I think it’s fine.
So, for example, you could put the <img> immediately before (or after) the <h1> and then select it uniquely in CSS with h1 + img and shift it up to the corner with the usual tricks.
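By “the usual tricks” I mean something like this (just a sketch: it assumes the <img> sits right after the <h1>, the page body is the positioning context, and the sizes are made up):
body { position: relative; max-width: 48em; margin: 0 auto; }
h1 + img { position: absolute; top: 0; right: 0; width: 8em; }  /* pin the headshot to the top-right corner */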
That said, I agree with the advice that you should not include a headshot in a resume in 2020! And I especially wouldn’t use Europass format, if that’s where you’re getting this idea from. (See, e.g. https://twitter.com/brkzkn/status/1283785183187988482).
I always use Chrome Headless for my resume, made in HTML+CSS (source). I’ve been happy with it: here’s what it looks like as PDF.
I tried to use Firefox’s PDF facilities, but it couldn’t properly render links, which I have on the PDF version of my resume. So I’m stuck with chrome for the time being.
If you want to drop Chrome for this use case, I would give weasyprint a try. I was pleasantly surprised that it just worked, without any configuration.
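E.g., assuming the resume is a single self-contained HTML file (file names here are just placeholders):
pip install weasyprint
weasyprint resume.html resume.pdf   # render the HTML (plus its CSS) to PDF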
The review says: “Even with relatively quiet switches, the open construction means that the sound of the keys getting released is audible in most environments.” Which makes me wonder, are there mechanical keyboards that are particularly quiet (for a given switch) because of their chassis?
The classic 42-key Atreus is a bit quieter, but this has more to do with using Matias Quiet Click switches (with a built-in rubber bumper) than the chassis construction, though I expect using wood for the chassis helps some.
You can open up the Kailh switches in the Keyboardio Atreus and add rubber bumpers to each switch, but it’s a somewhat involved process. You might be able to buy MX-compatible switches with the bumpers preinstalled nowadays; I haven’t looked into it. The keys are hot-swappable though.
There are the Cherry MX Silent Red switches, which have built-in rubber on the bottom of their stems to dampen the impact when bottoming out the key. They’re linear. I used them for a while; they are very quiet and a joy to type on.
In some cases, adding a neoprene mat (like a full-desk mousepad or something similar) underneath a mechanical keyboard can make it quieter, assuming that the bulk of the noise comes from the chassis transmitting vibrations to the desk. A solidly-constructed metal backplate should help as well.
From personal experience I know that different case materials and build styles, different keycap materials and thicknesses, and different mounting styles all affect the sound; see for example this video.
I’d be interested in a follow-up on this piece. Clearly the world has changed since this was written: “I’m not too concerned about the Coronavirus; judging by the numbers the mortality rate is quite low.”
My position on this hasn’t really changed.
My point was that we need to look at the data, and reject sensationalism. The people who say COVID-19 is a conspiracy theory are wrong. The people who say it’s the apocalypse are also wrong.
As the world has learned more about the virus that causes the disease, it appears the mortality rate is actually lower than initially thought.
I’m not sure what more I can say on this.
My point was not that that sentence was wrong. My point was rather that the article was clearly written before hundreds of thousands of people died.
If you wouldn’t change a thing about the article and you don’t plan to change how you think about travel in the current context (a pandemic that has killed hundreds of thousands of people, in part thanks to people spreading it through travel and placing stress on the healthcare infrastructure of small communities) then … well, that answers my question I guess.
This is insane.
I am in a country that handled the pandemic exceptionally well. According to official figures, not a single person has died here.
The lockdown here only lasted three weeks. During that lockdown, of course I meticulously followed all the rules.
Are you trying to suggest that people in general should not travel internationally because a pandemic might occur? Because that is ludicrous.
It feels as though you’re making a sanctimonious moral judgement based on how you perceive my reaction to the pandemic. As in, I am a bad person because I don’t care enough about those who died. If this is accurate, then you are wildly misguided.
What do you think I ought to change about the way you think I think about travel?
It feels as though you’re making a sanctimonious moral judgement based on how you perceive my reaction to the pandemic. As in, I am a bad person because I don’t care enough about those who died. If this is accurate, then you are wildly misguided.
This is not accurate. I guess I wasn’t clear. I’m glad you live somewhere it hasn’t been a problem! (Many places on your heat map haven’t been so lucky!) I’m also not saying anything about what you personally have done during covid-19. I have no idea what you’ve been up to or where you live.
Are you trying to suggest that people in general should not travel internationally because a pandemic might occur?
No. Obviously people should not travel internationally at the moment (and in particular should not travel to less rich countries or remote locations where local healthcare infrastructure might be stressed by visitors) because there is a pandemic happening right now. I don’t think this is controversial. Indeed it’s pretty much the law for many destinations.
I revived this because I was interested in whether what has happened has made you think more broadly about your attitude to globetrotting after this is over. I don’t think this is an “insane” possibility. A lot of people and organizations are changing their attitude to where they live, work and travel. But in your case I guess not. In which case I thank you for your time.
A lot of people and organizations are changing their attitude to where they live, work and travel. But in your case I guess not.
The changes we are seeing are more companies allowing their employees to work from home.
Allowing employees of my company to work from wherever in the world they wish to is something I established on day zero.
So I’m not exactly sure what else I am supposed to change.
You don’t need to do git config --global core.excludesfile. This file has a default path (.config/git/ignore on many systems, but see man gitignore). Just use that file.
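For example, on a typical Linux setup (the exact path depends on $XDG_CONFIG_HOME; check man gitignore):
mkdir -p ~/.config/git
echo '.DS_Store' >> ~/.config/git/ignore   # personal ignore patterns, one per line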
I’ve seen this approach before as git-churn: https://github.com/garybernhardt/dotfiles/blob/master/bin/git-churn. But going further and assuming these files are the “pain points” seems like a stretch. In fact, I think a strong power law is usually what you want here. The alternative, a flat distribution, makes it harder to find your way into a codebase, because every file is equally important.
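For anyone curious, that script boils down to counting how often each file appears in the log; roughly (my approximation, not the script verbatim):
git log --name-only --pretty=format: |  # list the files touched by each commit
  sort | grep -v '^$' | uniq -c |       # count how often each file appears
  sort -rn | head -20                   # most frequently changed files first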
I’ll be interested to see if Neovim is able to cherry-pick this Vimscript work and get the best of both worlds: performant Vimscript for backwards compatibility and/or new users migrating from Vim, and performant Lua for greenfield config/plugin work.
I remember hearing in a presentation that the neovim team does not want to spend time on Vimscript 9. Here is one issue mentioning that it is not a priority, https://github.com/neovim/neovim/issues/13625, and a wiki page, https://github.com/neovim/neovim/wiki/Merging-patches-from-upstream-Vim#types-of-not-applicable-vim-patches, stating that patches with Vimscript 9 are “Not Applicable”.
That’s somewhat unfortunate, but I get it! Probably a large undertaking for a relatively small perceived benefit, especially given the trend towards “Lua all the things” in the nvim community.
All else being equal, I think most people would (objectively) prefer to write Lua (applicable elsewhere) over Vimscript (niche, doesn’t currently have modern language features, etc.), especially if you’re starting circa 2022 without any legacy baggage.
That said, I also understand Vim’s/Bram’s prioritization of backwards compatibility!
IME vim9script is not backward compatible with legacy vimscript. The two can call functions defined in the other language, but that has been true of vimscript and lua for years. I think the primary reason people have preferred vimscript over the various other supported languages is that FFI details are difficult to get a feel for when you’re also trying to make something new. It’s easier to keep track of quirks in the code you have in your visual frame than it is to keep track of quirks that only exist within the FFI bridge.
Vimscript 9 support is explicitly a non-goal in the project charter.
These language improvements seem to suggest the scripting interface will also provide type safety when you call vim builtins. I have to imagine that’s either been missing from neovim, or that it’s implemented as a lot of neovim-maintained validation definitions that could have mistakes. I’d definitely want to get this benefit from the first party.
It may be possible to write a compiler for vim9 in Lua and then use that.