So good: https://mobile.twitter.com/KrangTNelson/status/924372609852231685/photo/1
account suspended. what was that?
The tweet was two screenshots.
One of Twitter user @KrangTNelson tweeting (paraphrased) “No thanks, I only get my crypto tips from the guy who made garfield”.
The second was a screenshot of Scott Adams’ Twitter account showing he had blocked Krang.
No idea why Krang was banned.
Best guess is a parody tweet promising “antifa super-soldiers” on November 4th, which some strange people took seriously and complained about. His account’s been restored.
A screencap of the tweet is here: https://davidgerard.co.uk/blockchain/2017/10/29/the-dilbert-ico-analysing-scott-adams-crypto-offering-whenhub-saft/krang-t-nelson-scott-adams-tweet/
I find e/E and b/B very quick, often better than f/F because there’s no need to pick a letter to jump to.
I use tig all the time; it’s a great tool. In particular, tig blame mode lets you jump to a line’s parent commit with , (and you can return to the previous state with <). This is great for finding the provenance of a given bit of code.
GPG is so simple. You and someone else generate keys. You exchange them safely, somehow validating each other. Then just write stuff in a text file with a boring name, seal it with GPG, and send the resulting file over some medium (e.g. email). Ignore all the other functionality, since it’s complicated or requires trusting third parties. Just do one-to-one with text files. The UI problems could even be scripted away, or built into an editor as an extension.
The result: you get protection the NSA couldn’t break, and it works on diverse hardware and software (which reduces subversion risk). Most people aren’t worried about the NSA; they usually face weaker threats. So something the NSA had a hard time with should be extra safe against those.
The problem is that this simple, relatively easy-to-use workflow isn’t the one advocated. Instead, GPG nerds go on about the web of trust and key-signing parties and tell people off for doing minor things wrong.
Is there a GPG workflow documented somewhere that is as easy to use as Signal with a verified key? I would love to use that.
Not that easy yet but simple enough to be made easy. Start with this:
Here’s the major steps:
Generating a key, exporting one’s own public key, importing others’ public keys, encrypting a file for a specific user whose key is in the database, and decrypting a file from that user. The front end just needs to handle those actions. For day-to-day use, the whole thing might be reduced to an open and a seal command in a text-editor plugin, with extra menu commands for generate, import, export, and backup. Alternatively, modify GPG itself to straight-up delete all the other crap, or at least the interfaces to it, and wrap the result in a GUI app with a better interface.
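That handful of actions can be sketched as plain GPG commands (the name, email address, and filenames below are placeholders; the throwaway keyring and empty passphrase are for demonstration only):

```shell
# Use a throwaway keyring so this demo doesn't touch your real one.
export GNUPGHOME="$(mktemp -d)"

# 1. Generate a key (batch mode; empty passphrase for the demo only).
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Alice <alice@example.org>' default default never

# 2. Export your public key to hand to the other party.
gpg --armor --export alice@example.org > alice.pub

# 3. Import the other party's public key (reusing alice.pub here to illustrate).
gpg --import alice.pub

# 4. Encrypt a file for a recipient whose key is in the keyring.
echo 'meet at noon' > note.txt
gpg --batch --yes --encrypt --recipient alice@example.org note.txt  # writes note.txt.gpg

# 5. Decrypt a file sent to you (plaintext goes to stdout).
gpg --batch --decrypt note.txt.gpg
```

A front end would only need to wrap these five invocations.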
The trouble is that all the boring, trivial UI stuff never gets done. Partly, I suspect, because no-one is ever paid to do it.
the guy developing gpg gets money, more than a lot of free software projects can dream of: https://en.wikipedia.org/wiki/Werner_Koch
i guess with that money a somewhat usable gui should be possible.
Remember that he got so little for so long that he was thinking of quitting. Then some emergency money was thrown at him, largely without conditions, after the press about that. So it’s not the same as a person making stuff with money coming in regularly and users expecting great UX. He can do it but is not incentivized to.
Even if he were, he’s not a UX expert; that’s something that requires a fair bit of knowledge, planning, and research, and likely a big refactor afterwards.
Good point. Most programmers, esp for crypto stuff, aren’t UX experts. Hell, we’ve been seeing “Why Johnny Can’t Use My App” papers from them for some time now.
Of course, he could hire a UX designer (or firm), but that’s quite a bit of money. Considering GPG’s status, though, someone might be willing to do it pro bono.
Usability is a well-known reason why people don’t use it. Why not take part of the money and pay another developer to build and maintain a nice GUI? If the Wikipedia article is still correct, the donations from Facebook and Stripe total $100,000/year. Even if that were reduced to $50k by taxes, it’s still a nice amount of cash in Germany: “In Germany, the average household net adjusted disposable income per capita is USD 31 925” http://www.oecdbetterlifeindex.org/topics/income/
Good point. Alternatively, as in my experience, the programmers just hack together a solution that works for them and their local audience, then don’t put in further effort to develop it into a more general solution for a wider audience. I didn’t even publish mine, since they were very, very specific to my use case.
keybase has done quite a decent job making a more user-friendly interface to GPG (CLI and GUI).
Are they still encouraging users to hand over their private keys to them? That puts it in the bad sector as far as I’m concerned; if I’m going to be trusting a central organisation I might as well just use Facebook messenger.
Good point. I loved the Keybase concept when I last looked into it. Since this topic keeps coming up, I might try out their client in the near future to see if I can offer something better than GPG cheat sheet haha.
If candidates are so highly sought-after, couldn’t they request extensions on their offers?
I realize it’s a bit risky, and the real challenge is disseminating this knowledge.
I suspect the exploding offer or vanishing signing bonus is a tactic that takes advantage of the young candidate’s lack of experience with interviewing and getting hired. They’ve just spent a bunch of money on an expensive degree; it seems foolish to risk giving up a signing bonus or an entire offer while waiting for a better offer that may not materialize.
I’m curious whether this signals that they are becoming less sought after; in other words, that companies realize they have the upper hand and can strong-arm candidates. I have no way of knowing, though.
Up-voting because the original thread and the linked rebuttal are interesting reads.
What it comes down to is that the OpenBSD developers believe that re-implementing a user-land network stack is silly because of the risk it introduces, and the rebuttal says that is woefully outdated thinking because of the high demand for specialized and dedicated user-land networking. Given OpenBSD’s philosophy and valid point about maintenance costs, I think the rebuttal is unfair. If netmap is critical for some specialized application, couldn’t one go use it on FreeBSD? Expecting distros to have the same philosophies about user- vs kernel-space, generality vs specialization, etc defeats the purpose of having multiple distros to begin with.
I switched to org-mode a while back, and love it. I used vim and a plain text file very successfully for years before that. My first stab at org-mode failed hard because I tried to use too many features. Now, my working model is similar to the plaintext file, but with handy shortcuts: top-level bullets with the date, notes and TODO items indented below. TODO items are trivially searchable with C-c a t. I also customize the TODO states. This file also acts as my engineer’s notebook. That’s it!
what’s interesting about this?
It’s a toy implementation of a simple virtual computer, which is a great tool for learning about instruction sets, registers, memory, etc.
Read this while listening to the best of Hans Zimmer for a truly inspirational read: https://www.youtube.com/watch?v=AAaUoOOUFA4
Took me a while to figure out what bothered me about this post: it makes the deployment choice for components (e.g. threads vs processes vs machines) sound almost trivial. It’s anything but. If a component is deployed in-process, perhaps using green or native threads, it’s reasonable to use a blocking, fine-grained API that communicates with native domain objects. If it’s deployed as a REST service, that means an asynchronous API, a coarser interface to balance out the additional latency, choosing a serialization format, adding more monitoring… the list goes on.
For a high-level whiteboard conversation, this assumption is fine. When building a production system, it’s not.
I agree with you that the devil is in the details, and there are a ton of details that need to be considered for us to not care whether something is in-process or not. This can be true for GC pressure, thread pool usage, memory usage, CPU consumption, context switches, practically any resource.
On the subject of programming model, I wonder if it might be possible to end up with the best of both worlds.
For fine vs coarse granularity, we can have an abstraction which automatically batches for us: consider promise pipelining (à la Cap'n Proto, or E), or Haxl’s automatic batching (the name is redundant, but I don’t have a better one). We can imagine a system where we program against a fine-grained, non-blocking interface, with the understanding that it will batch for us, planning the query as efficiently as possible.
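As a toy illustration of that idea (hypothetical names; this is not Cap'n Proto’s or Haxl’s actual API), fine-grained fetches can return futures that a scheduler later services in one coarse call:

```python
from concurrent.futures import Future

class Batcher:
    """Collects fine-grained fetches and services them in one batched call."""
    def __init__(self, batch_fetch):
        self.batch_fetch = batch_fetch  # callable: list of keys -> dict key -> value
        self.pending = {}               # key -> Future, deduplicated

    def fetch(self, key):
        """Non-blocking, fine-grained request; returns a Future."""
        if key not in self.pending:
            self.pending[key] = Future()
        return self.pending[key]

    def flush(self):
        """Issue every pending request as a single coarse call."""
        keys = list(self.pending)
        results = self.batch_fetch(keys)
        for k in keys:
            self.pending.pop(k).set_result(results[k])

# Usage: two fine-grained fetches become one round trip to the backend.
calls = []
def fake_backend(keys):
    calls.append(keys)
    return {k: k.upper() for k in keys}

b = Batcher(fake_backend)
f1, f2 = b.fetch("a"), b.fetch("b")
b.flush()
print(f1.result(), f2.result(), len(calls))  # A B 1
```

A real implementation would also plan the batched query and propagate errors, but the shape is the same: program against `fetch`, let `flush` decide the coarse call.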
With asynchronous vs synchronous, we should in theory be able to reap the benefits of synchronous execution from an asynchronous style. A sufficiently sophisticated model could figure out that doing it synchronously will reap rewards, and adjust, so that although the code is written in an asynchronous style, it’s actually executing synchronously.
With that said, it definitely depends how far you’re willing to go on the abstraction scale, and it might not be worth it.
I’m learning org mode once and for all.
I’ve maintained a text file for years now which is a combination to-do list/engineer’s notebook. Over the last year, I’ve had trouble maintaining a good structure while working on multiple projects at a time. Reducing the number of simultaneous projects is not an option, so I looked at other options including a concerted effort with Evernote. None have fit the bill so far, but I have hopes for org-mode.
Back-pressure is the name of the game when it comes to queueing. I’m interested to see how the reactive streams project turns out.
In the systems I work on, fixed-length blocking queues are prevalent. They work well in two ways: 1) when the queue is full, adding to the queue blocks the caller, providing back-pressure; and 2) depending on the type of work, a worker can drain N items from the queue to process in batch, which improves throughput.
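Both properties can be sketched with the standard-library bounded queue (illustrative only, not the actual systems described above):

```python
import queue

# Property 1: a fixed-length queue. put() blocks when the queue is full,
# which pushes back-pressure onto the producer.
q = queue.Queue(maxsize=4)

def drain_batch(q, max_items):
    """Property 2: a worker pulls up to max_items in one go to process as a batch."""
    batch = [q.get()]                     # block until at least one item arrives
    while len(batch) < max_items:
        try:
            batch.append(q.get_nowait())  # take more only if immediately available
        except queue.Empty:
            break
    return batch

for i in range(4):
    q.put(i)          # a fifth put() here would block until a consumer drains
batch = drain_batch(q, 3)
print(batch)          # [0, 1, 2]; one item remains queued
```

The batch drain trades a little latency for throughput: one lock acquisition and one wake-up can cover several work items.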
I was expecting to read about a replication bug. Turns out to be a very nice, clear explanation about a potentially confusing discrepancy between the stats reported by a redis master and slave.
I’m working on an automated deployment process, starting with packaging up Scala applications into an rpm. Currently packaging uses sbt-assembly and deployment uses scripts managed by puppet.
We use sbt-assembly to package up everything to a .deb which is deployed via Puppet. It’s a Makefile, a Debian rule file and then some .install scripts - pretty straightforward but I wish it were nicer.
I’m really hoping the sbt2nix project will make things a bit nicer:
This is from early 1978; it was before EWD had switched to writing the EWDxxx series entirely in his handwriting, so reading the HTML transcription is better than reading the original PDF.
It has a few gems in it:
some people found error messages they couldn’t ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate “the ease of programming” with the ease of making undetected mistakes.
(PHP is perhaps the modern paragon of this questionable virtue, although Perl, Forth, and assembly language have held the crown previously, and JS has attempted to contest PHP’s position.)
the modern civilized world could only emerge —for better or for worse— when Western Europe could free itself from the fetters of medieval scholasticism —a vain attempt at verbal precision!— thanks to the carefully, or at least consciously designed formal symbolisms that we owe to people like Vieta, Descartes, Leibniz, and (later) Boole.
This is a case where a one-line aside from an EWD does a better job of describing a subject than the entire Wikipedia article on it.
The importance of notation as a tool of thought was a major theme of elite computer science at the time: that was the title of Iverson’s Turing Award lecture about APL the year after this EWD, and had also been the subject of Backus’s rather worse Turing Award lecture the year before, which Dijkstra famously blasted in EWD692. Backus’s lecture and attendant research, despite its serious flaws, inspired much of the work in functional programming during the 1980s, although of course LISP and ISWIM were inspirations from 1959 and 1966, respectively. ISWIM looks a hell of a lot like modern ML.
On a related note, I’ve often noticed that our programming languages are very poorly suited for handwriting: they underutilize the spatial arrangement, ideographic symbols, text size variation, and long lines (e.g. horizontal and vertical rules, boxes, and arrows) that we can easily draw by hand, instead using textual identifiers and nested grammatical structure that can easily be rendered in ASCII (and, in the case of older languages like FORTRAN and COBOL, EBCDIC and FIELDATA too.) This makes whiteboard programming and paper pseudocoding unnecessarily cumbersome; even if you do it in Python, you end up having to scrawl out class and while and return and self. self. self. in longhand. Totally by coincidence, this morning on the bus on the way in to work, I was coding Quicksort in a paper-oriented algorithmic notation I’ve been working on, on and off, over the last few years, to solve this problem. I would include a sample here, but I don’t yet have anything digitized.
Can you elaborate on your paper-oriented algorithmic notation? Sounds interesting.
At present I’m using these conventions:
for (foo; bar; baz)
So here’s a variant of a common example which can be more or less rendered in Markdown and Unicode:
point @x @y
@r = √@̅x̅²̅+̅@̅y̅²̅
@θ = atan2 @y @x
⼻ Δx Δy
@x, @y ← @x + Δx, @y + Δy
Alternatively you could write that last method, whose name is an ideogram for “step”, this way, which is probably how I’d normally do it:
⼻ Δx Δy
@x += Δx
@y += Δy
You can see that it uses very few pen strokes compared to more ASCII-oriented notations, but without sacrificing rigor.
What do you think?
I’ve hacked up some examples with CSS and HTML. It’s pretty imperfect still, but well enough explained to criticize.
I think this is pretty neat. It also raises an interesting pedagogical question, which is when you should show this to someone if they’re learning git for the first time. I think that this is probably one of the last things you should show someone who is new to git, and that it’s important to understand what all of these things are going to do semantically before you hand someone a cheat sheet. I tried to learn git via cheat sheet style, and I had a terrible time of it until I went and actually understood what was going on.
Now, I use cheat sheets for reminding me of the syntax, and make sure I understand how it’s going to rearrange my DAG before ever actually running any commands. Things are better now.
I think showing this early would be helpful, because it shows a string of commands used in sequence. This is more useful for someone trying to get things done than the individually described commands in the man pages.
The importance of understanding what the commands actually do goes without saying. When I was learning git, I would create some temporary fake repos to reconstruct a given situation and then would run different commands until I knew how they worked.
I think showing this early would be helpful, because it shows a string of commands used in sequence.
I wouldn’t show this to anyone since it advises using git push -f to rewrite remote history. That’s a great way to break other people’s branches, lose other people’s commits, etc.
Not that it’s directly comparable pedagogically but…
Maybe somebody could find out when they’re introduced to pilots / astronauts?
At least for pilots, these are generally called “checklists” rather than “flight rules”. They also differ by aircraft: a light plane’s checklists may take up only a few pages, since such planes are so simple, but an airliner’s may run to 500 pages.
Checklists are also not just for takeoff and landing; an airliner has checklists for everything from a broken gear light to an engine failure.
Checklists are introduced to pilots as soon as they leave ground school and begin training in the plane itself. When flying, you always use checklists to ensure you didn’t forget something, especially in an emergency. They also help the pilot decide what to do; for instance, they’d say whether to lower the gear or attempt a belly landing for each different landing surface.
Not sure if it really relates to git though, since we as programmers are rarely in a situation where a checklist could be the difference between life and death.
+1 for both an RSS URL and a Twitter field.
You can create sum types in C and C++ with unions.
You need a union and a struct (to store the tag).
Development: MBP, iTerm 2, tmux, a fairly customized vim setup, an ever-present sbt console, sometimes IntelliJ with the Scala plugin; git (often via tig) for source control; GitHub’s pull requests for code review. MacPorts for package management, though I do have a few things installed with Homebrew.
Hipchat for chatting with coworkers, IRC for keeping a finger on the pulse of various open-source projects.
Gmail for work mail, fastmail for personal, jira (with greenhopper “agile” plugin) for shared task tracking, well-formatted text file as my engineer’s notebook. Confluence has documentation.
1Password for password management.
Alfred as an application launcher, clipboard manager, URL encoder/decoder, for opening terminals from a Finder window, and for simple math.
Dash for documentation, accessed via an Alfred command.
I was very happy switching to tmux. For a long time I thought screen was good enough; tmux is notably nicer.
All code created by teams must be licensed with Apache License v2.0
Awesome! Nice to see the corporate backers of this hackathon setting up this rule.