1. 1

    I’m still on Jekyll for my blog, although at this point I’ve spent a fair bit of effort hacking on the Hyde theme to suit my [lack of] tastes. As a publishing platform it’s proved pluggable enough, and while I’d be interested in discarding the Ruby ecosystem in favor of consolidating on Python, that’s not motivation enough to prioritize shaving that particular yak.

    On the themes front, I’ve seen these floating around in the past and some may tickle your fancy -

    1. 17

      I use DOOM Emacs, after a lot of deliberation. I’d used a vanilla Emacs config for the better part of four years before realizing that DOOM did everything I was already doing, and did it faster.

      My config is literate, and tangled automatically by DOOM’s doom command-line utility. You can find the entire config here if you’re interested.

      Also, if you’re planning on using any form of language server via LSP mode, I’d recommend building the Emacs 27 pre-release: its native JSON parser makes it far faster.

      1. 10

        I also migrated from my own bespoke literate Emacs configuration to DOOM. It’s feature-rich and fast. It feels like I switched from Linux From Scratch to a regular distribution.

        1. 8

          I too did the great migration from vanilla Emacs to Doom recently, and wrote about it: https://blog.jethro.dev/posts/migrating_to_doom_emacs/

          I don’t really miss my old config, but haven’t fully gotten used to the keybindings in Doom.

          1. 5

            +1 to DOOM. I switched from Vim to Emacs in maybe ‘15 for the Clojure support (CIDER is a must-have) and maintained my own config because I didn’t see anything attractive I wanted to adopt at the time. I recently decided that my homebrewed config was far too slow and a pain to maintain, and made the switch to DOOM, which has been almost entirely a win. I’ve ported a few bits forward here and there, but overall, with evil-mode turned off, I find even stock base DOOM excellent.

            1. 3

              Ok, DOOM is fast, but your config could be just as fast. I mean, I managed to pull off those startup and load times myself (within a margin of error of a few to tens of milliseconds, which is tolerable).

              People just don’t read the use-package documentation (or aren’t using it at all, which is even worse). I made my config lazy in the mathematical sense: nothing loads unless I start using it. The rest is minor tweaks and tricks, but you get the idea. With a bit of work your config could be just as fast. If you must have vim bindings then that’s fine; I’ll choose DOOM over Spacemacs any time of the day.

              1. 3

                Yeah, I’m another Doom Emacs convert. I started off working with a literate Emacs config. It was painfully slow no matter what I did. Doom starts almost as fast as vim, even faster once the daemon is running.

              1. 2

                I TA’d a class using this spec years ago - https://github.com/arrdem/batbridge/blob/master/doc/batbridge.md - and have a bunch of stale blog posts on microarchitecture and most of an assembler built targeting it as a result.

                1. 2

                  Yes. @arrdem@cybre.space. I have an account on mastodon.social but am “migrating” it.

                  Mostly a read channel to keep up with @technomancy and other folks who have understandable objections to Twitter.

                  1. 14

                    Having previously worked on a large monorepo (Twitter, 10s of GiB) I acknowledge the VCS side of this, for which @ahal has provided an excellent rebuttal.

                    For me monorepos are about coordination and ergonomics. It’s not that you can’t build out tooling to make polyrepo work. You can. But what does it buy you? Is coordinating merges actually a good idea? Is bumping versions across many repos and cutting new CI/CD releases all the time and churning bump PRs really a win over admitting that you have many coupled artifacts and managing them in a single monorepo?

                    Monorepos help provide a technical solution to organizational problems of code sharing, and make it easy to build out new projects using shared test and deploy pipelines. In my experience that can be a huge win.

                    Joe Armstrong and others have had strong words recently about premature code sharing and library-first coding styles, which lead to widespread coupling and make refactoring more difficult. But there really is no substitute for monorepo-style, large-scale, change-impact-based (integration) testing when you do have pervasive code sharing.

                    1. 2

                      Don’t you need to build out tooling to make monorepo work as well?

                      For example, you need something that analyses which tests to run for a pull request. If you just run all of them all the time your CI is quickly overloaded, isn’t it? In a polyrepo environment, you can simply run all tests for each change in a single repo.

                      1. 7

                        In a polyrepo environment, you can simply run all tests for each change in a single repo.

                        The breakages I run into time and time again are at the interfaces between the modules, which wouldn’t be tested by this strategy. So now you need to build tests that pull in all of the service dependencies of the polyrepo and run your integration tests across all of them.

                        1. 6

                          Yeah, it is frustrating to see answers that appear to ignore reality like that; by such logic, when you check in a change to a dependency you just run its unit tests and bam, done!

                        2. 5

                          For example, you need something that analyses which tests to run for a pull request. If you just run all of them all the time your CI is quickly overloaded, isn’t it?

                          Your CI can and should scale to your complexity. In my experience, this isn’t a major issue, and moreover it isn’t dependent on polyrepo or monorepo – either way you need to make sure the system as a whole is still working, not just some corner.

                          In a polyrepo environment, you can simply run all tests for each change in a single repo.

                          This assumes, I guess, 100% test coverage on the edges of each polyrepo, which is an astonishing amount of work; I would argue orders of magnitude more work than scaling your CI infrastructure.


                          Additionally, it is about where the tooling and annoyance lives, to a degree. Where you park the complexity. Parking it on the developer’s machine, requiring large checkouts, indexing tools, yada, is well-worn and understood territory. Changes are atomic and logical.

                          Moving that complexity into polyrepos requires tools that touch each repo and try to do a lot of fanciness that is far more dangerous IMHO – and when they go bad, they are catastrophic to roll back.

                          1. 4

                            You do - but if you use pants or bazel or buck, that’s off-the-shelf tooling and something you set up once. In polyrepos impact analysis remains impossible, and while yes you can “just” re-run all your tests when you bump versions, that’s far less than ideal. Is re-running all your tests, because you bumped a version because unreached code was added to a dependency, actually something you want to spend time waiting for?

                            My particular issue with this article is the author’s claim that the version bumping tooling is any more “off the shelf”. I don’t think it is at all, and for a claim like that the author really needed to bring receipts, not handwave at the existence of multi-repo refactoring tools, especially when the available monorepo build tools are so good and fairly easy to deploy.

                            Furthermore the core algorithms are pretty simple. I wrote katamari in my nights and weekends because I wanted to take a stab at a monorepo tool for Clojure, and the core build implementation is a mere 300 lines including the impact detector you need for incremental testing.

                            Where would you rather spend your time and complexity budgets? I think that spending time on an ongoing basis (coordinating merges, long release cycles, a review-heavy workflow) is a misprioritization when the complexity of stronger build tooling is fairly manageable.
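
                            To give a flavor of the size claim, here’s a minimal sketch of change-impact detection over a target dependency graph - illustrative Clojure, not katamari’s actual code; all names are made up:

                            ```clojure
                            (defn dependents
                              "Invert a target -> #{deps} map into dep -> #{dependent targets}."
                              [deps]
                              (reduce-kv (fn [acc t ds]
                                           (reduce #(update %1 %2 (fnil conj #{}) t) acc ds))
                                         {} deps))

                            (defn impacted
                              "Every target whose tests must re-run when `changed` targets change."
                              [deps changed]
                              (let [rdeps (dependents deps)]
                                (loop [frontier (set changed), seen (set changed)]
                                  (if (empty? frontier)
                                    seen
                                    ;; walk the reverse-dependency graph breadth-first
                                    (let [nxt (->> frontier (mapcat rdeps) (remove seen) set)]
                                      (recur nxt (into seen nxt)))))))

                            ;; (impacted {:app #{:lib} :cli #{:lib} :lib #{}} [:lib])
                            ;; ;=> #{:lib :app :cli}
                            ```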

                            1. 9

                              In polyrepos impact analysis remains impossible

                              I have worked in tools-oriented positions pretty much my whole career: this is absolutely true. Polyrepos are… not good… in a corporate setting.

                            2. 2

                              Absolutely - the important consideration is how hard each set of tooling is to build / setup.

                              For small organisations, you’ll typically be adopting existing small-org tooling which largely expects polyrepos.

                              Once you have thousands of engineers, you typically want in-house tooling for security reasons, and it’s much easier to build new tools against a monorepo.

                          1. 1

                            @arrdem, can I ask if you use a static site generator / template for your site? If so, which one? I really like the design…

                            1. 1

                              I do! This site (and others) are built on Jekyll. The theme I use is a copy of the Hyde theme I’ve hacked up over the last several years. The color scheme and logo were made with https://logojoy.com

                            1. 5

                              @arrdem, you’ve got an issue with your power calculations and/or units. Watts are units for measuring instantaneous power, and watt-hours measure energy, i.e. power sustained over time. So “8kw per day” doesn’t make much sense; for a sense of the right units, a constant 333 W draw for 24 hours works out to roughly 8 kWh. You might just have kW and kWh switched, I didn’t look closely at the numbers. Might be why things aren’t making sense.

                              1. 2

                                @azdle, yeah that checks out. The numbers clearly didn’t work when I tried to do the dimensional analysis this morning. I think the mistake comes down to me recalling Wh = W/h whereas Wh = W × h, which is why I went and ballparked off of PG&E’s power analytics.

                              1. 5

                                I’ve had a FANTASTIC time writing my own database, and I intend to deploy something based on it to “production” thanks very much :D

                                1. 10

                                  I’ve always liked Geany, but I found that the more I dug into a specific language in my career, the less it could keep up with what I expected an IDE to do. I also didn’t want to dive into writing plugins to make it feel more like an IDE that already exists.

                                  I’ll still prefer Geany over the Electron text editors out there like Atom and VS Code.

                                  1. 4

                                    Yeah. I used Geany for a year or two back in school and found that it was a fantastic text editor, but really didn’t offer anything beyond that. Emacs, Vim, or a real heavier-weight IDE wound up being my go-to tools due to superior integrations with the various languages I was working in.

                                    I’d still reach for Geany for some quick editing if I didn’t have my full emacs instance booted.

                                    1. 1

                                      How do you feel about Sublime? It’s also part of the “hand-coded UI” camp of text editors.

                                      1. 1

                                        I never took the time to use it. It is literally never an option in my head when I think about editors. I know people seem to really like it and that’s cool!

                                      1. 1

                                        OK, thanks!

                                        1. 1

                                          Question is why it’s taking this long to just generate a new cert with the extra SAN…

                                          1. 4

                                            No one is paid to work on Lobsters. If you know Ansible and Let’s Encrypt you should be able to help out.

                                            1. 1

                                              Well, I don’t really know how the current Let’s Encrypt cert was generated, but it’s literally just another argument. I did ask about it when it came up on IRC 3 weeks ago, but didn’t get a reply, and figured it would probably be fixed pretty quickly, so I completely forgot about it.

                                              1. 1

                                                It was manually created with certbot but, as noted in the bug, should probably be replaced with use of acmeclient to have much fewer moving parts, if nothing else.

                                                It’d be great to have someone who knows the topic well to help the issue along in any capacity, if you have the spare attention.

                                                1. 1

                                                  I’ve done entirely too much work with acmeclient to automate certs for http://conj.io and some other properties I run. Will try and find time this weekend to take a run at this.

                                                  1. 1

                                                    That, or use dehydrated: domains go in a text file, one certificate per line, with each domain separated by a space.
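
                                                    For example, a domains.txt along these lines (hostnames are placeholders; if I remember the format right, the first name on a line becomes the CN and the rest become SANs):

                                                    ```
                                                    example.org www.example.org
                                                    other.example.com
                                                    ```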

                                          1. 5

                                             battlestation(s). At work I run a company-issued MacBook using Spectacle to get as close as possible to a TWM interface. Most of my work is done in emacs and an ssh terminal. Keyboard is a Das in Cherry Blues with o-rings. Dual 24” outboard displays.

                                            At home I use a Dell XPS 15 for most of my programming. My home OS of choice is Arch Linux with Xmonad as my emacs and intellij bootloader. Nothing really fancy going on there, but it’s stable and I’ve come back to it time and time again.

                                            The Dell and my gaming rig share a 4k monitor, and there’s a USB3 switch to swap my mouse and keyboard back and forth. The desktop is a brand spanking new i7 + EVGA 1080 build which I haven’t put Linux on (yet).

                                             Synology NAS visible on the bottom shelf. It gets used mainly for backups (Borg backup is amazing), torrents, and blob storage.

                                            1. 1

                                              +1. Mechanical sympathy, algorithmic performance and tuning are some of my favorite material on here and I’d love to see a tag for them.

                                              1. 11

                                                Apologies to the author, but I find the comparisons to Haskell in this piece to be facile, and the practical examples of ways to solve the “nil problem” in Clojure to be glossing over the fact that nil remains a frustrating problem when nil is essentially a member of every single type (EDIT: although I should acknowledge that yes, the author does acknowledge this later in the piece–I just don’t find the rest compelling regardless). I don’t buy “nil-punning” just because it’s idiomatic (see the linked piece by Eric Normand); as far as I’m concerned talking about it as idiomatic is just putting lipstick on the pig. And fnil is a hack, the existence of which illuminates how Clojure didn’t get this right.

                                                That said, I’m not a lisp guy in my heart, so maybe I’m missing the subtle elegance of nil-punning somehow–but I’d love to see a Clojure without nil in it, and I think it’s telling that these articles always end up defensively comparing Clojure to Haskell.

                                                I believe that Clojure has some nice ad-hoc facilities for building composable abstractions, it’s got a lot of good stuff going for it and makes me want to shoot myself far less than any programming language I’ve used professionally to date. But as a user of the language I’ve found its type system, such as it is, to be incoherent, and I find the pervasiveness of nils to simply be a shitty part of the language that you have to suck up and deal with. Let’s not pretend that its way of handling nil is in any way comparable to Haskell (or other ML-family languages) other than being deficient.

                                                P.S. Per your first if-let example: when-let is a more concise way to express if-let where the else term is always going to evaluate to nil.
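
                                                 Concretely (ordinary Clojure; find-user and render are hypothetical):

                                                 ```clojure
                                                 ;; These two forms are equivalent; when-let drops the explicit nil branch.
                                                 (if-let [user (find-user id)]
                                                   (render user)
                                                   nil)

                                                 (when-let [user (find-user id)]
                                                   (render user))
                                                 ```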

                                                1. 11

                                                  Glad I’m not the only Clojurian who had this knee-jerk reaction. nil is not Maybe t. Maybe t I can fmap over, and pattern match out of. With f -> Maybe t at least I know that it’s a function which returns Maybe t because the type tells me so explicitly and the compiler barks at me when I get it wrong. This gives certainty and structure.

                                                  nil is the lurking uncertainty in the dark. The violation of the “predicates return booleans” contract. The creeping terror of the JVM. The unavoidable result of making everything an Object reference. I never know where nil could come from, or whether nil is the result of a masked type or key error somewhere or just missing data or… a thousand other cases. Forgot that vectors can be called as functions and passed it a keyword because your macro was wrong? Have a nil….

                                                  Even nil punning doesn’t actually make that much sense, because it mashes a whole bunch of data structures into a single ambiguous bottom/empty element. Is it an empty map? set? list? finger tree? Who knows, it’s nil!
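
                                                  A few stock REPL interactions give a flavor of that ambiguity:

                                                  ```clojure
                                                  (conj nil 1)      ;=> (1)     ; nil puns as an empty list
                                                  (assoc nil :a 1)  ;=> {:a 1}  ; ...or as an empty map
                                                  (first nil)       ;=> nil     ; and nil begets nil
                                                  ({:a 1} :b)       ;=> nil     ; a missing key? also nil
                                                  ```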

                                                  Edit: wait wait but spec will save us! Well no… not unless you’re honest about the possibility space of your code. Which spec gives you no tools to infer or analyze.

                                                  Have a nil.

                                                  </rant>

                                                  1. 4

                                                    That said, I’m not a lisp guy in my heart, so maybe I’m missing the subtle elegance of nil-punning somehow

                                                    Minor nitpick–please don’t lump all lisps in with nil-punners! Racket and Scheme have managed to avoid that particular unpleasantness, as has LFE.

                                                    1. 3

                                                      Not to mention Shen with a type system that subsumes Haskell’s…

                                                      1. 3

                                                        Even in Common Lisp, nil isn’t an inhabitant of every type. If you add (optional) type declarations saying that a function takes or returns an integer, array, function, etc., it isn’t valid to pass/return nil in those places. I think this is more of a Java-ism than a Lisp-ism in Clojure’s case.

                                                        1. 1

                                                          Isn’t nil indistinguishable from the empty list though, in Common Lisp?

                                                        2. 2

                                                          Yeah, just take it as further evidence that I’m not “a lisp guy” in that I conflated all lisps together here–sorry!

                                                          1. 1

                                                            In that case I’ll also forgive being wrong in the P.S. about if vs when.

                                                            1. 1

                                                              Oh–would you explain further then? I always use when/when-let in the way I described (EDIT: and, I should add, many Clojure developers I’ve worked with as well assume this interpretation)–am I misunderstanding that semantically, fundamentally (from a historical lisp perspective or otherwise)?

                                                              1. 4

                                                                The ancient lisp traditions dictate that when you have two macros and one of them has an implicit progn (aka do) then that one should be used to indicate side-effects.

                                                                I know not all Clojure devs agree, but when you’re arguing that you should be less concerned than CL users about side-effects it’s probably time to reconsider your stance.

                                                                1. 1

                                                                  Thanks!

                                                                  EDIT: I want to add a question/comment too–I’m a bit confused by your mention of side-effects here, as in my suggestion to use when vs. if in these cases, I’m simply taking when at face value in the sense of it returning nil when the predicate term evaluates to false. I can see an argument that having an explicit else path with a nil return value may be helpful, but I guess my take is that…well, that’s what when (and the when-let variation) is for in terms of its semantics. So I guess I’m still missing the point a bit here.

                                                                  …but nonetheless appreciate the historical note as I’m a reluctant lisper largely ignorant of most of this kind of ephemera, if you will forgive me perhaps rudely qualifying it as such.

                                                                  1. 4

                                                                    The typical Lisp convention is that you use when or unless only when the point is to execute some side-effecting code, not to return an expression, i.e. when means “if true, do-something” and unless means “if false, do-something”. This is partly because these two macros have implicit progn, meaning that you can include a sequence of statements in their body to execute, not just one expression, which only really makes sense in side-effectful code.

                                                                    So while possible to use them with just one non-side-effecting expression that’s returned, with nil implicitly returned in the other case, it’s unidiomatic to use them for that purpose, since they serve a kind of signalling purpose that the code is doing something for the effect, not for the expression’s return value.
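
                                                                    For illustration (queue, items and process are placeholder names):

                                                                    ```clojure
                                                                    ;; `when` signals side effects; its implicit do allows several body forms.
                                                                    (when (seq @queue)
                                                                      (println "flushing" (count @queue)) ; side effect
                                                                      (reset! queue []))                  ; side effect

                                                                    ;; For a pure expression, an explicit `if` keeps the nil branch visible.
                                                                    (if (seq items)
                                                                      (process items)
                                                                      nil)
                                                                    ```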

                                                                    1. 3

                                                                      Thanks mjn, that helps me understand technomancy’s point better.

                                                                      Fully digressing now but: more generally I guess I’m not sure how to integrate these tidbits into my Clojure usage. I feel like I’ve ended up developing practices that are disconnected from any Lisp context, the community is a mix of folks who have more or less Lisp experience and the language itself is a mix of ML and Lisp and other influences.

                                                                      In any case, thank you and technomancy for this digression, I definitely learned something.

                                                        3. 3

                                                          Oh I don’t think this solves the nil problem completely, but it’s as good as you can get in Clojure. If you’re coming from Java and have little or no experience in an ML-ish language, then the conceptual jump from null to Nothing is a bit difficult, so this article was primarily written for those people (beginner/intermediate Clojurists).

                                                          Also you have to admit that, beyond Turing completeness, language features are an aesthetic. We can discuss the pros and cons of those features, but there is no One Right Way. A plurality of languages is a good thing - if there were a programming language monoculture we would have nothing to talk about at conferences and on internet forums :)

                                                          1. 6

                                                            Also you have to admit that, beyond Turing completeness, language features are an aesthetic. We can discuss the pros and cons of those features, but there is no One Right Way.

                                                            It sounds to me like you are saying that all ideas about PL are of equal worth, which I disagree with. One can have a technical discussion about PL without it ultimately boiling down to “that’s just, like, your opinion man”.

                                                            1. 5

                                                              That’s not at all what I’m saying. It’s like in art, you can say that the Renaissance period is more important than the Impressionist period in art history, but that doesn’t mean Renaissance art is the “correct” way to do art. We can also have a technical discussion of artworks, but we weigh the strengths and weaknesses of the artworks against the elements and principles of art and how skillfully they are composed, we don’t say one artwork is correct and the other is incorrect. This is the basics of any aesthetic analysis.

                                                              Likewise in programming language design, we can discuss the technical merits of features, e.g. yeah Haskell has an awesome type system and I really enjoy writing Haskell, but that doesn’t mean Haskell is categorically better than Clojure. When you have a ton of legacy Java to integrate with (as I do), Clojure makes a lot of sense to use.

                                                              1. 3

                                                                PL criticism is done on utilitarian grounds, not aesthetic grounds. You acknowledge as much in your second paragraph when you give your reason for using Clojure. I guess you can look at PL as art if you want to but I don’t like that mode of analysis being conflated with what researchers in the field are doing.

                                                                1. 2

                                                                  PL criticism is done on utilitarian grounds, not aesthetic grounds.

                                                                  Why not both?

                                                                  1. 1

                                                                    When you’re trying to figure out which language is better, it’s not a question of aesthetics.

                                                                    Though to be fair, “better” is really difficult to nail down, especially when people end up arguing from aesthetics.

                                                        1. 2

                                                          Wow, CMake is taking a lot of heat here. Admittedly I’m not doing anything particularly interesting with it, but CMake has served just fine as the build system for one of my toy projects - dirt - dirt/CMakeLists.txt. Granted, I mostly interact with it via a bunch of bash - dirt/dirt - and I’m only targeting a single platform, but that beats figuring out the pattern language required to use core Make, or figuring out Autotools, which didn’t seem particularly user-friendly when I looked at it briefly.

                                                          1. 5

                                                            Maintaining a stack by yourself in code seems rather unfortunate (look at how much longer the code is than the generator version), and very much points to a language deficiency. There’s no excuse for a function call being slower than a list access, and tail call support would remove the stack depth issue.

                                                            1. 6

                                                              Tail call support wouldn’t get rid of the need for a stack here, because this function calls itself more than once - i.e., it isn’t tail recursive.

                                                              I would add the code from the article with comments showing where it is not tail recursive, but pre-formatted text is not working for me at the moment.

                                                              1. 2

                                                                You’re right. I find it hard to believe that stack depth would be an issue at least for a balanced tree, but again if it is Python really should offer a way to have deeper stacks in the language rather than have users end up writing their own stack emulation.

                                                                1. 5

                                                                  So this precise issue was the root of a performance problem in the clojure.tools.emitter.jvm project which I used to work on. Essentially the code generator worked by emitting OpcodeList = List[Either[OpcodeList, Opcode]]. The resulting structure was then traversed one opcode at a time, which meant that you had to recursively call next() on arbitrarily many lazy generator terms in order to get the next opcode you wanted to emit.

                                                                  Using the stack of iterators pattern described here becomes an optimization then, because you elide n-1 calls to next() which have to walk back down the stack of iterators to the bottom-most iterator in order to get the next actual opcode you want. Because you’re traversing back up and down many, many nested stateful iterators, there isn’t a way to optimize this recursion away with TCO - you still have to go down n-1 iterator structures to figure out what buffer you’re actually taking the next() of.

                                                                  In comparison with the stack of iterators pattern, your next() just needs to peek the top of the stack, take the next from that, and recurs only if it’s another iterator structure. This means you traverse down the entire depth of the tree precisely once, rather than doing so once for every leaf of the tree.
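
                                                                    Roughly, in Clojure (an illustrative sketch, not the emitter’s real code; nested opcode lists are plain vectors here):

                                                                    ```clojure
                                                                    ;; Flatten a tree of opcodes using an explicit stack of "iterators"
                                                                    ;; (seqs), visiting each level of the tree exactly once.
                                                                    (defn flatten-opcodes [tree]
                                                                      (loop [stack [(seq tree)]
                                                                             acc   []]
                                                                        (if (empty? stack)
                                                                          acc
                                                                          (let [s (peek stack)]
                                                                            (cond
                                                                              ;; this level is exhausted - pop back to the parent
                                                                              (empty? s) (recur (pop stack) acc)
                                                                              ;; a nested opcode list - push its seq onto the stack
                                                                              (sequential? (first s)) (recur (-> stack
                                                                                                                 pop
                                                                                                                 (conj (rest s))
                                                                                                                 (conj (seq (first s))))
                                                                                                             acc)
                                                                              ;; a plain opcode - emit it
                                                                              :else (recur (conj (pop stack) (rest s))
                                                                                           (conj acc (first s))))))))

                                                                    ;; (flatten-opcodes [:a [:b [:c]] :d]) ;=> [:a :b :c :d]
                                                                    ```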

                                                                  1. 5

                                                                    there isn’t a way to optimize this recursion ala with TCO because you still have to go down n-1 iterator structures to figure out what buffer you’re actually taking the next() of

                                                                    The mutable variable in the example stack-of-iterators code, or the one you’d get from a clojure loop/recur, is what provides that benefit. In fact, Clojure performs this optimization for nested LazySeq objects: https://github.com/clojure/clojure/blob/e547d35bb796051bb2cbf07bbca6ee67c5bc022f/src/jvm/clojure/lang/LazySeq.java#L58 - Note that it flattens lazy seq thunks recursively and elides that work on future .seq() calls via the synchronized mutable field.

                                                                      Sadly, the optimization is lost if you’re making your own nested structures because you have the vector -> seq -> vector loop going on. You could recover it with a mutable variable for the top element on the stack. That’s precisely what TCE would provide as long as you could ensure that the recursive call was actually in tail position. Sadly, the ‘next’ interface is not ideal for that. You need something that returns both the next item and the continuation. So instead of (next seq) -> seq, you’d want (uncons seq) -> [head tail]

                                                                    In the case of tools emitter, you could probably also have achieved this effect without mutability via an automatic splice-on-construction collection type. Instead of [:blah … [:foo …] …. :bar] you’d have (spliced :blah … (spliced :foo …) …. :bar) and pay that cost on construction.

                                                                    1. 1

                                                                      Is there a standard name for a list uncons that returns an option/maybe? It’s a function I’ve often found myself wanting.

                                                                      1. 1

                                                                        Pattern matching and related constructs generally eliminate the need to name it. However, I’d just call it uncons.

                                                                        In Clojure, you can write (if-some [[head & tail] seq] …), but if I recall correctly, the underlying implementation still always uses first/next. Haskell fails better here with the colon/cons syntax in patterns, but trades a sequence abstraction for a concrete list type (at least without various extensions).
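
                                                                          A minimal Clojure sketch of such an uncons (the name is just a suggestion, as above):

                                                                          ```clojure
                                                                          ;; Return nil for an empty collection, else a [head tail] pair.
                                                                          (defn uncons [coll]
                                                                            (when-let [s (seq coll)]
                                                                              [(first s) (rest s)]))

                                                                          (uncons [1 2 3]) ;=> [1 (2 3)]
                                                                          (uncons [])      ;=> nil
                                                                          ```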

                                                                        1. 1

                                                                          I’m thinking Scala here, and I’d prefer to avoid the match/case syntax entirely if possible since it’s very easy to use unsafely.

                                                            1. 15

                                                              Okay so this is a bit of a rant but hear me out.

                                                              I’ve been writing primarily Clojure for several years now. In the Clojure universe there are some de-facto conventions for the significance of very brief variable names. For instance m denotes a map, and coll or seq a list type or other sequence source. This works out pretty well for all the reasons that John outlines here - my programs in Clojure are written against an interface which is conventionally named by a truncated name, not a concrete type like clojure.lang.AFunction or java.util.Map. The single-letter variable names wind up being bindings for existential types… that is, m is the sign by which I reference <T instanceof Associative, Iterable>.
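
                                                              For instance (a contrived helper; the names carry the conventions, nothing more):

                                                              ```clojure
                                                              ;; m is any associative thing, k a key, f a function, default a value:
                                                              (defn update-or [m k f default]
                                                                (assoc m k (f (get m k default))))

                                                              (update-or {:a 1} :a inc 0) ;=> {:a 2}
                                                              (update-or {}     :a inc 0) ;=> {:a 1}
                                                              ```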

                                                              At my day job, I work on an almost all-Python team and have now repeatedly gotten into code review scuffles with my coworkers over my choice of variable names. Because I approach programming from this interface-oriented perspective, the most meaningful names are short ones, much in the same sense that John is after here, which convey only the interfaces or other features I need, rather than the longer names preferred by my team, which seek to convey the entire context of the original use of the code I write.

                                                              The starkest contrast is, I think, in the use of classes and named class fields as opposed to functions. Functions are composable, single-purpose things which you can reason about in isolation, because all the relevant context is in the call stack and application. Class members, especially in a context without strong static types, are forced to use naming to fully distinguish the context and meaning of the value(s) they contain, which leads to long and task-specific names in order to convey all the relevant context.

                                                              So yes, I absolutely agree that long variable names are primarily used as a crutch in lieu of types, or due to high cyclomatic complexity and consequent low reusability.

                                                              1. 9

                                                                IBM model M, dvorak. Nothing special.

                                                                1. 7

                                                                  IBM model M, dvorak.

                                                                    The most hipster of combinations.

                                                                  1. 3

                                                                    I have a Unicomp model M-like keyboard, but with a dvorak layout. Love it.

                                                                      But… my favorite has to be a TypeMatrix 2030 USB, with a black US Dvorak sleeve (the image below is a UK layout, but it doesn’t differ that much). It took less time to get used to than I thought, even with the Enter key in the center of the keyboard, and for me this thing just wins hands down.

                                                                    http://www.typematrix.com/shop/images/products/2030-skin-045-b-uk-dvorak-860x360.png

                                                                    1. 2

                                                                      Pfft–not even Colemak or Norman? Everyone has heard of Dvorak these days.

                                                                    2. 3

                                                                      Well at least @God has good taste in keyboards too.

                                                                      I use Das Keyboard (a Model M derivative) at work and at home. Cherry Blue switches, o-rings to limit bottom-out impact and noise. I’m now on my 4th Das and have no intent to switch it up anytime soon. On the go I have a Leopold II (also Cherry Blues and o-rings). At home and mobile I use a Grifiti wrist rest, workspace allowing. Only downside is that the wrist rests get seriously chewed up by watch clasps.

                                                                      1. 4

                                                                        You’re on your 4th Das? What are you doing to the poor things? :-)

                                                                        1. 4

                                                                          Fighting zombies and coworkers who break the build ;)

                                                                          Edit: a conjugation

                                                                      2. 3

                                                                        I used a HHKB Pro2 for a while, as well as the Lite version, and now I’m back to my Model M. Still my favourite keyboard of all time. And at this point, I’ve lost count of how many I’ve tried.

                                                                        Oh, and the main reason I’m back to using it is my office is now sufficiently far away from the bedroom so my wife can actually sleep when I’m typing.

                                                                        1. 3

                                                                          IBM model M, ps2->usb adapter

                                                                        Arguably the best keyboard ever made, and still available dirt cheap

                                                                        1. 4

                                                                          macOS Sierra’s native full screen apps/Spaces & Spectacle for basic window management on non-fullscreen spaces.

                                                                          1. 1

                                                                          Yeah, +1 to Spectacle. On my Linux machines at home I run xmonad + xmobar after switching from Awesome, and when I got my work MacBook the lack of tiling support drove me absolutely nuts. Spectacle isn’t a full replacement by any means, but it beats the heck out of the built-in window manager.

                                                                          1. 4

                                                                            Now to port Genera to it?

                                                                            1. 6

                                                                            I posted a PreScheme link in the page’s comment section in case that helps the author out. It was a C replacement used to build the verified Scheme48. I figure eventually a LISPer in systems programming or embedded will find something about it useful, too. :)

                                                                              1. 2

                                                                                Ooh nice! I hadn’t heard of this before!

                                                                                1. 3

                                                                                Might as well link it here too:

                                                                                  https://en.wikipedia.org/wiki/PreScheme

                                                                                The papers are pretty detailed, each covering interesting or useful things - a consistency of usefulness that’s rare in academia. ;) Another you might like to look up is Carp, a Lisp which has Rust-like memory safety instead of GC. IIRC, that is.

                                                                              2. 4

                                                                              Cool as that would be, there’s a major snag here. I haven’t looked at the uLisp implementation, but the AVRs are a Harvard architecture. That is, they have two memory banks - one for program instructions and one for data. Projects like uLisp and the various Forth interpreters targeting Arduinos are forced to forgo compilation or JIT, and any real hope of self-hosting a la Genera.

                                                                                I was looking at writing a lisp OS for my Arduino fleet a while back and came to the conclusion that you pretty much just wanted to use a different chip both for memory model and performance reasons.