1. 13

    I am going through and rewriting go programs in rust as they have a much more expressive type system along with scheme style macros…

    edit: EVERYTHING AFTER THIS IS OFFTOPIC: but whooo boy am I considering just going back to C++. The compile times are astronomical, which make me not want to compile or run my test programs.

    Also, I haven’t figured out why I struggle with the sheer volume of vanity types (aka type Bar = GenericArray). My current struggle is with the AES crypto libraries using GenericArray with a Nonce type, but somehow I can’t figure out how to write a function that returns a correctly typed Nonce! And the compiler keeps suggesting types that are longer than 200 chars at this point. I just figured out strings, and now arrays, vectors, and slices of things are a fresh source of confusion.
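
    (The closest I’ve gotten is a toy version of the pattern with a plain fixed-size array, which does work; it’s the GenericArray/typenum version of this that still defeats me:)

    ```rust
    // Toy stand-in for the crate's `Nonce` alias; in aes-gcm the alias is a
    // `GenericArray<u8, U12>`, but a fixed-size array plays the same role here.
    type Nonce = [u8; 12];

    // Returning the alias by value keeps the length in the type, so callers
    // can't hand a wrong-sized buffer to the cipher.
    fn make_nonce(counter: u64) -> Nonce {
        let mut n: Nonce = [0u8; 12];
        n[..8].copy_from_slice(&counter.to_be_bytes());
        n
    }

    fn main() {
        let n = make_nonce(1);
        assert_eq!(n.len(), 12);
        assert_eq!(n[7], 1); // big-endian counter: low byte lands at index 7
    }
    ```

    With the actual crate, the analogous move (as far as I can tell) is constructing through the alias, e.g. Nonce::from_slice on a 12-byte slice, rather than spelling out the 200-char type the compiler suggests.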

    Finally, rust documentation organization is completely baffling for me to navigate right now. It’s very hard to figure out “given type A, what are all the methods it has?”

    Note: I complain because I’m excited and want to use rust, and the few programs I have written to parse my org files have ended up with easy to reason code and only slightly slower than my old C code, and 3x faster than my Go code.

    1. 11

      You should look at Zig. It’s beautiful, though not yet ready for prime time.

      1. 10

        Finally, rust documentation organization is completely baffling for me to navigate right now. It’s very hard to figure out “given type A, what are all the methods it has?”

        I find the official book equally annoying. I think it’s been revised but back when I started learning Rust a few years ago it was incredibly frustrating.

        Virtually every section consists of 1-5 “we could do it like this, but this thankfully won’t compile because it might have this bug” examples, followed by the correct program listing. Just tell me how to do it already. Every compiler rejects an infinite number of programs (okay, maybe except for Perl), I don’t need to look at every one of them.

        If you want to illustrate how the compiler usefully rejects potentially buggy programs – a great idea! – list the rules and explain them after you’ve explained the correct approach, so that at least I have a working reference to compare the rules to. And then by all means list a bunch of examples, after you’ve explained the logic that the runtime or the compiler employs. Don’t make me try to figure out what’s wrong with a program written in a language that I don’t know yet, and don’t bury the useful, two-sentence description of the compiler’s logic in three pages of broken sample code and snark about C++.

        1. 6

          Try Ada 2012. It compiles to native binaries quickly and feels like a Pascal version of a safer and simpler C++; there’s a lot of underlying conceptual familiarity, so you’ll feel at home. The type system is also ludicrously powerful. I’m rewriting some tooling in it and it’s pretty easy since there’s a lot of functionality between the standard library and GNATcoll.

          1. 4

            I’ve been thinking a bit about trying to make a language that takes the best bits of Ada SPARK (nice integration with Why3 for great SMT reasoning, the ability to “turn off” language features, nice bare-metal ergonomics, etc…), building off of the work around stacked borrows applied to Rust, and generally trying to lean more into linear + temporal + separation logic. Ada and Rust have shown that some amazing things are possible. This effort might “collapse” into tooling around Rust that tries to prove contracts specified in comments, plus additional clippy lints that disable more features, as well as those in dependencies.


              linear + temporal + separation logic

              I’ve dived heavily into the Ada 2012 side, but not too much into SPARK, and there are a lot of things people aren’t familiar with in regular Ada. You can force stateless behavior of a package using with Pure, letting you safely write your code with a compiler-enforced “functional core, imperative shell”, and mutable behavior is contained by parameters being readonly unless marked as out. On the temporal side, tasks handle active concurrent and coordinating behavior, while protected types handle passive concurrent behavior (mutual exclusion). For separation, Ada uses storage pools tied to individual access types, and you can create multiple access types for the same type, each with its own pool. I haven’t played much with this yet, but access types are typed and you can’t cross access types from different pools, so I’m curious how far people could push this.


                You might like to take a look at F*. It looks quite like F# (i.e. ML), but integrates with Z3 and allows you to express programs and proofs in the same language. The output is C code and so it can be deployed on any system with a C compiler.

              2. 2

                Someone mentioned Ada 2012 here the other day and on their recommendation I bought the Ada 2012 book and started playing around with it.

                I like what I’ve seen of the language (I remember working with Ada 83 in college for some reason, a PLT class or something, but I didn’t see enough of it to say I “know” it or anything).

                The problem, of course, is the rather slim ecosystem and, more troubling to me, tooling that seems a little wonky. Maybe I’m not doing it right, but getting things working on my Mac wasn’t nearly as easy as with Rust or Go or Zig. I am 100% sure I have no idea what I’m doing, so correct me if I’m wrong.

                1. 4

                  I haven’t tried Mac, but Linux and Windows have worked well for me. Grab GNAT Community Edition; most of what you need is built in if you look under project “dependencies”, but there’s a project called Alire, which aims to be a package manager. There’s a silly amount of stuff in those bundled libraries already: JSON parsing, sockets, regular expressions, containers and such, which has made me pick it for projects I would have done in C++. I had to rebind a bunch of keybindings in the IDE to be sane for me, and you have to treat Tab as “I trust my IDE to indent” and use Pretty Print (I rebound it; if I get time I’ll see if I can contribute “format on save” or something back to the IDE) to just fix things up for you.

                  It was slow getting started, but my velocity is steadily going up rather than falling off. If this weren’t true, I wouldn’t be recommending it.

              3. 7

                am I considering just going back to C++. The compile times are astronomical, which make me not want to compile or run my test programs.

                That’s strange; Rust shouldn’t compile significantly slower than C++ most of the time (*). Have you tried figuring out why it’s so slow in your case? cargo build -Ztimings and cargo-llvm-lines are helpful tools here.

                Less specifically: I personally found that writing simple, “C with classes” Rust makes the language much more enjoyable. A rule of thumb is avoiding dependencies that require using traits or macros, and especially avoiding anything that visibly stretches the language to its limits (HLists and GenericArray are big red flags). And yeah, in this case the advice clashes with “don’t roll your own crypto”, which obviously is more important.

                (*) the case where Rust does compile more slowly than C++ is when you build an enormous project on a machine with a huge number of cores — a typical Rust build is not as parallel as a typical C++ build.

                1. 4

                  I’d be super interested in a comprehensive tutorial/guide on “How to quickly write slow but barely passable Rust”, aimed at complete newbies who didn’t read/internalize The Rust Book. I.e. something that would give me a minimal subset of Rust that I could use to write any Turing-complete stuff, yet not in the most efficient or prettiest way possible. E.g. maybe using RefCells (or whatever they’re called) everywhere, to mimic a GC-like attitude. I’ve heard the advice “just use RefCells for everything if you’re a beginner and don’t care about performance”; the problem is I tried, but I don’t really have the slightest idea how to do that :( Currently, I’d have to first learn the whole of Rust (which I have repeatedly failed to do) to then understand what and how I can drop for less efficient coding. I’d love a guide teaching me “pidgin Rust”, enough of it that I could be dangerous and scorned with disgust by Rust gurus, yet able to get the job done, in Rust (and then slowly start advancing into more fancy stuff).


                    You can use Rust while mostly ignoring generics, macros, overloading, and fancy zero-copy code, but I don’t think you’ll get far without internalizing ownership.

                    Rc<RefCell> is not a reasonable GC replacement. It’s fine if you want a graph structure, but spamming it everywhere is not going to make Rust a Python. You’ll be just as likely to get stuck fighting with temporaries returned by borrow().
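
                    To make the borrow() point concrete, a minimal sketch (plain std; the names are my own):

                    ```rust
                    use std::cell::RefCell;
                    use std::rc::Rc;

                    // Two owning handles to one mutable Vec: the pattern the
                    // "just RefCell everything" advice produces.
                    fn shared_push_demo() -> usize {
                        let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
                        let alias = Rc::clone(&shared); // both handles point at the same Vec
                        alias.borrow_mut().push(4);

                        // borrow() returns a guard, not a plain reference. Holding that
                        // guard while calling borrow_mut() on another handle panics at
                        // *runtime*: the bugs the borrow checker normally catches
                        // statically come back as panics.
                        shared.borrow().len() // fine: the guard dies at the end of the expression
                    }

                    fn main() {
                        assert_eq!(shared_push_demo(), 4);
                    }
                    ```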

                    I think that properly knowing the difference between String and &str (and the like) is a fundamental requirement in Rust, and it’s better to learn it properly than invent workarounds.
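
                    The whole distinction fits in a few lines:

                    ```rust
                    // String owns heap data and can grow; &str is a borrowed view into
                    // string data owned elsewhere (a String, a literal, ...).
                    fn shout(s: &str) -> String {
                        // Taking &str lets callers pass a literal, a &String, or a slice.
                        s.to_uppercase()
                    }

                    fn main() {
                        let owned: String = String::from("hello");
                        assert_eq!(shout(&owned), "HELLO"); // &String coerces to &str
                        assert_eq!(shout("hi"), "HI");      // literals are &'static str
                    }
                    ```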

                    If you need a compiled language to get something done quickly, then golang is fine.

                    1. 3

                      This is something I am unsure about. I too often hear advice about Rc<RefCell>-ing your way out of borrow checker fights. I think this is bad advice, but this is just my own reasoning rather than an empirical observation of many different people learning Rust. To me, interior mutability is one of the hardest aspects of the language, and, while it can sometimes be used to mimic some GC-language patterns, it brings a lot of accidental complexity. It’s much harder than doing the Rust thing with owned values and references.

                      To me, this advice seems a bit like “you can embed imperative language in Haskell via state monad, so, to learn haskell, just use state monad”.

                      My own take on learning Rust is that one really needs to sit down and learn the inner logic of the language. I’m not sure what the best way to do this is, though; I’d say:


                        I must check out the Programming Rust book (I assume you mean O’Reilly) - the ToC looks solid; thanks for the recommendation!

                  2. 1

                    I think the issues you mention about rust are things you’ll struggle with a lot less as you gain experience. Compile times are a cultural issue in the ecosystem, but you can write code that compiles without spending minutes and minutes waiting on deps. “Experts write baby code”, and when you learn which features of rust you don’t need at all, the whole language feels a lot nicer - until you have to read or use other people’s rust lol…

                  1. 11

                    There is also this less specialized tool:

                    column --table /etc/fstab
                    1. 5

                      But that also spreads the fields in the comments out, right?

                      1. 3

                        Ah, good point!

                        Maybe worth mentioning. I was kind of wondering what I was forgetting. I think I removed the comments I had because of this.

                        Congratulations on the new tool!

                        1. 2
                           gawk '/^#/ { print $0 } !/^#/ { print $0 | "column --table" }' < /etc/fstab

                          This will still group your comments up above your fstab entries, so I think this specific tool (or really any real programming language) will be required to do something like this, as you’d need to track where lines started and update the table width.
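
                          As a sketch of what that would look like (a hypothetical helper of mine, not any existing tool): compute column widths over the data lines only, then pad, leaving comment and blank lines untouched:

                          ```rust
                          // Align whitespace-separated fields of the non-comment lines
                          // while leaving comment and blank lines exactly where they are.
                          fn align_preserving_comments(input: &str) -> String {
                              let lines: Vec<&str> = input.lines().collect();
                              // First pass: column widths, measured over data lines only.
                              let mut widths: Vec<usize> = Vec::new();
                              for line in &lines {
                                  if line.starts_with('#') || line.trim().is_empty() {
                                      continue;
                                  }
                                  for (i, field) in line.split_whitespace().enumerate() {
                                      if widths.len() <= i {
                                          widths.push(0);
                                      }
                                      widths[i] = widths[i].max(field.len());
                                  }
                              }
                              // Second pass: emit, padding only the data lines.
                              let mut out = String::new();
                              for line in lines {
                                  if line.starts_with('#') || line.trim().is_empty() {
                                      out.push_str(line);
                                  } else {
                                      let padded: Vec<String> = line
                                          .split_whitespace()
                                          .enumerate()
                                          .map(|(i, f)| format!("{:<width$}", f, width = widths[i]))
                                          .collect();
                                      out.push_str(padded.join(" ").trim_end());
                                  }
                                  out.push('\n');
                              }
                              out
                          }

                          fn main() {
                              let fstab = "# <file system> <dir> <type>\nUUID=1815-DD5D /boot vfat\n/dev/sda2 / xfs\n";
                              print!("{}", align_preserving_comments(fstab));
                          }
                          ```

                          Comments keep their original positions and stay unpadded, which is exactly what column --table can’t do here.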

                          1. 3

                            This picked my curiosity, could it be solved using tools installed by default on my system?

                            I managed to get this, but probably a shorter solution without perl is possible, if one is willing to write some loops in bash:

                            $ perl -wnle 'BEGIN{open($fmt, "<&", 5) or die("cannot");}; print, next if /^#/; next if /^\s*$/; $l=<$fmt>; chomp $l; print $l;' fstab 5< <(grep -v '^#' fstab|column -t)
                            # Static information about the filesystems.
                            # See fstab(5) for details.
                            # <file system> <dir> <type> <options> <dump> <pass>
                            # /dev/nvme0n1p2 LABEL=root
                            UUID=2bb3c21b-dc8f-401e-991b-66afd7301cb7  /      xfs   rw,relatime,inode64,logbufs=8,logbsize=32k,noquota                                                         0  1
                            # /dev/nvme0n1p1 LABEL=boot
                            UUID=1815-DD5D                             /boot  vfat  rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro  0  2
                            1. 7

                              picked my curiosity

                              FYI, it’s piqued – the more you know!

                        2. 1

                          Shouldn’t it? I think it makes more sense for the field headers to be spread out too, indicating their associated columns directly. That’s how I format my fstab when I do it manually.

                          1. 2

                            That could actually be nice, but surely you don’t wish to spread out the rest of the comments?

                      1. 4

                        C’mon, these are great books even for the entry level grunt.

                        And I struggle with “An Elegant Puzzle,” it is not a book I would recommend.

                        1. 1

                          That’s fair. Wasn’t sure if the “staff engineering” label made sense, but I went with it because

                          • These books generally aren’t about actually coding
                          • When I was getting started, the most useful books I found were the specifics of writing code (for Rails, HTML, CSS, etc)

                          But agreed that these are probably great choices for anybody. Updated some wording in the article to account for this.

                        1. 5

                          I’ve taken a slightly different tack: how can I take notes and share them as a typed document the fastest?

                          • A paper notepad with a pen/pencil, and a mobile phone with a scanning function.
                          • Eink tablet e.g. remarkable
                          • Color tablet e.g. apple ipad

                          The paper notepad option has been by FAR the most successful, after trying all of them. Especially recently with coronavirus WFH options at my work, I’ve been taking handwritten notes during meetings. I save them and share them through a Google Doc for work. It takes seconds to go from written note -> emailed text using the Google Drive scanning function on my phone. The writing experience is incredible (because it’s writing).

                          With the Eink tablet, I just spent time fiddling: opening the “right” template, getting the cloud syncing working the way I like, re-uploading to Google Drive for sharing (which I now need to open the phone app to do anyway?), then again transcribing to a document. The writing experience was very good, though I don’t like how slow eink is for flipping between pages of notes.

                          The color tablet was a complete shit show, even though it had the most superior software. While flipping pages etc was faster, the writing experience was so bad I almost never used it.

                          Only caveat of the eink tablet - annotating someone else’s typed paper is much better as the root document isn’t part of your scanned notes. This is actually very rare for me as all collaboration on documents occurs within google docs for work, so the hand written notes need to be re-transcribed if I go this route.

                          1. 2

                            I definitely agree with the color tablet analysis. From my brief usage of an iPad as a note taking device, it was absolutely terrible – nothing like writing by hand.

                            I spent the first half of this school year taking notes by hand and then scanning them (and then the reMarkable arrived), so I’m not a stranger to the workflow. It seems like you do a lot of work in Google’s suite, so our workflow differs there – I mostly just write and submit, which the rM is quite good at, rather than working collaboratively.

                            Ultimately, it’s whatever works best for you – I’ve definitely thought about getting one of the Evernote books that has enhanced scanning, but I’m definitely going to be a reMarkable guy for the foreseeable future.

                          1. 5

                            A lot of these examples work well when the code fits in ~20 lines. In almost all code I write for work, functions are 100s of lines easily, for ~reasons~. They start with error handling and bounds checks - which this article doesn’t address.

                            Once you’re up to 100s of lines, I would get very pissed off if a coworker did not use early returns to rule out boring edge cases in the main function. The example sentence:

                            If you don’t get home early ignore everything I’d say after this sentence. Bring me a coffee.

                            Which seems weird, but imagine if instead it was:

                            If you don’t use our service, you should go back and read the front page! Anyways, <insert entire description of ToS>.

                            1. 4

                              “Stop me if you have heard this one…”

                            1. 5

                              I can’t recall who, but one of our fellow crustaceans made an offhand comment the other day that, when you use shell buffers in Emacs, it frees you from being tied to command completion/whatever in your shell itself and makes using alternative shells that much easier. The example they gave was the Tenth Edition/Plan 9 rc shell being fantastic to use in Emacs because of this.

                              I love rc but I never got the hang of Emacs.

                              Interestingly, the Amiga (I know, here I go again), put shells in “console windows”. Such windows were handled by the CON: or NEWCON: driver. There were alternative drivers available, like KingCON: that made things more like Emacs as well. I’ll have to dig up some of the documentation. It was neat.

                              1. 2

                                That was me because I’m a complete zealot about rc shell & emacs.

                                Your comment inspired me to figure out how to explain my setup better; I’m working on it.

                              1. 3

                                cscope still works well for me - I find it very easy to use, and I’ve used it on very large codebases. The indexing time is not significant.

                                I do not use completion frameworks, but that’s a personal choice.

                                1. 5

                                  I’ve been learning guitar for the past year and WFH has super-charged it. It’s been really nice developing a hobby that doesn’t require a computer. And it’s fun when I’m waiting on someone or some task at work to be able to pick up the guitar just play it for a little bit.

                                  1. 2

                                    I bought a guitar for covid! It’s been a lot of fun, I used to play music every day for a couple hours - but when I graduated college the opportunities for marching band & symphonic trombone were few and far between.

                                    Guitar is so accessible, and being able to play more popular songs from my childhood has really re-ignited my musical side.

                                  1. 11

                                    It’s amazing to go back and read the old Bell Labs papers on Alef, Dis, Limbo, Inferno, Plan 9, the Squeak and Newsqueak languages… and just see how much they prefigured Go.

                                    (And then you go back and read about Oberon and Oberon-2 and then look at Bell Labs papers on Help and Acme and then Go and see how much Oberon inspired them too.)

                                    (And there’s even a hint…one might even say an iota…of APL in Go.)

                                    Go is the product of a long, long history in computer science, dating back 30+ years.

                                    1. 8

                                      To some degree, I agree.. but also this is because programming is a social task, and the same group of folks worked on all of these technologies, including Go. Even ken was involved in Go 1.

                                      1. 2

                                        To some degree, I agree.. but also this is because programming is a social task, and the same group of folks worked on all of these technologies, including Go. Even ken was involved in Go 1.

                                        right! And their experience includes ideas that they had in the past, that they… reapplied. I don’t see how the point you’re making is separate from the OP :)

                                        1. 3

                                          Saying something has a “long long history in computer science” implies it’s something across the entire field.

                                          ken and r are not the entire field. Yes their ideas have lasted longer than many trends - but it’s because they personally worked on those ideas and shipped products. It’s not like piles of others took up the idea post plan9 and wrote Go. The authors of plan 9 went and worked on Go.

                                          It’s basically an observation that ken’s career has lasted more than 30 years. Yes, it has. You wouldn’t say The Irishman is the product of a long long history in film because it related to Taxi Driver or Goodfellas. You’d say Scorsese has had a long career, and his work obviously relates to itself.

                                          1. 3

                                            Saying something has a “long long history in computer science” implies it’s something across the entire field.

                                            Saying something has a history within a field doesn’t imply that it’s the entire field, any more than me saying a road exists in London means that road is London. Your examples are pretty inaccurate; it’s more accurate to say “Noir has a long history in film”, and, excusing my complete lack of film knowledge, my guess is that because certain directors will go for certain visual styles, you will end up with the same-ish directors working on noir-related media. Likewise you can see that within art, cubism has a long history. It will be mostly the same people, indeed! That does not imply that the entirety of cubism is relegated to those people. Likewise it does not follow that cubism is the sole product of all of artistry, or that cubism influenced everything!

                                            ken and r are not the entire field. Yes their ideas have lasted longer than many trends - but it’s because they personally worked on those ideas and shipped products. It’s not like piles of others took up the idea post plan9 and wrote Go. The authors of plan 9 went and worked on Go.

                                            That’s… seriously misunderstanding how ideas propagate within scientific and artistic fields.

                                            I’ve been reading up on computer history for about 10 years; I consider it part of computing to know where things came from, to know about ideas that might have been lost. It’s blisteringly clear to anyone that’s spent even a month reading up on this stuff how interrelated and interwoven the ideas and cultures and research interests at Bell Laboratories, Ericsson, PARC, Sun Microsystems, MIT, Stanford, etc. were.

                                            It’s the nature of humans to communicate things we find interesting. We’re in a field to study and work in it because we find it interesting. It’s highly likely that if you’re part of a research project, you find it interesting. If it’s a research project, the other people won’t know it, thus you will naturally communicate it. Likewise you will try and stay abreast of other developments in that research field. You will see papers that other people have produced, think “oh, that seems similar!” or “oh, that’s interesting and different!”, and read them out of curiosity. Maybe you skim it, maybe that’s all the idea needs to percolate.

                                            It’s wrong to think of computing as these isolated workshops, each churning out different things. Each thing is communicated, and influences the other thing. Sometimes those influences aren’t realised, or are forgotten, a meeting in a pub that’s barely remembered, for example. But it happens naturally as that’s the nature of communication and information within science and art.

                                            A lot of Plan 9 got turned into research papers (there’s a paper on Hume’s redesign and update of Make, which is where GNU gets many of its features from; likewise, mk(1) stole a bit from GNU Make too!). Similarly, Inferno, Oberon, System V, Smalltalk-80, Scheme, etc. all had slews of research papers coming out of them.

                                            Or take a look at EMACS and the Lisp Machine. EMACS, originally written in TECO, rewritten in a Lisp, inspired a lot of other editing environments. Likewise, a lot of features from the Lisp Machine were ported into EMACS by the developers at the time. The Lisp Machine’s ideas, in some part, lives on inside EMACS, and thus in software that takes from EMACS, and vice versa.

                                            1. 2

                                              EDIT: thank you for your detailed response, I felt I really understood your point because you took the time to write it out so fully. I may disagree with some of your points but thank you for taking the time to explain them.

                                              I mean, I think the Lisp Machine is a great example in that it came out of MIT where TECO was also written? It doesn’t really fight this “workshop” thing you’re trying to argue against imo. TECO was written by Dan Murphy, who wrote it at MIT.. afterwards he worked on TENEX, eventually working at DEC and on the PDP-10… which had TECO installed on it at the MIT AI lab.. where rms worked.

                                              It’s ok that we interpret history differently; I guess I don’t see computing history the way you do. I see it almost exclusively as a social & engineering phenomenon, not an intellectual one. The software engineering field in general is a great example of this to me, where all the largest and most influential papers are implementation papers (à la “backrub”, or MultiPaxos, or WAFL, etc). If Plan 9 and EMACS are in the “computer science” field, then we have a fundamental disagreement about how technology is made, which probably won’t be a useful discussion.

                                        2. 2

                                          Yes, but a lot of Go comes from people who aren’t Pike or Thompson or anyone else from Bell Labs: CSP comes from Hoare, and a lot of the type system and “type-bound procedures” come from Wirth and Mossenbock at ETH Zurich. The ideas behind Help and Acme come from Wirth and Gutknecht’s Oberon system. CSP took hold at Bell Labs pretty strongly (Pike has an essay on it) but it didn’t start there. Wirth and his associates never worked there.

                                          And, as I jokingly mentioned in my original comment, the use of iota in Go finds its inspiration all the way back to 1962 at IBM with the publication of A Programming Language.

                                          If anything, you could argue that Go was Pike and Thompson’s paean to Hoare, Wirth, and Mossenbock, with a wink at Iverson. :)

                                          1. 2

                                            Fun fact, iota is in Limbo as well!


                                            I didn’t know about the origin, thank you for this :)

                                        3. 4

                                          It really is fascinating to see!

                                          I went over a few of these languages and cross referenced them to Go in a prior post: https://seh.dev/go-legacy/

                                          Go looks so much like its predecessors

                                          1. 4

                                            Charles Forsyth gave a talk last year on this. Hopefully the video will show up on Youtube soon.

                                        1. 1

                                          I love rc for shell programming, but as an interactive shell (even in the various currently-existing forks) it’s long been overtaken in features. I’ve long wished for a shell I’ve informally called zrc: the syntax and programming features of rc with the interactive usability of zsh (or even better, the usability of fish). Some day, I hope!

                                          1. 5

                                            Have you considered using rc within something that is not a traditional terminal emulator?

                                            Most folks who use rc use it within acme. I personally use it within emacs. Once you are depending on a larger editing tool for things like completion, copy pasta, etc and not just readline, you care less about what you were missing from zsh.

                                            This wrapping environment is why a lot of folks don’t care to change much of rc’s interactive experience.

                                            1. 4

                                              If you’re ever so inclined, I imagine a video or blog post explaining your routine with rc in Emacs would go over well here. :)

                                          1. 7

                                            Byron’s rc is a superior fork of Tom Duff’s rc shell. It’s always sad to me that it’s called the Plan 9 shell, considering it was largely designed by td and documented in the Research Unix v10 manual.

                                            Byron fixed some huge ergonomic annoyances of rc:

                                            • no more quoting = signs every where because of the plan 9 parser
                                            • proper else syntax, instead of this if not crap. Which is ironic, considering Byron is using the same hack that Go ended up using in its parser.
                                            • readline/libedit/etc support

                                            I use it as my full time interactive shell, but the real power comes from quoting. I’ve found compatibility with POSIX shell code to be largely not necessary.

                                            1. 4

                                              This is all what I use personally, as what I use at work is dictated by forces outside of my control.

                                              Fisher pen and paper for notetaking during meetings, while researching, watching videos, etc

                                              Stuff from there is then migrated to one of the following:

                                              Ulysses for writing blog posts/drafting emails/etc

                                              Notion as a wiki/knowledgebase

                                              Todoist as todo/chore chart/kanban

                                              1. 1

YES, I also use the Fisher AG7. Love the damned thing. I don’t think the cartridges write as well as some other pens’, but I just enjoy the feel too much to stop writing with it.

                                              1. 18

                                                Rather than just saying “org-mode” like normal, here’s how I use it.

                                                I keep two files, one inbox and one as a “brain”. When anything new comes in, I use org-capture to capture that as a todo entry and store it in the inbox. I have two states: TODO, and EXPEDITE.

                                                My main file looks like this:

                                                * OKRs - my planned objectives; anywhere from 3-8 projects below this header
                                                * Unplanned Work - things that I did/need to do that aren't what my performance is measured on ;)
                                                * Tasks - work tasks that are one-off'ish ("security training", "get a new Yubikey", (..))
                                                * Personal - personal tasks that aren't work, but need to happen during the day

                                                First thing every morning, I:

• Get to inbox.org zero: everything gets refiled into the appropriate place, with a scheduled date associated with it as needed. If I have a hard deadline, it gets that too.
                                                • Look at my org-agenda for the day: do I have anything scheduled or any hard deadlines? If not, I decide what to work on that day and pull them onto the agenda. Also, if I have any more than 1 task tagged EXPEDITE, it’s time for a conversation around urgency and priorities :)

                                                After a five minute break to get a coffee, I pick my first task and call org-pomodoro to clock in and work. After the bell dings I take a five minute break, then jot down some notes on what I worked on (if needed) and repeat.

At the end of the day, I try to reserve ~30 minutes to go through my email and handle it, inbox-zero style. Anything that needs more than two minutes of thought is captured in my org-mode inbox for tomorrow morning.

                                                I like this because it’s a workflow that evolved to fit how I think and operate – but you will like org-mode because it can evolve to fit how you think and operate.

                                                1. 4

                                                  I don’t know what I’d do without org mode. It’s basically like markdown + jupyter notebooks + time tracking + outlining + presentation tool + diagram tool all in one human readable file. Add tramp-mode on top of it to unlock more power. I just wish I could better use it to collaborate with non-emacs loving team mates. As it is it has to remain a personal tool. Publish only.

                                                  1. 2

I would love to discuss and show our different systems to each other.

                                                    I also use org-mode, using deft as something akin to a zettelkasten, and a gtd.org and work.org file for personal and work headings respectively. https://codemac.net/gtd.html is the current description, but I’m still writing it up.

                                                    1. 2

                                                      So my org-mode strategy is just a few capture templates, and a giant org note file of crap.

                                                      I’ve been trying out org-roam/gkroam to see how that might work for a lot of my one off “i have a thing to record” notes that I can eventually come back to later (via an org-habit to periodically look at things of course!).

                                                      Honestly org-mode is such an insane thing on its own, I’m curious how every emacs org user uses it, it seems so good at being adaptable to your specific flow that I doubt any two users will be similar, but guessing we could all use inspiration.

                                                      1. 2

                                                        it seems so good at being adaptable to your specific flow that I doubt any two users will be similar

This is exactly what I did wrong the first time I started using org mode: I found one of the awesome and exceptionally in-depth guides and followed it to the letter. But I never used the system it led me to build, because it wasn’t how my brain wanted to plan tasks.

                                                        The Lisp curse – everything is possible so everyone does it their way – strikes again ;)

                                                        1. 3

Lol yep, always gotta start from ground zero, not the top of the mountain. I know EXACTLY which guide I think you’re alluding to, and that thing’s full of ideas, but my god is it more than I really need out of my notes. Tracking time in org is neat, but I’ve never gotten any use out of it besides the agenda. Maybe the pomodoro-style timers, though even then I generally just use my phone.

                                                          1. 2

                                                            But it’s not a curse in this context! You should do your personal notes your own way, and org-mode lets you do exactly that.

                                                            The “curse” is that Lisp lets you write programs in a personally customized way, and in a shared codebase that can be a problem. Here, it works great.

                                                    1. 1

                                                      They missed the global level (not cluster level) data storage system.

Many Google properties store their bytes not in Colossus directly, but through an internal version of Google Cloud Storage.

                                                      1. 21

                                                        Two things.

                                                        1. An Emacs for the web – browser primitives, but with hooks and definitions that allow full user control over the entire experience, integrated with a good extension language, to allow for exploratory development. Bonus points if it can be integrated into Emacs;

                                                        2. a full stack language development environment from hardware initialization to user interface that derives its principles (user transparency, hackability) from Smalltalk or LISP machines, instead of from the legacy of Unix.

                                                        1. 5

                                                          Nyxt maybe what you are looking for. More info here & here.

                                                          1. 1

                                                            Oooh, indeed. That is significantly closer to what I want.

                                                          2. 4

                                                            Re 2: Sounds like Mezzano https://github.com/froggey/mezzano apparently. Actually running on arbitrary hardware is even harder, of course, because all the hardware is always lying…

                                                            1. 1

                                                              That seems interesting!

Really, you’d bootstrap on QEMU or something, and then slowly expand h/w support. If you did this, you could “publish” a hardened image as a unikernel, which would be the basis of a deployment story closer to modern practice.

                                                              ETA: I’m not sure I’d use Common Lisp as the language, but it’s certainly a worthwhile effort. The whole dream is something entirely bespoke that worked exactly as I want.

                                                              1. 3

Well, Mezzano does publish a Qemu image, judging from discussions in #lisp it is quite nice to inspect from within, and judging from the code it has drivers for some specific live hardware… A cautionary tale, of course, is that in the Linux kernel most of the code is drivers…

                                                                1. 4

Not something that Mezzano is currently trying to do afaik, but there was a project, Vacietis, to compile C to CL, with the idea of being able to re-use BSD drivers that use the bus_dma API. From http://lisp-univ-etc.blogspot.com/2013/03/lisp-hackers-vladimir-sedach.html :

                                                                  Vacietis is actually the first step in the Common Lisp operating system project. I’d like to have a C runtime onto which I can port hardware drivers from OpenBSD with the minimal amount of hand coding

                                                            2. 3

                                                              #1 emacs forever.

                                                              1. 1

                                                                Would something like w3.el be a starting point for this, or are you envisioning something that doesn’t really fit with any existing elisp package?

                                                                1. 2

                                                                  Like, I’ve used w3 in the past, but I’m thinking more like xwidgets-webkit, which embeds a webkit instance in Emacs. I should start hacking on it in my copious free time.

                                                                  1. 1

                                                                    That makes a lot of sense. This makes me think of XEmacs of old, ISTR it had some of those widget integrations built in and accessible from elisp.

                                                                    Come to think of it, didn’t most of that functionality get folded into main line emacs?

                                                                    I love emacs, a little TOO much, which is why I went cold turkey 4-5 years back and re-embraced vi. That was the right choice for me, having nothing at all to do with emacs, and everything to do with the fact that it represents an infinitely deep bright shiny rabbit hole for me to be distracted by :)

                                                                    “If I can JUST get this helm-mode customization to work the way I want!” and then it’s 3 AM and I see that I’ve missed 3 text messages from my wife saying WHEN ARE YOU COMING TO BED ARE YOU INSANE? :)

                                                                    1. 2

I feel seen. Yeah, I basically live in Emacs; it informs both of my answers above. Basically, I want the explorability of Emacs writ large across the entirety of my computing.

                                                              1. 8

                                                                Using a notebook, and writing out what I’ve tried so far by hand.

The number of circles I can drive myself in without self-reflection far outweighs any amazing gdb tricks I could paste here.

                                                                1. 1

                                                                  +1 to that. I treat it as some sort of scientific notebook and write down my hypotheses, experiments, results, etc.

                                                                1. 6

This is one of the biggest advantages of mocks: you don’t need the underlying type to be extensible, just its API to be stable.

I don’t understand why the advice isn’t “make your interfaces stable, and add new interfaces rather than upgrading old ones” as opposed to “just don’t assume the behavior of any code you call will hold at any time”.
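
The “add new interfaces rather than upgrading old ones” idea can be sketched in Python; everything here (names, methods, the whole scenario) is hypothetical, just to show the shape:

```python
from typing import Protocol


class StoreV1(Protocol):
    """The original interface: frozen once callers (and mocks) depend on it."""

    def get(self, key: str) -> str: ...


class StoreV2(StoreV1, Protocol):
    """New capability lands in a new interface; old callers never notice."""

    def get_many(self, keys: list[str]) -> dict[str, str]: ...


class MemoryStore:
    """One concrete type can satisfy both interface versions."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str:
        return self._data[key]

    def get_many(self, keys: list[str]) -> dict[str, str]:
        return {k: self._data[k] for k in keys}
```

Code and mocks written against StoreV1 keep working unchanged, while new code opts into StoreV2.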

                                                                  1. 1

                                                                    Make new interfaces but keep the old. One is silver, the other is gold.

                                                                  1. 7

This doesn’t discuss the huge advantage protobufs have at Google that may even be worth all the pain described: the amount of amazing tooling built around them is absolute insanity. There are storage formats, SQL-like query engines, pretty printers galore, magical serializers that can read/write anything written in the future or past… the list is much longer and more proprietary.

The technical and social pressure to make sure your code works with those 10,000 engineer-years of tooling labor is much, much bigger.
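
For what it’s worth, the “read/write anything written in the future or past” trick boils down to tagged fields that a reader can skip when it doesn’t recognize them. A toy sketch of that principle (this is not the real protobuf wire format):

```python
def encode(fields: dict[int, str]) -> bytes:
    """Each field: a 1-byte tag, a 1-byte length, then the UTF-8 payload."""
    out = bytearray()
    for tag, value in fields.items():
        data = value.encode("utf-8")
        out += bytes([tag, len(data)]) + data
    return bytes(out)


def decode(blob: bytes, known_tags: set[int]) -> dict[int, str]:
    """Old readers simply skip tags they don't know about."""
    fields: dict[int, str] = {}
    i = 0
    while i < len(blob):
        tag, length = blob[i], blob[i + 1]
        if tag in known_tags:
            fields[tag] = blob[i + 2 : i + 2 + length].decode("utf-8")
        i += 2 + length
    return fields
```

An old reader that only knows tag 1 can still decode a message from a newer writer that also emits tag 2, which is what lets a decade of tooling keep working across schema changes.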

                                                                    Also they have an interesting claim:

                                                                    At the root of the problem is that Google conflates the meaning of data with its physical representation.

                                                                    I question anyone that doesn’t think at some point the rubber meets the road. Your encoded data has to have some meaning associated with it at some level.

                                                                    When you’re at Google scale, this sort of thing probably makes sense. After all, they have an internal tool that allows you to compare the finances behind programmer hours vs network utilization vs the cost to store x bytes vs all sorts of other things. Unlike most companies in the tech space, paying engineers is one of Google’s smallest expenses. Financially it makes sense for them to waste programmers’ time in order to shave off a few bytes.

Paying engineers is actually not one of the smallest expenses (look at Google’s quarterly statements; stock-based comp alone is equivalent to ~50% of all capital costs…) and that’s not the purpose of the tool. For all but INSANE amounts of engineering, it’s usually better to waste the network, cpu, disk, whathaveyou than software engineers.

Finally, I hate protobufs as well and despise that I have to use them in my day-to-day work, but this rant felt really lacking, inside and out.

                                                                    1. 6

                                                                      I think the point is that protobuf may be the right choice for Google, but almost certainly the wrong choice everywhere else.

                                                                      1. 1

magical serializers that can read/write anything written in the future or past,

                                                                        Show me a protobuf 3 decompiler that doesn’t depend on .NET

                                                                      1. 10

Taking control of your personal knowledge is one of the greatest experiences I’ve had. I switched to emacs purely for Org-mode in ~2006 or so, and stayed for the elisp.

                                                                        Plain text also has a wonderful property of being a super sturdy format you know you’ll be able to read. I’ve made all kinds of quick reports out of my org files because I can just run grep | sed | awk quickly, and then write up elisp if I want to keep it around longer.
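
A sketch of the kind of quick report that enables, counting TODO/DONE headlines, in Python instead of the pipeline above (the function name is mine; the org headline syntax is standard):

```python
import re

# Org headlines look like "* TODO some task" or "** DONE some task".
HEADLINE = re.compile(r"^\*+\s+(TODO|DONE)\s+\S")


def headline_states(org_text: str) -> dict[str, int]:
    """Count TODO/DONE headlines in a chunk of org-mode text."""
    counts: dict[str, int] = {}
    for line in org_text.splitlines():
        m = HEADLINE.match(line)
        if m:
            counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts
```

Because the format is plain text, this is the whole program; no export step or API needed.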

                                                                        Unfortunately, it also has the property of being a filesystem/local interface, and thus most mobile/ipad/etc users would balk at the steps you need to go through to get your notes where you’d like them. Dropbox is probably the only solution that works well for users, but then it’s dropbox. Note the auto-committing logic here makes it very difficult to actually use your backups as you’re fighting against ongoing commits running in the same workspace. I really need to write the emacs extension that allows me to use tramp w/arbitrary binary that handles data reading/writing.

                                                                        1. 3

                                                                          Taking control of your personal knowledge is one of the greatest experiences I’ve had

                                                                          I’m in the same boat. I strongly believe that personal knowledge, such as one’s personal notes, should be based on a future-proof system.

                                                                          Unfortunately, [plain-text note taking] also has the property of being a filesystem/local interface, and thus most mobile/ipad/etc users would balk at the steps you need to go through to get your notes where you’d like them

We may not be able to create as smooth a UX as all those mobile note-taking apps, but I believe we should be able to come close enough. I currently dogfood my own app (called Cerveau, based on the open-source neuron project) which allows me to edit git-backed plain-text notes from a web browser and mobile. It basically allows you to use Git(hub) as storage, while providing a nice editing and browsing interface.

                                                                          1. 2

                                                                            I’m in the same boat. I strongly believe that personal knowledge, such as one’s personal notes, should be based on a future-proof system.

                                                                            Strongly agree here.

                                                                            I manage my knowledge via a web-based system which uses text files as its base data storage format, which I can always zip, move, and adapt to a new system.

                                                                            1. 1

                                                                              That sounds interesting, what tool do you use?

                                                                              1. 1

                                                                                Thank you for your interest.

                                                                                I use a system called “hike” at this time, which you can see a demo of here: http://hike.qdb.us/

                                                                                The username and password are both admin, I use that to keep out crawl bots, because at this time they fill the system with lots of junk.

                                                                                It’s still a work in progress, and I think only useful to myself, but here are some general ideas:

                                                                                Textfiles are identified by their hashes.

You can attach something to an existing file using the >>hash format

                                                                                Hashtags are used for grouping and categorizing.

                                                                                A hash-tag in a parentless item is assigned to that item. However, if an item has a parent (using >>), the hashtag is assigned to the parent item.

                                                                                Let me know if you have any questions.

                                                                          2. 2

I’m not sure how great plain text is compared to, say, sqlite. Faster tagging, full-text search, and a single file are all things that would make a note system better in my eyes than free-format files lying around in a file system. And it’s not like data has to be plain text to survive: sqlite and a lot of other formats with public-domain/free-software parsers are just as accessible, perhaps even more so when you consider how file-system-unfriendly mobile devices are (often there’s not even a proper file manager).

                                                                            1. 1

                                                                              Doing this now and it’s great. Title and content of notes are text fields, tags and links are structured many-to-many relations. Keeping metadata out of the note contents means I don’t have to parse anything and querying notes is super easy.
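
A minimal sketch of that kind of schema, using Python’s bundled sqlite3 module; the table and column names are my guesses, not the commenter’s actual design:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE notes (id INTEGER PRIMARY KEY, title TEXT, body TEXT);
    CREATE TABLE tags  (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE note_tags (
        note_id INTEGER REFERENCES notes(id),
        tag_id  INTEGER REFERENCES tags(id),
        PRIMARY KEY (note_id, tag_id)
    );
""")

con.execute("INSERT INTO notes VALUES (1, 'shell tricks', 'rc quoting notes')")
con.execute("INSERT INTO tags VALUES (1, 'unix')")
con.execute("INSERT INTO note_tags VALUES (1, 1)")

# Querying by tag is a plain join: no parsing of note contents needed.
rows = con.execute("""
    SELECT n.title FROM notes n
    JOIN note_tags nt ON nt.note_id = n.id
    JOIN tags t ON t.id = nt.tag_id
    WHERE t.name = 'unix'
""").fetchall()
```

Keeping tags and links as relations rather than inline markup is what makes the “no parsing” claim work.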

                                                                              1. 1

                                                                                What tool do you use for that?

                                                                                1. 1

                                                                                  I wrote my own.

                                                                              2. 1

                                                                                SQLite is one of the more compatible formats, but I still can’t edit it by hand.

My solution is to have a tree of plaintext files, which are then loaded into SQLite for indexing and searching.
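
That hybrid, plain text as the source of truth with SQLite as a rebuildable index, can be sketched with SQLite’s FTS5 extension (assuming your sqlite build includes FTS5; the file names and contents here are made up):

```python
import sqlite3

# The plain-text files stay authoritative; this index can be thrown away
# and rebuilt from them at any time.
files = {
    "inbox.org": "* TODO call the bank",
    "brain.org": "rc shell quoting rules and examples",
}

con = sqlite3.connect(":memory:")
con.execute("CREATE VIRTUAL TABLE doc_index USING fts5(path, body)")
con.executemany("INSERT INTO doc_index VALUES (?, ?)", files.items())

# Full-text search over every note, without touching the files themselves.
hits = [path for (path,) in
        con.execute("SELECT path FROM doc_index WHERE doc_index MATCH 'quoting'")]
```

If the index is ever corrupted or stale, you lose nothing: delete it and re-read the files.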

                                                                                1. 1

                                                                                  This feels like putting the cart before the horse to me. If you want optimized search why not retain the power and recoverability of plain text and use sqlite for indexing and metadata storage?

                                                                                  1. 2

I don’t recognize any special power in plain text. You just need a tool to access the database, and then it’s as good as plain text when it comes to unix utilities. Plus you don’t have to bother with duplicated state or keeping the index and metadata storage up to date, since it’s all in one file.

                                                                                    1. 1

                                                                                      Your sqlite file gets corrupted - Game Over.

                                                                                      One text file gets corrupted? You lose whatever bytes from that one text file.

                                                                                      Every decision is a trade off between utility and convenience.

                                                                                      1. 2

                                                                                        Your sqlite file gets corrupted - Game Over.

Not necessarily. I mean, first of all it’s easy to create a backup, and there are tools to recover as much as possible. Sure, you could engineer an attack to corrupt just the right bytes, but then I could just as well say “what if you run rm -f *.md?”

                                                                                2. 2

                                                                                  Unfortunately, it also has the property of being a filesystem/local interface, and thus most mobile/ipad/etc

This is something that has held me back many times from using such a system. In the past I only used markdown notes in a single directory, more like a journal (so no wiki, and pretty messy), syncing with Syncthing or Dropbox, which I believe is one of the better big providers for plain text. And still, editing on mobile was a pain back then; I’m not sure there’s any better interface for working on plain text now.

When that stopped meeting my needs I tried Evernote and then moved to Notion.so, which I have been using since. I’m open to exploring alternatives, which is how I ended up here looking at VimWiki, which I could couple with WebDAV to sync with my vps.

On a side note: Notion has a nice export to Markdown and CSV that almost nails it for me, basically mirroring the directory structure of the content you have in the platform.

                                                                                  1. 5

                                                                                    This is why Joplin fits my need perfectly.

                                                                                    • Syncs via WebDAV
                                                                                    • Great mobile apps
                                                                                    • Markdown everything.
                                                                                    • On desktop it has an option to spawn your favorite external editor of choice so Vim away!
                                                                                    • 100% open source

                                                                                    I’m in love. I haven’t ever felt this ‘together’ in terms of my personal and professional knowledge bases ever (no hyperbole).