Threads for jamestomasino

  1. 3

    Being too young to have used this kind of device, I find them really fascinating and would be curious to know what it would look like to work without a screen, only with a printer, to read emails, browse Gemini, etc.

    1. 7

      Old fart here. This was the extent of my connectivity until I went to college, though I used a mix of hardcopy (DECWriter, Teletype) and screens (ADM-3A, VT-52, Apple ][…)

      300 baud is slower than my full reading speed, but close enough that it feels comfortable to read along with. It’s kind of interesting to read stuff that’s magically appearing beneath your eyes. (But 110 baud is painful. The amazing sounds and aesthetics of the Teletype almost make up for it though.)

      BASIC, naturally, worked well with this medium. That’s why every statement has a line number: so you can replace or insert them easily. I don’t remember doing any text editing on dumb terminals (a la TECO or ed) but it sounds painful.
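
      For anyone who never used it: numbered statements meant you edited by retyping whole lines, no cursor addressing required. Entering a line with an in-between number inserts it; reusing an existing number replaces that line. Roughly:

      ```basic
      10 PRINT "HELLO"
      30 GOTO 10
      20 REM TYPED LATER, THIS LINE SLOTS IN BETWEEN 10 AND 30
      10 PRINT "GOODBYE"
      ```

      (That last line replaces the original line 10.) That's exactly why the medium suited hardcopy terminals.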

      I spent a lot of time on BBSs my senior year; they were a lot of fun, except for the frustration of busy signals. It felt kind of conversational … a conversation with the BBS, I mean, since it moved at a speed closer to speech.

      I’d never go back to this, though. Nostalgia aside, it’s so painfully slow and limited.

      1. 2

        I tried TECO, and it was terrible. Will never go back.

        On the other hand, I used the Unix version of QED on a daily basis for a few years, back in the day, and it was great. I used to switch between VI and QED, and use QED for editing jobs where VI would be just too painful. ED is like QED with most of the useful features removed – it isn’t worth using. I stopped using QED when I lost access to it.

        I still use the Unix shell and the CLI extensively. I recently rediscovered QED on GitHub and installed it again on Linux, but I haven’t used it much since it’s no longer part of my habits.

        I haven’t read mail using the CLI since Gmail came out, but you can still get the mail tool in Linux distro repos. I remember being quite productive with it.

        What made printing terminals cool was the general feel, as @snej describes, plus the fact that you had a permanent hardcopy of your terminal session that you could keep if it contained something worth going back to. Video terminals at the time were restricted to 24x80, so you could only see 24 lines of text at once, but you could look at 100s of lines of text on the paper coming out of your printing terminal. I had access to 1200 baud Decwriters way back then, much faster than the 300 baud or 110 baud that @snej mentions (but still slow).

        The slowness of printing terminals forces you to slow down your brain and be patient, and that was maybe a less stressful way of working than modern interfaces. I wasn’t constantly being subjected to notifications and advertising, or juggling hundreds of tabs and windows. There were no distractions, because a printing terminal restricted you to doing one thing at a time. Nobody was writing million-line programs back then, either. Computers had a lot less memory; code had to be smaller. At 66 lines per page, a million lines is 30 reams of paper at 500 pages per ream. At 10 seconds per page (assuming 1200 baud), that’s about 42 hours to print out the code base. Madness. Programs were much smaller, and could be understood by a single person. Tooling and hardware were more primitive (at least for me, I didn’t have access to Smalltalk machines or Lisp machines), but languages and programs were much simpler and you worked at a slower pace. We didn’t have the complexity caused by the accretion of 50 years of legacy software layers piled on top of 1970s Unix.
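
        The back-of-the-envelope math can be rechecked in a couple of lines of shell (same figures: 66 lines per page, 500 pages per ream, roughly 10 seconds per page at 1200 baud):

        ```shell
        # Printing a million-line code base on a 1200 baud printing terminal.
        pages=$((1000000 / 66))       # integer division: 15151 pages
        reams=$((pages / 500))        # 30 reams of paper
        hours=$((pages * 10 / 3600))  # about 42 hours of continuous printing
        echo "$pages pages, $reams reams, about $hours hours"
        ```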

      2. 2

        You can simulate this experience on a UNIX machine by using export TERM=dumb or similar.

        I don’t know about browsing gemini. Do gemini pages require an addressable cursor?
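
        As for cursor addressing: full-screen programs look for the terminfo “cup” (cursor_address) capability, which the dumb entry deliberately omits. A quick way to check with the standard ncurses tools:

        ```shell
        # Pretend to be a printing terminal; curses programs consult $TERM.
        export TERM=dumb

        # "cup" is what full-screen UIs need to move the cursor; the dumb
        # entry doesn't define it, so this fails.
        tput cup 0 0 2>/dev/null || echo "no cursor addressing"

        # The dumb terminal's entire capability list is tiny:
        infocmp dumb
        ```

        Gemini's gemtext format is itself line-oriented, so a client that only ever appends output should work fine under TERM=dumb.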

        1. 2

          gemini would be well suited to dumb terminal output or line printer experiences, especially if your client numbered links in documents for easy selection. Hmmm… ideas!

          1. 1

            gemini would be well suited to dumb terminal output or line printer experiences, especially if your client numbered links in documents for easy selection.

            There’s a line-mode browser for Gemini, gmnlm, available here: https://sr.ht/~sircmpwn/gmni/

            Somebody also wrote a Gemini plugin for the line-mode browser edbrowse: https://news.ycombinator.com/item?id=26116372

          2. 2

            I’ve built my own gemini/web client which doesn’t need any addressable cursor. The output is displayed in less and you type the number next to a link to follow it.

            See https://notabug.org/ploum/offpunk/

        1. 1

          There’s so much depth to gopherspace. Glad you found your way there. I’d recommend phetch as another great client. Gophernicus is also a wonderful gopherd with active maintenance. I use motsognir myself on gopher.black. I hope you keep burrowing deeper. There’s a ton of content waiting for you.

          1. 3

            I went down this rabbit hole last week looking for a suitable VTuber setup for my role-playing games. I don’t have much interest in the anime models, but I was hoping that I could find a basic setup that would enable me to create my next D&D character in Blender and use that via virtual camera in Roll20. Sadly, Linux support for the software is quite poor. VSeeFace has some Wine configuration hints, but I couldn’t get them to work. I found a really basic POC on GitHub that worked with my camera and tracked motion and blinks well, but I couldn’t load any 3rd-party models into it, nor did it output to a virtual cam.

            I would love to explore this more, and I think there’s great value in it for virtual table-top gaming. Hopefully Linux support will continue to grow.

            1. 1

              Does it have any git hooks in place to prevent someone accidentally pushing dura branches to the remote? There are many, MANY git workflows out there, and it seems likely that someone has a “git push --all” that’s going to lead to an absolute mess without a little safety check.

              1. 1

                No. You should create an issue for it so we can generate ideas on how to do this!

              1. 1

                I tend to use functions when I need to execute within the current shell scope. If you’re going out of your way to use a subshell it’s just easier to write a shell script instead. Do you have a reason to pick a function using subshells over a shell script?

                1. 5

                  A separate script is a much more complete boundary. This of course has its pros (better defined interface) but it also has its cons:

                  • You need to explicitly pass all arguments.
                    • Sometimes it is nice to just have the log level you parsed automatically be available to the subshell.
                    • You need to serialize and deserialize any data you need.
                  • A lot more overhead. Fork is pretty cheap compared to starting a fresh interpreter and executing a new script.
                  • For small scripts it can be easier just to see the script inline than wonder what the alternate script is doing.
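
                  For anyone who hasn’t seen the construct under discussion: writing a function body with parentheses instead of braces runs it in a subshell, so it inherits the caller’s variables with no explicit passing, while its own changes stay isolated. A minimal sketch (names invented):

                  ```shell
                  #!/bin/sh
                  # Parsed by the parent shell, so the function sees this for free.
                  LOG_LEVEL=debug

                  cleanup_tmp() (
                      # Subshell body: LOG_LEVEL was inherited, not passed in.
                      echo "cleaning at log level $LOG_LEVEL"
                      # This assignment dies with the subshell.
                      LOG_LEVEL=quiet
                  )

                  cleanup_tmp
                  echo "caller still at $LOG_LEVEL"   # prints "caller still at debug"
                  ```
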
                1. 5

                  Lovely blog! The smolnet folks on gemini and gopher are applauding you with ascii claps.

                  1. 2

                    Although this was pretty centered on development the lessons translate to other things. During my time in college I made time to improve my reading speed through practice and technique. The dividends paid back have been incredible. I find it’s usually the inputs and the outputs I can most easily improve through simple technique. The middle part, which happens between your ears, takes a bit more time to develop. Anyway, overall very good sentiments, especially one bit:

                    Or do 5x as much and go home after lunch every day.

                    Make sure you’re speeding up for a good reason. Work is infinite. It will never be “done”. Learn when to take breaks and use your speed wisely.

                    1. 1

                      Ahh, yes! I was going to try and make this with unicode box-drawing symbols but I hit a lack of diagonals of a certain style. Scale it up and use ascii characters! A simple solution.

                      1. 2

                        I was considering using the mathematical falling diagonal character (‘⟋’, U+27CB) instead of the box drawings light diagonal (‘╱’, U+2571), but it didn’t seem to make enough of a difference to skip the extra line needed, so I decided to stick with the box drawing set.

                      1. 1

                        It seems the code to do this is from this twitter thread, https://nitter.net/lunasorcery/status/1334519572330909696

                        1. 2

                          I used her gist and did my own weechat logs. As a result, here’s a little repo with a requirements.txt and info on how to easily do your own. Really, the Python just reads a file of date/time stamps, one per line. If you can transform your source data into that, then this will Just Work™

                          https://tildegit.org/tomasino/weechat-plot

                        1. 2

                          Very nice solution. I also find it useful to have a default help task, but I’ve been manually building them. This seems easier to manage. I did tweak one small thing. In the sed, I inserted a pipe into \1|\3 and then told column to -s '|'. I like having more than one word in my task descriptions. I could use a comma, but I use those in descriptions too. Pipe seemed easy, but whatever character you don’t use often would be fine.
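
                          The pipe variant described above might look roughly like this (an illustrative reconstruction, assuming tasks are annotated with ## comments; the original post’s sed will differ):

                          ```shell
                          # A sample Makefile with "##" descriptions after each task name.
                          cat > Makefile.example <<'EOF'
                          build: deps ## Compile the whole project
                          clean: ## Remove build artifacts
                          EOF

                          # Emit "name|description" pairs, then align the columns on the pipe.
                          sed -n 's/^\([A-Za-z_-]*\):.*## \(.*\)/\1|\2/p' Makefile.example | column -t -s '|'
                          ```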

                          1. 2

                            Fun fact: in the Debian bug report, the author of pygopherd (John) states that he is working on a python3 version. I hope it adds IPv6 support as well.

                            1. 2

                              I believe on the gopher mailing list the author stated ongoing frustrations with the upgrade to python3 and was seeking a new maintainer of the code-base. There were several replies and other packages discussed adding support for some of the specific older specialty types/formats (.link & .cap perhaps?) that pygopherd was handling. Geomyidae (from the bitreich community) and bucktooth (floodgap) were mentioned, but I haven’t followed up on progress.

                              If you’re not married to pygopherd specifically, you might try gophernicus, bucktooth, geomyidae, or motsognir instead.

                            1. 4

                              It mentions it’s inspired by z and z.lua. There’s also jump or autojump, or twenty more if you search a bit. “It’s in Rust” is the big helper here, I suppose. Jumping to directories is never something I’ve found noticeably slow, even with z, but there’s always an audience trying to eke out the best performance.

                              1. 10

                                zoxide’s README says it’s mainly trying to speed up displaying the prompt, as opposed to speeding up the actual jumping. All z-like tools delay the prompt a bit because they run code to track the current directory before each prompt.
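
                                Concretely, the pre-prompt hook these tools install is tiny; in bash it hangs off PROMPT_COMMAND. A stripped-down sketch of the mechanism (the log file name is invented; real z-likes also update a frecency database here):

                                ```shell
                                # Record every directory the shell visits, z-style: this function
                                # is wired to run just before each prompt is drawn.
                                _track_dir() {
                                    printf '%s\n' "$PWD" >> "${DIR_LOG:-$HOME/.dir_history}"
                                }
                                PROMPT_COMMAND=_track_dir
                                ```

                                The prompt delay zoxide targets is the cost of running this hook (plus the database update) on every single prompt.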

                                1. 2

                                  Ahh, that makes more sense! I used to run some pre-prompt display stuff to get git statuses added into my prompt and that definitely slowed things down. Now I just run a ‘git status’ when I actually want that info and things are snappier. Every little bit adds up on those things.

                                  1. 4

                                    If you’d still like to do that (and use zsh), check out powerlevel10k. You can have your cake and eat it too…

                              1. 3

                                Blog author here. It looks like my little intro got shared on HN and here. There’s a lot more power to recutils than the bits I showed in the post. I recommend you look deeper if the format appeals to you. For you emacs users out there, it integrates well with org-mode too.

                                1. 2

                                  My personal hub is https://tomasino.org (static HTML/CSS) which links to all my other sites. It’s designed to mimic a markdown file in design, but it itself is not. It implements a few indieweb features, avoids JS, loads quickly and cleanly in lynx, and overall just gets the job done. Here are a few notable things of mine:

                                  1. 7

                                    https://duncan.bayne.id.au/

                                    gopher://eyeblea.ch:70/1/~duncanb

                                    The website is a pretty straightforward static site. Built with Jekyll, served out of S3 through CloudFront, and with the bare minimum of styling necessary to work cleanly on mobile and desktop.

                                    The Gopher hole is … less straightforward :) It’s my personal part of an experiment to see if I can drive adoption of Gopher and (system-local) mail, Usenet, and IRC as an alternative to conventional social media here in Melbourne (Australia).

                                    1. 2

                                      I’m always happy to stumble on new gopher holes. We’ve got a decent number of aussie gopherites hanging out with us in the tildeverse and on SDF. Come by IRC sometime and connect with us in #gopher.

                                    1. 3

                                      I’ve been doing this for years and it’s wonderful. I’ve left banks and other services when it quickly became apparent who was selling my contact information to spammers. On the technical side, a solution like simplelogin might very well make this more accessible to non-technical folks. I started with the unique email per site process when I was self-hosting my email, where it was easiest to get just the way I wanted it. Once I moved to hosted providers I’ve had to be careful that they supported all my needs. Tuffmail, Neomailbox, and Fastmail have all proven reliable. Some services limit or charge for aliases, which rules out their use.

                                      While the + tag trick works for simple cases, I don’t like that it reveals your true email address to anyone familiar with the syntax. It’s too easy for someone to parse it and spam your main account. Fastmail’s alternative form using subdomains is more interesting.

                                      1. 3

                                        I’m with you. It takes some discipline to keep things in order but this seems to be one of the least terrible approaches. I use a ring model for managing my email world:

                                        • I have a single email address that is for meatspace use only
                                        • I use per-site emails on my domain for services I want to read emails from
                                        • I consolidate behind burner addresses - {shop, bills, burner}@ - for emails I don’t care about seeing. These are shunted into archive folders I never look at but can grep if I truly cared. And if the noise gets too intense, I turn them off entirely
                                        • For things I trust the least, I have a pseudonymous domain that is completely disconnected from my online identity

                                        Fastmail makes managing this almost seamless - I get one inbox with things I care about and some folders that automatically capture and hide the dross. The most overhead I get is logging in to blacklist a per-site email because they’ve become naughty.

                                        1. 3

                                          While the + tag trick works for simple cases, I don’t like that it reveals your true email address to anyone familiar with the syntax. It’s too easy for someone to parse it and spam your main account.

                                          What I do is filter out any email sent to the bare/true address, i.e. for me to see an email (under normal circumstances), it must be sent to a tagged email address.
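
                                          On a provider with Sieve filtering (Fastmail supports it), that rule is only a few lines; the address and folder name below are placeholders:

                                          ```sieve
                                          # Mail sent to the bare address gets filed out of sight; anything
                                          # addressed to a tagged form like user+shop@example.com falls
                                          # through to the inbox as usual.
                                          require ["fileinto"];
                                          if address :is ["to", "cc"] "user@example.com" {
                                              fileinto "Untagged";
                                              stop;
                                          }
                                          ```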

                                          1. 1

                                            That’s a really good idea and a super-easy way to negate most of the negatives of that method.

                                        1. 3

                                          Is there some tmux trigger key that doesn’t interfere with readline? Maybe this is a bad habit on my part, but I frequently use C-a to go to the beginning of the line. This is mostly why I’ve been hesitant to use tmux. Similarly with C-b to go back a single character, though I use that much less frequently.

                                          1. 3

                                              I use the backtick (`) character. I unbind C-b, bind backtick, and then set a double-backtick to produce a literal one. The character comes up infrequently for me, and double-tapping it to make a literal one isn’t much of a challenge when it happens. The key position is close to the escape key, which I enjoy as a vim regular. (I also rebind most movement keys to be vim-like.)

                                            Here’s the code that sets my leader
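
                                              In ~/.tmux.conf terms, the description above works out to roughly:

                                              ```tmux
                                              # Backtick as prefix; C-b is freed up, and prefix-backtick
                                              # types a literal backtick.
                                              unbind C-b
                                              set -g prefix `
                                              bind ` send-prefix
                                              ```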

                                            1. 2

                                              You’ll get a ton of different answers here, but I like M-a

                                              1. 2

                                                    I’ve been using screen and then tmux with the same keybindings, and typing C-a a to go to the start of a line is now second nature to me. So much so that I get tripped up outside tmux.

                                                1. 2

                                                  I’ve been using ctrl-v in both screen and tmux for as long as I can remember for exactly this reason. Ctrl-v is only rarely used (it’s for inserting a literal control character).

                                                  1. 2

                                                        I use C-o, but it may only make sense with Dvorak as the keyboard layout. On the other hand, I tend to always have both hands on the keyboard.

                                                    1. 2

                                                      I use C-z.

                                                      There’s a huge discussion of that in this superuser question.

                                                      1. 2

                                                        This SU question may be related: https://superuser.com/q/74492/18192

                                                      1. 1

                                                            I’ve done some creative substitution work to create good file-based targets in the past and take advantage of Make’s laziness. The sentinel file, though, is a very nice hack that gets some of the benefit without much work. I appreciate that tidbit. As for the rest, I agree with the others that going all in on GNU and Bash can be helpful, but it cuts down on portability. People still do run BSD systems, after all.

                                                        1. 2

                                                          Writing this post I was made aware of the empty target, which I now think is the traditional name for this pattern. Though it’s not documented as being useful for rules that output multiple files.
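
                                                              For reference, the pattern (under either name) looks like this; the file and command names are invented, and recipe lines must start with a tab:

                                                              ```make
                                                              # The rule really produces many files; make only tracks the one
                                                              # touched stamp ("empty target"), so it knows when it last ran.
                                                              assets.stamp: $(ASSET_SOURCES)
                                                              	./generate-assets $(ASSET_SOURCES)
                                                              	touch $@
                                                              ```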

                                                        1. 1

                                                          I like how you carefully slotted in a potential solution with graceful degradation. I also hope some of the popular gopher clients pick up on this.

                                                          1. 25

                                                                Merging instead of rebasing doesn’t save you from creating a bad merge commit either, with or without merge conflicts. Whether you rebase or you merge, the final commit on top (i.e. the final snapshot of your files) will be in the same state. You’d get the same merge conflicts whether you rebase or you merge (possibly in a different order). It’s still your responsibility either way to make sure this commit is semantically correct – git doesn’t know your programming language, and line-oriented diffs, whether from merging or rebasing, can be wrong.

                                                                The article also makes the case that having the merge commit indicates that a whole batch of commits introduced a bug: you can just look at the merge commits and know which merge was the problem. But this still doesn’t indicate which commit in the batch was buggy, so you have the same problem, except it’s swept further under the rug. Throwing out a whole series of commits because of one bad commit in the batch seems like throwing the baby out with the bathwater to me.

                                                            1. 12

                                                                  Another reason I stay away from merge commits is the train-wreck graph of history. Try running git log --graph --oneline on any Google project (chromium, etc.) and trying to sort out the history visually. Often I find the tracks completely fill up my terminal and I have to scroll to the right in order to see what the commits are.

                                                              1. 7

                                                                Merging instead of rebasing doesn’t save you from creating a bad merge commit either, with or without merge conflicts. Whether you rebase or you merge, the final commit on top (i.e. the final snapshot of your files) will be in the same state.

                                                                That’s correct but it does make the merge point more obvious and explicit, which if the author is to be believed, makes it easier to untangle subtle errors of the type under discussion.

                                                                I don’t buy it though - I’ve been using Git as a release engineer for years and as an IC for years more and I can think of maybe 1 instance where such a bug was introduced but not immediately caught by the developer doing the merging.

                                                                That said one person’s experience does not make a thing true, so I’d be curious as to whether others have been bitten hard by this kind of subtle rebase induced bug?

                                                                1. 4

                                                                  I’ve had similar problems. Write some code, thoroughly test it, merge and commit. Later a problem is discovered. Didn’t I test for this? Unfortunately it’s not possible to recreate the exact artifact that was previously tested.

                                                                  Features interfere in complex ways. After rebase you can no longer untangle this feature from all the features in its new base.

                                                                  1. 3

                                                                    Unfortunately it’s not possible to recreate the exact artifact that was previously tested.

                                                                    This in no way contradicts your point, which I appreciate you chiming in with - but best practice with Git whenever you want to freeze a point in time is to use tags.

                                                                  2. 0

                                                                    My experience has been very close to this author’s. I’ve also run into issues with git blame when a feature is rebased, effectively hiding the true author of the code. Over my many years of git use in various sized organizations I’ve come to join Paul Stadig’s philosophy of “Thou Shall Not Lie”.

                                                                        It’s easy to customize a git workflow to speed along deployments and releases. Git’s first and most important role is the safety net. Keeping your history accurate is the safest way to keep that safety net strong.

                                                                        If you have challenges with git log when you have many branches, there are plenty of tools to help visualize that. Monorepos also make all of this much more complicated (I’m not a fan).

                                                                    1. 8

                                                                      I’ve also run into issues with git blame when a feature is rebased, effectively hiding the true author of the code

                                                                      Rebasing a feature branch doesn’t hide its author.

                                                                      1. 0

                                                                            It can when the history is rewritten by another user, especially when squashing. Squashing commits leaves only the author of the base, hiding both the other contributors and the identity of the squasher.

                                                                        1. 2

                                                                          Nope. Squashing a feature branch on a base only allows you to squash the commits of the feature branch into one another. It doesn’t allow you to squash them into the base commit. To lose commit authorship information, you’d need to very deliberately go outside normal rebasing commands like git rebase master.

                                                                          1. 1

                                                                                Squashing multiple commits into one is exactly what I’m describing. If a feature branch has commits from multiple authors, then when it is squashed to one commit only one author is listed. Whatever the “base” is that you’ve squashed down to is the remaining author unless you override it.

                                                                            More importantly, the purpose of my comment is to answer the above comment’s question “… I’d be curious as to whether others have been bitten hard by this kind of subtle rebase induced bug?” My answer is yes, and now I avoid the situation entirely by the choice to merge and keep accurate history.

                                                                            1. 1

                                                                              If a feature branch has commits from multiple authors

                                                                              Is this a regular occurrence? It certainly isn’t in any team I’ve ever worked in. If you regularly have to deal with feature branches where multiple people are committing, that points to a different issue: the team isn’t breaking up their work properly into discrete chunks.

                                                                              the purpose of my comment is to answer the above comment’s question “… I’d be curious as to whether others have been bitten hard by this kind of subtle rebase induced bug?”

                                                                              I think you really answered a different question; imho the original question was about when a bug might have been introduced, not who might have committed it:

                                                                              it does make the merge point more obvious and explicit, which if the author is to be believed, makes it easier to untangle subtle errors of the type under discussion.

                                                                              1. 1

                                                                                Is this a regular occurrence? It certainly isn’t in any team I’ve ever worked in.

                                                                                We all work in different environments and on different types of projects. Yes, this happens quite frequently in my industry.

                                                                                The awesome power of git is that we can do things in different ways to meet our individual needs. There doesn’t have to be a single “right” way.