1. 23

    I stopped signing stuff because I just couldn’t deal with gpg any more. At some point it broke for mysterious reasons I couldn’t figure out, and I just gave up. I’ve been wanting a better signing scheme for a long time.

    For anyone wanting to try it out:


    [user]
        signingKey = ~/.ssh/id_ed25519
    [gpg]
        format = ssh


    % git commit -am 'Sign me!' --gpg-sign
    % git log --format=raw

    commit 74d2eb36642937c31b096419fe882259572e42e3
    tree 7a6a8614e03d217dea76f28edbc6652666932df8
    parent 8c8db6f1bd0ec29cfffc1cf0d0f91f637e8fbd26
    author Martin Tournoij <martin@arp242.net> 1635225059 +0800
    committer Martin Tournoij <martin@arp242.net> 1635225059 +0800
    gpgsig -----BEGIN SSH SIGNATURE-----
     U1NIU0lHAAAAAQAAADMAAAALc3NoLWVkMjU1MTkAAAAg6w5WB1nhvFYmOIc/hxLj2dkuME
     4oQcQrLs1oQsRdZ68AAAADZ2l0AAAAAAAAAAZzaGE1MTIAAABTAAAAC3NzaC1lZDI1NTE5
     AAAAQDPTXV5wPb0Yzt0VaVpk5/83TKw5MklAb0DkQkVT99Ib+MwaTIirb1kG1m54akzfn+
     Bb3vV9YYRjjCHnie5ziwU=
     -----END SSH SIGNATURE-----

    Sign me!
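
    To verify such signatures locally, git also needs an allowed-signers file mapping identities to public keys; a minimal sketch (the path and email are placeholders for your own):

    ```ini
    [gpg "ssh"]
        ; each line of the file: <email> <public key>, e.g.
        ; you@example.com ssh-ed25519 AAAA...
        allowedSignersFile = ~/.ssh/allowed_signers
    ```

    With that configured, `git log --show-signature` and `git verify-commit HEAD` report on the SSH signature.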

    The gpg in a lot of the settings and flags is somewhat odd since you’re not using gpg at all, but I can see how it makes sense to group it there.

    It doesn’t really integrate well with GitHub, but that’s hardly surprising given that this feature is about five hours old 🙃

    1. 2

      You may find signify of interest. See this post from @tedu for elaboration.

      1. 2

        There’s also minisign, which has a few improvements. But can you use it with git (or email for that matter)? Because last time I checked you couldn’t. Nothing standing in the way in principle, but the tooling/integration just isn’t there.

          1. 1

            Ah nice, but it’s a bit too hacky for my taste to be honest 😅 Also somewhat hard and non-obvious to verify for other people (philosophical question: “if something is securely cryptographically signed but no one can verify it, then is it signed at all?”)

      2. 1

        Does it only support ed25519? I am still on rsa. Btw, your formatting is screwed due to markdown.

        1. 2

          Presumably it supports all key types; it just calls the external ssh binaries, the same as it does with gpg. There’s no direct gpg or ssh integration in git (as in, it doesn’t link against libgpg or libssh); everything is left up to the external tools. It’s just that I happen to have an ed25519 key.

          Looks like I forgot to indent some lines; I can’t fix the formatting now as it’s too late to edit 🤷

      1. 3

        Be careful to stay mindful of how radically different the work you do might be from other people’s.

        I use conda. I create a new environment for each data science project I work on, then delete those environments when that project is done. I sometimes use pip within those conda environments; conda supports that. I also use pip outside my conda environments, when the tool to be installed is a non-dependency to a specific project (and doesn’t already exist in my OS package manager, which is my strongly preferred way to install any system-wide application).

        Suppose I start a new GIS-related analysis and I want to create an isolated “virtual environment” and install shapely.

        Okay, so I’m now installing its dependency on GEOS, which is most definitely not a Python library. So, what’s the singular “right way” that every individual should install these tools in every circumstance? I’d suggest there is no such thing as a single “right way”.

        I want to do two things here that are important to me: install a C++ dependency to a Python library, and have it totally isolated into something functionally akin to a reproducible virtual environment that also leaves the rest of my projects (and system) alone.

        Now, I also want to share my Jupyter notebook analysis with my colleague so they can reproduce my results on their machine. I want to check one text file into my repo that fully specifies that environment, and I want my colleague to be able to create that environment with one command, on any operating system (including Windows and MacOS, since they’re not hip to the same development tools as me). And after they have reproduced my work, they won’t want any of my stuff on their system anymore. I want them to be able to delete it with one command, too. To really remove it from their system.
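
        As a sketch of that whole cycle with conda (the env name and versions here are just for illustration):

        ```shell
        # create an isolated environment; conda pulls in GEOS as a binary
        # dependency of shapely, with no system package manager involved
        conda create --name gis-demo --yes python=3.11 shapely

        # write a single spec file to check into the repo
        conda env export --from-history > environment.yml

        # a colleague reproduces the environment, then removes it, one command each
        conda env create --file environment.yml
        conda env remove --name gis-demo
        ```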

        We can kind of do all of this with pip, but… well… good luck to each of us on our competition for whose system goes longer before requiring a fresh OS reinstall. Heck, maybe my colleague is reproducing my results on MacOS, but I created the work on Linux. Say Apple pushes an update to Maps that somehow breaks my GIS project, and only my GIS project, and only on my colleague’s machine.

        I’m sure you’ll agree this will be a fun experience to resolve with my colleague. And this is the one on Mac, so at least I can start to theorize why it might break for them. That poor colleague using Windows is on their own.

        Now, on the other hand:

        conda is not how I would install any software in the context of needing some project-agnostic “user tool” on my local machine. I don’t want to have to activate an environment - even an automatic default one - to run shell commands I expect to just work. My tune might change if I switched to Windows, but I know nothing about that OS and would be reaching for the tool most familiar to me in order to become productive quickly, so I’m not a person with an informed opinion about that.

        And while deploying isn’t my area of expertise (so I might be speaking out of turn), this also is not how I would deploy any of my work into production. For example, I imagine installing GEOS via conda to work with PostGIS as part of the backend for a user-facing GIS platform is almost certainly the wrong way to be approaching the objective. This is where people are going to be looking at Nix and all the other things related to scalable and reproducible deployments.

        conda was created by people in the data science community who write a lot of Python that depends on a lot of non-Python. I don’t think Travis worked on it, but the people who did certainly work for him, so you know they intimately understand this use case. It is very, very, very good for this use case. Not perfect, but awesomely useful. Poor Travis has probably been answering people’s “Python” questions about NumPy for two or three decades. Cheers to his group of folks for making my life easier while also reducing their open source support burden.

        And to be perfectly clear, I don’t blame the people responsible for the Python packaging ecosystem that pip is a suboptimal solution when I need to write some Python code (which depends on some Python library, which has some numerical C or Fortran library deep in its dependency chain, all from one project that conflicts with my other project’s separate deep dependency tree of software spanning multiple languages and decades).

        In summary, I don’t think conda is for software engineers.

        1. 13

          That looks exactly how I’d imagine an OpenBSD user’s workspace to look: minimal, classy, entirely beautiful.

          My own desk is always a hopeless messy jumble, and has been my whole life. I don’t know when I realized that clutter was actively comfortable, but it is.

          1. 4

            I earnestly believe that this aesthetic simplicity & physical utilitarianism can be accessed with intentionality by learning to “think in OpenBSD” (similar to the titular character’s experience from the novel Ender’s Game, or as in the film Arrival); it could be something you or anyone else can aspire to emulate via your neuroplasticity and the daily effect of using this operating system.

            I’d love to see this studied as robustly as we’ve studied the effects of color in brand identities. I imagine OpenBSD to be the “IBM blue” of OS aesthetics.

            Whatever else you believe about OpenBSD and the alternative more popular desktop operating systems, it would be hard to dispute that OpenBSD disproportionately pursues a calm “internal coherency” given the project’s approach to normalized coding style, a smaller team of opinionated contributors, complete man pages distributed with the system as a single “source of truth”, the absence of bells and whistles that trigger the frailties of your mammalian brain, and an ethos approximating the original Thompson & Ritchie style Unix philosophy.

            1. 3

              Or you can be the person who, when someone asks “do you have X?”, reaches into a pile and unerringly finds it. Or sometimes says “No, but I know of something better.” It takes all sorts.

          1. 12

            And either the Rust standard library or possibly the Rust compiler (I’m not sure which) is smart enough to use a (slightly different) fixed-time calculation.

            That’s LLVM. It actually is surprisingly smart. If you have a sum for i from a to b, where the summation term is a polynomial of degree n, there exists a closed form expression for the summation, which would be a polynomial of degree n+1. LLVM can figure this out, so sums of squares, cubes, etc get a nice O(1) formula.
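
            As a sketch (the function name is mine), the kind of loop LLVM can collapse:

            ```rust
            // With optimizations on, LLVM's scalar evolution pass can replace this
            // O(n) loop with the closed-form polynomial n(n-1)(2n-1)/6, making it O(1).
            pub fn sum_of_squares(n: u64) -> u64 {
                (0..n).map(|i| i * i).sum()
            }
            ```

            Checking against the closed form: for n = 10 the loop sums 0² through 9², and 10·9·19/6 = 285.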

            Another good story with the same theme is https://code.visualstudio.com/blogs/2018/03/23/text-buffer-reimplementation. “You can always rewrite hot spots in a faster, lower level language” isn’t generally true if you combine two arbitrary languages.

            1. 1

              Thanks for the explanation, I’ll update the article.

              1. 1

                A fairer comparison would thus mean using LLVM on the Python code too (see: Numba). Given the example domain, I’d further be interested to know the speed difference between Rust on the CPU and Python on the GPU.

                1. 1

                  These are all toy examples; the point was never that Rust is faster, and as I mention, Rust can actually be slower than Cython. The updated article points out that the default compiler on macOS is clang, so you might get that optimization with Cython too.

              1. 1

                That’s odd, I thought I did a site search by both title and URL but it did not show up. Must have overlooked it somehow, my mistake.

              1. 25

                First off, I’m not out to belittle Ken Thompson’s efforts here. Writing an assembler, editor and basic kernel in three weeks is highly respectable work by any standard. It’s also a great piece of computer lore and fits Blow’s narrative perfectly - especially with Kernighan’s little quip about productivity in the end. Of course, we don’t know how “robust” Thompson’s software was at this stage, or how user friendly, or what kind of features it had. I’m going to boldly claim it would’ve been a hard sell today, even if it did run on modern hardware.

                Ooh, I can answer this! Ken Thompson’s first version of Unix was just the barebones he needed to run Space Travel, a game he wanted to port from Multics. It wasn’t anything close to what we’d recognize as a Unix.

                Despite that, I’m willing to bet a few bucks there are more people around today (including youngsters) who can program in C than ever before, and that more C and assembly code is being written than ever before.

                I was thinking the same thing. Maybe in relative numbers there are fewer people who understand the low level things, but the absolute numbers are magnitudes higher.

                1. 11

                  Maybe in relative numbers there are fewer people who understand the low level things, but the absolute numbers are magnitudes higher.

                  This is very powerful thinking and unlocks a lot of counter narratives to popular beliefs.

                  I remember when the Wii came out and people were worried that games would stop being “good” cuz so many casual games existed now (see also mobile game stuff). The ratio with sales was super weird but ultimately we were looking at bigger pie stuff.

                  I think a similar thing has happened with software sales as well relative to mobile applications

                  1. 3

                    Ooh, I can answer this! Ken Thompson’s first version of Unix was just the barebones he needed to run Space Travel, a game he wanted to port from Multics. It wasn’t anything close to what we’d recognize as a Unix.

                    Not only that, you can actually run it in SIMH’s PDP-7 emulator! The original source code is available, including the source for the game Space Travel. The ~3k-line kernel is bundled with ~18k lines of user-space programs (assembler, debugger, text editor, disk/file management utilities, some games, etc.). Maybe less barebones than you’d expect. To me this initial UNIX version is akin to a prototype to see if the approach would work (it later evolved and was refined into Research UNIX, of course).

                    1. 2

                      it wasn’t anything close to what we’d recognize as a Unix

                      Do you have any recommendations for philosophical/historical/narrative reading produced by these folks or their contemporaries? I’ve consumed plenty of their writing on the technical side, but most of the philosophical/historical/narrative accounts I’m aware of seem to have gone through the filters of other people involved with later/divergent parts of the historical trajectory (GNU, Linux, FSF, and so on).

                      1. 2

                        Have a look around on multicians.org. I’ve enjoyed some of Tom Van Vleck’s pieces like his history of electronic mail.

                        1. 4

                          The Unix-Haters Handbook http://web.mit.edu/~simsong/www/ugh.pdf

                          Lions’ Commentary on UNIX https://cs3210.cc.gatech.edu/r/unix6.pdf

                    1. 42

                      In my day it was called HACKING and it documented the code as it stood two years ago, if you were lucky.

                      1. 9

                        Where I’ve seen this done well, it has always been rolled into CONTRIBUTING as a lightweight place to point new developers toward any non-obvious logical entry points, rather than as a place to bother with articulating high-level architectural decisions.

                        That is to say, it firmly sticks to the “what” instead of the “why” and makes no pretense of being a comprehensive document. As you point out, the “why” is probably out of date, but “why” also doesn’t really matter to anyone who hasn’t already wrapped their brain around the whole mental model of the application.

                        No ready examples immediately jump to my mind, but I know I recently saw a good page in a go project noting the rough equivalent of a main() that was neither the primary entry point of the application itself nor a cleanly separated area of concern (i.e. module). One could argue this already suggests a poorly designed architecture, so I wish I had this example ready at hand - this is where I’d make some kind of argument about not letting “perfect” get in the way of “practical”, but of course I see the silliness of that point when I’m already speaking at a purely theoretical level.

                      1. 2

                        Python 2 support has been dropped by pip the Python Packaging Authority (PyPA) at the Python Packaging Index (PyPI), which is the default configuration for every distribution of pip I’m aware of. If you have critical dependencies on Python 2 packages and are unwilling to migrate to Python 3, set up your own package index (which is definitely more work than migrating to Python 3, but is a choice you have).

                        Another easier alternative would be to pull the libraries you depend on directly into the vcs for your legacy project.

                        1. 8

                          This is a change to pip, not to pypi. You can still use an older version of pip on Python 2 to install packages from PyPI. (That might change in the future, but it doesn’t seem to be under consideration in the maintainers threads on the issue.)

                          1. 2

                            That wasn’t what I read the change to mean, so absolutely needed your clarification. Cheers.

                          2. 3

                            Vendoring dependencies is pretty mechanical. Migrating to Python 3 is only partially mechanical. So I’m not sure why you would say the former is more work. It doesn’t seem like it to me. By far.
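
                            For instance, the mechanical part with pip alone can look roughly like this (paths are illustrative):

                            ```shell
                            # download distributions for everything in requirements.txt into ./vendor
                            pip download -r requirements.txt -d vendor/

                            # later, install strictly from the vendored copies, never hitting an index
                            pip install --no-index --find-links vendor/ -r requirements.txt
                            ```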

                            1. 1

                              I wrote the parenthetical statement in reference to setting up your own index, then went back to add the alternative option (of vendoring dependencies), so that was an unintended misstatement. Thanks for the clarification; you are completely correct. Original comment now reflects the correction above.

                              1. 2

                                Interesting, okay. Depending on circumstances, setting up your own index could also be easier than migrating to Python 3 though too. Migrating to Python 3 can be exceptionally difficult. I’ve lived through it.

                                1. 1

                                  I talked a friend through the decision at his $work, and they decided the site deployment wasn’t large enough to warrant going in that direction, since it presented an ongoing cost for onboarding future people and maintaining infrastructure. They ultimately pulled all libraries into their own repo, not even vendoring, after going through the process and discovering none were being actively maintained anyway.

                            2. 2

                              Python 2 support has been dropped by pip the Python Packaging Authority (PyPA) at the Python Packaging Index (PyPI)

                              I don’t think that this statement is correct. Pip dropping support for Python 2.7 doesn’t inherently have any implications for what packages can be uploaded or downloaded from PyPI. I’ve not heard a peep about dropping support for any version of Python on PyPI.

                              Vendoring your unmaintained Python 2.7 dependencies may be useful for other reasons: while you’ll still be able to use pip 20.3.x for a while, it will eventually atrophy, like everything else in the 2.7 ecosystem. Vendoring may ease any sustaining engineering you do, and it’ll help you avoid using an unmaintained client application. However, the version of pip shipped with your Linux distribution will doubtless continue to be supported (meaning: receive security updates to the TLS stack) for years, and PyPI is unlikely to break it.

                            1. 17

                              Looks like more and more people are realising that the next usability iteration for the terminal is seeing the result as you type. More applications keep implementing this workflow; shells will probably implement it in a general fashion in the years to come.

                              Some years ago I hacked together a small curses program that accepted a command with a placeholder and presented a prompt that would re-run the command with the new input on each keypress. I never published it because it was very hacky and quite dangerous if you’re not careful.
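
                              The core of such a tool is small; a minimal sketch of the re-run-on-input part, minus the curses UI (placeholder syntax borrowed from xargs/fzf, names are mine):

                              ```python
                              import shlex
                              import subprocess

                              PLACEHOLDER = "{}"

                              def run_preview(template: str, query: str) -> str:
                                  """Substitute the current input into the command template and run it."""
                                  # shlex.split plus shell=False is what keeps this from being *too*
                                  # dangerous, but the command still really runs on every call
                                  cmd = [arg.replace(PLACEHOLDER, query) for arg in shlex.split(template)]
                                  result = subprocess.run(cmd, capture_output=True, text=True, timeout=5)
                                  return result.stdout
                              ```

                              A curses loop would then call `run_preview` after each keypress and repaint the pane with the output.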

                              1. 7

                                This is very true for text editors as well IMO, which is why I use kakoune, which shows the incremental results as you perform complex combinations of actions or select based on a regex.

                                1. 3

                                  I think you can sort of do this with https://github.com/lotabout/skim#as-interactive-interface – perhaps even integrate that into the shell itself

                                  fzf may have a similar option

                                  the problem is with process spawning overhead in my opinion – doing it for every keystroke needs debouncing and at that point the UI starts to lag. If apps have native support for it they can do something more efficient
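
                                  Debouncing itself is cheap to add; a trailing-edge sketch (this doesn’t address the process-spawn cost, it just collapses bursts of keystrokes):

                                  ```python
                                  import threading

                                  def debounce(wait_seconds: float):
                                      """Trailing-edge debounce: in a burst of calls, only the last
                                      one runs, wait_seconds after the burst goes quiet."""
                                      def decorator(fn):
                                          timer = None
                                          lock = threading.Lock()

                                          def wrapped(*args, **kwargs):
                                              nonlocal timer
                                              with lock:
                                                  if timer is not None:
                                                      timer.cancel()  # drop the previously queued call
                                                  timer = threading.Timer(wait_seconds, fn, args, kwargs)
                                                  timer.start()

                                          return wrapped
                                      return decorator
                                  ```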

                                  1. 1

                                    Yes. It did essentially the same thing as what you linked. The UI doesn’t need to become unresponsive; text input is decoupled from external process execution.

                                    1. 2

                                      I don’t mean that the textbox itself becomes unresponsive, but rather that the preview has to endure the cost of process startup and starting from scratch, and depending on the operation being previewed that can be expensive. I have had this experience with the exact ag example… ag takes time to search things, but depending on the previous preview it may not have to search the entire space again.

                                      st is very good at dampening the impact but things can be better with a different architecture

                                  2. 2

                                    I’m interested to see what kinds of things Jupyter might inspire in shells. The notebook workflow mostly fits tasks with requirements halfway between an interactive shell and an executable file (as in exploratory data analysis and the like), but the concept has already made its way over to the text editor side (in VSCode you can use a magic comment command to delineate and execute individual code cells within a file to view output while still editing, as if it were a notebook). I wonder what that might conceptually look like if taken to the shell side instead of the editor side.
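
                                    For reference, that VSCode convention is just a comment marker: a `# %%` line in a plain .py file starts a new runnable cell (the cell contents here are made up):

                                    ```python
                                    # %% [markdown]
                                    # ## Exploratory step: load and summarize

                                    # %%
                                    data = [1, 2, 3, 4]
                                    total = sum(data)
                                    total

                                    # %%
                                    mean = total / len(data)
                                    mean
                                    ```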

                                    1. 0

                                      Aren’t you describing fish? :)

                                    1. 2

                                      Interesting overview of the Nix approaches, so thanks for sharing. My curiosity has been piqued by all the NixOS posts I’ve been seeing even though I don’t run it myself. Any reports from folks running a bunch of devices like the one from this post on their home network? What are you using them to do?

                                      My home router/firewall/dhcp/ipsec server (atom x86-64) and file/media/proxy/cache/print server (celeron x86-64) are OpenBSD, so when I bought a Beaglebone Black (ARMv7) to toy with, it was fun to go down a rabbit hole pretending I was @tedu (his 2014 post) to learn about diskless(8) and pxeboot(8) and how to netboot via uboot. This ended up being pure experimentation since the actual parallelized work I do at home is on a single beefy Linux workstation (hard requirement on Nvidia GPU for now) and I’m not a professional sysadmin. The BBB sits disconnected in a drawer, but the setup lives on as the mere handful of config line changes required to set up tftpd(8) on the file server and point dhcpd(8) to it from the router, so I gained a more complete understanding of those as a neat side effect of experimenting. At some point in the next couple years I’m going to want to play with a RISC-V SoC, but that’s going to mean looking at Linux again unless I magically become competent enough to write my own drivers.

                                      1. 8

                                        I just converted my last non-NixOS machine yesterday, so I’ll share my experience =]

                                        I currently have 5 machines running NixOS and deployed using NixOps (to a network called ekumen):

                                        • laptop, ThinkPad T14 AMD (odo)
                                        • workstation: Ryzen 9 3900X, 128GB RAM (takver)
                                        • compute stick: quad core Intel Atom, 2GB RAM (efor)
                                        • rpi: 3B+, 1 GB RAM (gvarab)
                                        • chromebox: i7, 16GB RAM (mitis)

                                        I set up the workstation and chromebox as remote builders for all systems, just as @steinuil did in the post. I’m using the rpi for running Jellyfin (music) and Nextcloud (for sharing calendars and files with my spouse), and setting up the chromebox to be an IPFS node for sharing research data. The laptop and workstation are using home-manager for syncing my dev environment configurations, but I do most of the dev/data analysis in the workstation (which has gigabit connections to the internet), and while the laptop is often more than enough for dev, my home connection is way too slow for anything network-intensive (so, it serves as a glorified SSH client =P)

                                        They are all wired together using zerotier, and services running in the machines are bound to the zerotier interface, which ends up creating a pretty nice distributed LAN.
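
                                        As an example of what that binding looks like declaratively, a hypothetical NixOS fragment using the real `services.openssh.listenAddresses` option (the zerotier-assigned address is made up):

                                        ```nix
                                        {
                                          # accept ssh only on the zerotier-managed address, not on every interface
                                          services.openssh.listenAddresses = [
                                            { addr = "10.147.17.5"; port = 22; }
                                          ];
                                        }
                                        ```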

                                        I don’t have my configs public (booo!), because I’ve not been too good at keeping secrets out of the configs. But @cadey’s posts are a treasure trove of good ideas, and I also enjoyed this post and its accompanying repo as sources of inspiration.

                                        1. 1

                                          I don’t really see the value nixops provides over nixos-rebuild, which can work over ssh.

                                          1. 1

                                            That’s a fair point. Part of using nixops was about exploring how to use it later for other kinds of deployment (clouds), and it is a bit excessive for my use case (especially since I use nixops to deploy locally in the laptop =P).

                                            A lot of my nix experience so far is seeing multiple implementations of similar concepts, but I also feel like I can refactor and try other approaches without borking my systems (too much).

                                        2. 2

                                          On the Pi from the post I run:

                                          • syncthing
                                          • Navidrome so I can listen to my music library on my phone
                                          • twkwk, a small program that serves my TiddlyWiki instance
                                          • synapse, a torrent client which is actually the Rust program I mentioned in the post
                                          • some SMB shares with Samba that serve the two drives I use for torrents and music
                                        1. 14

                                          I’ve been really tempted to buy a remarkable2. But the reviews I see say it’s great for note taking but not so great for just reading PDFs. Mostly I want to read PDFs. I’m still on the fence.

                                          1. 14

                                            As long as your PDFs don’t require color, it is 100% worth it. Definitely one of my favorite devices at the moment.

                                            1. 5

                                              Same. In the month or so I’ve had one, it hasn’t caused me a single frustration (and I’m the kind of person who gets annoyed at the user interfaces of my own Apple products). It works exactly as advertised. Anyone who thinks it might be worth the price tag should watch a third-party review video and check out the official and awesome-list projects. It has been a while since I’ve stayed this excited about a new device so long after buying it.

                                            2. 12

                                              I picked one up recently hoping that I could migrate a lot of my ebooks and pdfs to it. I don’t plan on returning it, but I wouldn’t recommend it.

                                              I was a huge fan of the Kindle DX, but I’ve managed to break the buttons on a couple, which renders them practically useless. I was on the fence with the first reMarkable device but figured I’d give the latest iteration a shot. I figured it’d be a good DX substitute. It’s not. I want to like it (the physical design is really good), but the software sucks.

                                              I have a large collection of documents (epub/pdfs) that I was looking forward to getting on the device. Largely a mix of books published in electronic formats from regular publishers (O’Reilly, Manning, PragProg, etc.) as well as a few papers and docs I’ve picked up here and there.

                                              First, the reMarkable desktop/mobile app that you have to rely on for syncing is a little wonky. Syncing between the device and mobile/desktop versions of the app works, but leaves a little to be desired. Second, I have yet to load a pdf or epub that isn’t brutally slow to navigate (just page by page). If the document has images or graphics (even simple charts and illustrations) it will affect navigation performance. Occasionally a document will load relatively quickly, and navigate reasonably well, only to slow down after a few page turns. Epubs tend to be a little more difficult to work with – particularly if you decide to change the font. All I have to compare this device to is my broken DX, which, everything considered, positively smokes the reMarkable.

                                              It’s usable. It works alright for PDFs, less so for epubs. On the positive side, the battery life is quite good.

                                              1. 3

                                                I agree with your analysis in most regards. Syncing a lot of ebooks and pdfs to it is not something at which it would excel by default. I have a large Calibre library, and I haven’t synced it over for that reason. However, it’s something I’m looking forward to investigating with KOReader, which supports the reMarkable.

                                                I haven’t experienced the lag that you talk about, but can understand that that would be bothersome – though I definitely have experienced the “wonkiness” of the companion apps.

                                                1. 1

                                                  My understanding is that epubs are converted to PDF before being synced? Is that actually the case?

                                                  1. 4

                                                    It renders the epub to pdf for display but that’s all in-memory. It’s still an epub on disk.

                                                    1. 1

                                                      I don’t know. I’ve got a couple of books that are both pdf and ePub, and the pdf version behaves a little better. You can also resize and change fonts for ePub docs, but not for PDFs.

                                                      1. 1

                                                        Along these lines, another interesting observation I’ve made has to do with the way some kinds of text get rendered. In particular, I’ve encountered epubs with code listings that render fine in other apps and on other devices, but render horribly on the remarkable2 device. Interestingly, in some of those cases I will also have a publisher provided PDF that renders just fine.

                                                        Further, epubs and PDFs are categorized differently in both the app and the device. With epubs you can change the justification, page margins, line spacing, fonts, and font size. With PDFs you have fewer options, but you do have the ability to adjust the view (which is great for papers since you can get rid of the margins).

                                                      2. 2

                                                        I don’t think so – from my playing around with ssh, there are definitely some epubs stored on the device. I actually think the browser extension generates epubs rather than pdfs, which was surprising.

                                                        1. 2

                                                          Huh. Cool. Hmmm. The real reason I shouldn’t get one is that I always fall asleep with my e-reader and it often bounces off my face.

                                                          1. 3

That’s a pro for the device – it weighs next to nothing. I’ve damn near knocked myself out dropping an iPad Pro on my head when reading in bed.

                                                            1. 1

For me, it’s more the fact that the Kobo then ends up falling onto the floor. I’m not crazy about that with a $120 device, so …

                                                    2. 7

                                                      I own Gen 1 and Gen 2. I love the simplicity and focus of the device. It’s an amazing… whiteboard.

                                                      Note taking is not suuuper great. Turns out marking up a PDF to take notes actually isn’t that great because the notes quickly get lost in the PDF. It’s not like in real life, where you can put a sticky note to jump to that page. The writing experience is fantastic though. I have notebooks where I draw diagrams/ideas out. I like it for whiteboarding type stuff.

                                                      Reading is terrible. I mean, it works. Searching is painfully slow. The table of contents doesn’t always show up (even though my laptop PDF reader can read the TOC just fine). When you do get a TOC, the subsections are flattened to the top level, so it’s hard to skim the TOC. PDF links don’t work. Text is often tiny, though you can zoom in. EPUBs appear to get converted to PDFs on the fly and their EPUB to PDF conversion sucks. Though, I’ve found doing the conversion myself in Calibre is way better.
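The Calibre route can be scripted, for what it’s worth. A minimal sketch using Calibre’s bundled `ebook-convert` CLI – the paper size is my guess at what suits the reMarkable screen, adjust to taste:

```shell
# Batch-convert every epub in the current directory to PDF with Calibre's
# command-line tool. Requires `ebook-convert` (ships with Calibre) on $PATH.
for f in ./*.epub; do
  [ -e "$f" ] || continue            # no epubs here, nothing to do
  ebook-convert "$f" "${f%.epub}.pdf" --paper-size a5
done
```

Then sync the resulting PDFs instead of the epubs, sidestepping the device’s on-the-fly conversion entirely.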

                                                      Overall, I like the device for whiteboarding. But it’s kinda hard to recommend.

                                                      1. 2

                                                        Marking up PDFs works better in color, since you can pick a contrasting ink color. I do it in Notability on my iPad Pro (which is also great for whiteboarding / sketching.)

                                                        I was tempted by reMarkable when the first version came out, but I couldn’t see spending that kind of money on something that only does note taking and reading. I’m glad it’s found an audience though, it’s a cool device.

                                                        1. 1

                                                          Turns out marking up a PDF to take notes actually isn’t that great because the notes quickly get lost in the PDF. It’s not like in real life, where you can put a sticky note to jump to that page.

                                                          So far the best experience I’ve seen for this is LiquidText on an iPad Pro. While you can write on the PDF as any other annotator, there’s also a lot of more hypertext type of features, like collecting groups of notes in an index, or writing separate pages of notes that are bidirectionally hyperlinked to parts of the document they refer to. Or do things like pull out a figure from a paper into a sidebar where you attach notes to it.

The main downside for me is that you do more or less have to go all-in on LiquidText. It supports exporting a workspace to flat PDFs, but if you used the hypertext features in any significant way, the exported PDFs can be very confusing with the lack of expected context.

                                                          1. 1

                                                            Agreed that it is hard to find notes. There should be a way to jump to pages that have notes on them (this is how Drawboard PDF works, for example).

                                                            1. 1

                                                              What is the advantage over drawing on a piece of paper or on a whiteboard, then taking a photo of what you’ve drawn, if needed?

                                                              1. 1

                                                                I tried paper note books, but I’m too messy and make too many mistakes. Erasing, moving, and reordering is hard on paper.

                                                                A whiteboard is pretty good for temporary stuff and erases better than paper. But, it can be a bit messy.

                                                                I also tried Rocketbook for a while. I got the non-microwaveable (yes you read that right) one. That was okay. A little meh for me.

                                                                And of course, you can’t read PDFs on any of these.

                                                          1. 15

                                                            For short and ephemeral text and images, check out the “note to self” feature of Signal. It appears as the name of one of your contacts. This requires your devices be linked, but approximates the lazy email-it-to-yourself approach with an added layer of reasonable privacy.

                                                            1. 3

Signal is my usual go-to. What I’m sending is often long untypable passwords, so I keep the disappearing messages set to 5 minutes as well.

                                                              1. 2

I use this feature all the time. It’s great for sending non-URL things to other devices. For URLs, I use Firefox’s “Send to Device” function, which works when you have browser sync enabled.

                                                                Edit to Add: Slightly OT, in F-Droid there is an app called Exif-Scrambler. It’s a wonderful tool for scrubbing metadata out of pictures. Share to Exif-Scrambler, then E-S will scrub metadata and present you with another share option, at which point I use Note to Self on Signal.

                                                                1. 2

                                                                  Yep! I’ve been using this too but it felt clunky still.

                                                                  1. 1

                                                                    How so? What would you change?

                                                                    1. 1

I don’t think there’s much I could change, but it isn’t as seamless as iOS Universal Clipboard or AirDrop.

                                                                  2. 1

                                                                    Does it bother you that Signal for desktop is not encrypted?

                                                                    1. 1

                                                                      I assume you mean “not encrypted at rest”? Doesn’t bother me personally (if you control my user account you have ~everything anyways).

                                                                  1. 13

                                                                    I’ve really enjoyed reading this blog over the last few weeks. He has a great perspective and explains the legal side well. Seems like there is an “Open Source Industrial Complex” where lots of money is made selling products and having conferences about “open source”.

                                                                    1. 5

You’ll hear people who work in the field joke about a “compliance-industrial complex”. I think that started back in the early 2000s, after big companies started permitting use of open source en masse. Salespeople for nascent compliance solutions firms would fly around giving C-level officers heartaches about having to GPL all their software. My personal experience of those products, both for ongoing use and for one-off due diligence, is that they’re way too expensive, painful to integrate, just don’t work that well, and only make cost-benefit sense if you ingest a lot of FUD. Folks who disagree with me strongly on other issues, like new copyleft licenses, agree with me here.

                                                                      That said, I don’t mean to portray what’s going on in the open source branding war as any kind of conspiracy. There are lots of private conversations, private mailing lists, and marketing team meetings that don’t happen in the open. But the major symptoms of the changing of the corporate guard are all right out there to be seen online. That’s why I walked through the list of OSI sponsors, and linked to the posts from AWS and Elastic. It’s an open firefight, not any kind of cloak-and-dagger war.

                                                                      1. 7

                                                                        Agreed. I’m getting increasingly tired by some communities’ (especially Rust’s) aggressive push of corporate-worship-licenses like BSD, MIT (and against even weak copy-left licenses like MPL).

                                                                        1. 17

                                                                          I’m saying this with all the respect in the world, but this comment is so far detached from my perception of license popularity that I wanna know from which niche of the tech industry this broad hatred of Rust comes from. To me it seems like one would have to hack exclusively on C/C++/Vala projects hosted on GNU Savannah, Sourcehut or a self-hosted GitLab instance to reach the conclusion that Rust is at the forefront of an anti-copyleft campaign. That to me would make the most sense because then Rust overlaps with the space you’re occupying in the community much more than, say, JavaScript or Python, where (in my perception) the absolute vast majority of OSS packages do not have a copyleft license already.

                                                                          1. 3

                                                                            Try shipping any remotely popular library on crates.io and people heckle you no end until they get to use your work under the license they prefer.

                                                                            Lessons learned: I’ll never ship/relicense stuff under BSD/MIT/Apache ever again.

                                                                            1. 2

                                                                              this broad hatred of Rust comes from

                                                                              Counter culture to the Rust Evangelism Strike Force: Rust evangelists were terribly obnoxious for a while, seems like things calmed down a bit, but the smell is still there.

                                                                              1. 1

                                                                                I think it’s beneath this site to make reactionary nonsense claims on purpose.

                                                                                1. 2

                                                                                  How is criticizing a (subset) of a group for their method of communication “reactionary”?

                                                                                  1. 1

                                                                                    I’m saying soc’s claim about Rust pushing for liberal licensing is nonsense and probably reactionary to the Rust Evangelism Strike Force if @pgeorgi’s explanation is true. My point is that “counter culture” is not an excuse to make bad arguments or wrong claims.

                                                                                    1. 2

                                                                                      OK, that makes a bit more sense.

                                                                                  2. 2

                                                                                    reactionary nonsense claims

                                                                                    like talking about some “broad hatred of Rust” when projects left and right are adopting it? But the R.E.S.F. is really the first thing that comes to my mind when thinking of rust, and the type of advocacy that led to this nickname sparked some notable reactions…

                                                                                    (Not that I mind rust, I prefer to ignore it because it’s just not my cup of tea)

                                                                              2. 7

                                                                                I won’t belabor the point, but I’d suggest considering that some of those project/license decisions (e.g. OpenBSD and ISC) may be about maximizing the freedom (and minimizing the burden) shared directly to other individual developers at a human-to-human level. You may disagree with the ultimate outcome of those decisions in the real world, but it would be a wild misreading of the people behind my example as “corporate worshipping”.

                                                                                As I have said before: “It’s important to remember that GNU is Not Unix, but OpenBSD userland is much more so. There isn’t much reason to protect future forks if you expect that future software should start from first principles instead of extending software until it becomes a monolith that must be protected from its own developers.”

                                                                                Not all software need be released under the same license. Choosing the right license for the right project need not require inconsistency in your beliefs about software freedoms.

                                                                                1. 6

The specific choice of MIT/Apache dual-licensing is so unprincipled and weird that it could only be the result of bending over backwards to accommodate a committee’s list of licensing requirements (it needs to be compatible with the GPL versions 2 and 3, it needs a patent waiver, it needs to fit existing corporate-approved license lists, etc). This is the result of Rust being a success-at-all-costs language in exactly the way that Haskell isn’t. Things like corporate adoption and Windows support are some of those costs.

                                                                                  1. 3

                                                                                    I can’t speak directly to that example, as I don’t write Rust code and am not part of the Rust community, but it would not surprise me if there were different and conflicting agendas driving licensing decisions made by any committee.

                                                                                    I do write code in both Python and Go (languages sharing similar BSD-style licensing permissiveness), and my difficult relationship to the organization behind Go (who is also steward of its future) is not related in any way to how that language has been licensed to me. Those are a separate set of concerns and challenges outside the nature of the language’s license.

                                                                            1. 25

                                                                              Data tech is a massive and intertwined ecosystem with a lot of money riding on it. It’s not just about compute or APIs, that’s a fairly small part.

                                                                              • What file formats does it support?
                                                                              • Does it run against S3/Azure/etc.?
                                                                              • How do I onboard my existing data lake?
                                                                              • How does it handle real-time vs batch?
                                                                              • Does it have some form of transactions?
                                                                              • Do I have to operate it myself or is there a Databricks-like option?
                                                                              • How do I integrate with data visualization systems like Tableau? (SQL via ODBC is the normal answer to this, which is why it’s so critical)
                                                                              • What statistical tools are at my disposal? (Give me an R or Python interface)
                                                                              • Can I do image processing? Video? Audio? Tensors?
                                                                              • What about machine learning? Does the compute system aid me in distributed model training?

                                                                              I could keep going. Giving it a JavaScript interface isn’t even leaning in to the right community. It’s a neat idea, for sure, but there’s mountains of other things a data tech needs to provide just to be even remotely viable.

                                                                              1. 7

                                                                                Yeah this is kinda what I was going to write… I worked with “big data” from ~2009 to 2016. The storage systems, storage formats, computation frameworks, and the cluster manager / cloud itself are all tightly coupled.

                                                                                You can’t buy into a new computation technology without it affecting a whole lot of things elsewhere in the stack.

                                                                                It is probably important to mention my experience was at Google, which is a somewhat unique environment, but I think the “lock in” / ecosystem / framework problems are similar elsewhere. Also, I would bet that even at medium or small companies, an individual engineer can’t just “start using” something like differential dataflow. It’s a decision that would seem to involve an entire team.

                                                                                Ironically that is part of the reason I am working on https://www.oilshell.org/ – often the least common denominator between incompatible job schedulers or data formats is a shell script!

                                                                                Similarly, I suspect Rust would be a barrier in some places. Google uses C++ and the JVM for big data, and it seems like most companies use the JVM ecosystem (Spark and Hadoop).

                                                                                Data tech also can’t be done without operators / SREs, and they (rightly) tend to be more conservative about new tech than engineers. It’s not like downloading something and trying it out on your laptop.

                                                                                Another problem is probably a lack of understanding of how inefficient big data systems can be. I frequently refer to McSherry’s COST paper, but I don’t think most people/organizations care… Somehow they don’t get the difference between 4 hours and 4 minutes, or 100 machines and 10 machines. If people are imagining that real data systems are “optimized” in any sense, they’re in for a rude awakening :)

                                                                                1. 4

I believe andy is referring to this paper, if anyone else is curious.

                                                                                  (And if you weren’t let me know and I’ll read that one instead. :] )

                                                                                  1. 4

                                                                                    Yup that’s it. The key phrases are “parallelizing your overhead”, and the quote “You can have a second computer once you’ve shown you know how to use the first one.” :)


                                                                                    The details of the paper are about graph processing frameworks, which most people probably won’t relate to. But it applies to big data in general. It’s similar to experiences like this:


                                                                                    I’ve had similar experiences… 32 or 64 cores is a lot, and one good way to use them all is with a shell script. You run into fewer “parallelizing your overhead” problems. The usual suspects are (1) copying code to many machines (containers or huge statically linked binaries), (2) scheduler delay, and (3) getting data to many machines. You can do A LOT of work on one machine in the time it takes a typical cluster to say “hello” on 1000 machines…
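To make the “one good way to use them all is with a shell script” point concrete, here’s a tiny sketch of fanning work across every local core with plain `xargs -P` – the `echo` stands in for whatever per-item job you’d otherwise ship to a cluster:

```shell
# Fan per-item work across all local cores: no scheduler delay, no code
# distribution, no cluster startup cost. Each of the 8 items is handled
# by its own short-lived shell, up to one per core at a time.
demo=$(mktemp -d)
seq 1 8 | xargs -n1 -P "$(getconf _NPROCESSORS_ONLN)" sh -c \
  'echo "item $1 handled by PID $$" > "$0/out.$1"' "$demo"
ls "$demo"/out.* | wc -l    # 8 result files, produced in parallel
```

No “parallelizing your overhead” here: the startup cost per item is a single `fork`/`exec`, which is exactly why a single beefy box often wins the COST comparison.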

                                                                                  2. 1

                                                                                    That’s a compelling explanation. If differential dataflow is an improvement on only one component, perhaps that means that we’ll see those ideas in production once the next generation of big systems replaces the old?

                                                                                    1. 2

                                                                                      I think if the ideas are good, we’ll see them in production at some point or another… But sometimes it takes a few decades, like algebraic data types or garbage collection… I do think this kind of big data framework (a computation model) is a little bit more like a programming language than it is a “product” like AWS S3 or Lambda.

                                                                                      That is, it’s hard to sell programming languages, and it’s hard to teach people how to use them!

                                                                                      I feel like the post is missing a bunch of information: like what kinds of companies or people would you expect to use differential dataflow but are not? I am interested in new computation models, and I’ve heard of it, but I filed it in the category of “things I don’t need because I don’t work on big data anymore” or “things I can’t use unless the company I work for uses it” …

                                                                                  3. 2

                                                                                    The above is a great response, so to elaborate on one bit:

                                                                                    What statistical tools are at my disposal? (Give me an R or Python interface)

                                                                                    It’s important for engineers to be aware of how many non-engineers produce important constituent parts of the data ecosystem. When a new paper comes out with code, that code is likely to be in Python or R (and occasionally Julia, or so I’m hearing).

                                                                                    One of the challenges behind using other great data science languages (e.g. Scala) is that there may be an ongoing and semi-permanent translation overhead for those things.

                                                                                    1. 1

All of the above, plus: does it support tight security and data governance?

                                                                                    1. 11

Built a desktop machine (Ryzen 5600) and installed OpenBSD on it because FreeBSD didn’t support my wireless and ethernet cards. A quick online search showed that OpenBSD supports the ethernet card, and I was pleasantly surprised to find it supports my wireless card too, without any extra hassle.

                                                                                      Now, I need to set up cwm so that it is closer in behaviour to a tiling window manager, and install rakubrew so that I can build Raku versions easily.

                                                                                      After that’s done, I’ll need to restore some data from a backup. What I am most interested in is to copy over my ~/.ssb folder so that I can get back on the Scuttleverse after a break of a few months.
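For the cwm tweaking, a few hypothetical lines for `~/.cwmrc` to start from (the key choices here are invented; the function names are from cwm(1)):

```
# ~/.cwmrc -- nudge cwm toward tiling-ish behaviour
gap 0 0 0 0                 # reserve no screen edges
bind-key 4-h window-htile   # maximize current window horizontally, tile the rest below
bind-key 4-v window-vtile   # same idea, vertically
bind-key 4-f window-fullscreen
```

Restart cwm (or log out and back in) for changes to take effect.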

                                                                                      1. 3

                                                                                        Welcome to the fun! Don’t overlook afterboot(8) or that your existing preferred wm might be there in packages (though cwm is great too).

                                                                                        My own workstation is the only system not on OpenBSD (some current projects require my nvidia GPU), but always interested to read postmortems from new switchers on recent hardware.

                                                                                        1. 2

                                                                                          Hey thanks for the link! Checking it out.

                                                                                          I hope to write something after a few weeks of using this as my daily driver.

                                                                                      1. 27

                                                                                        With respect and love, @pushcx, this ain’t it.

                                                                                        In my experience moderating internet forums there are precisely two kinds of people that are interested in moderating:

                                                                                        • people motivated by a deep desire to improve the communities they participate in and who plan to moderate with the lightest possible touch in order to grow the community and allow it to express its norms and standards
                                                                                        • petty fucking assholes who plan to wield their power to grind axes, antagonize enemies, and reshape the community as they themselves see fit

                                                                                        These posts bring out both of these kinds of people, but unfortunately you’ll be lucky if you get a 1:10 ratio of good to bad.

                                                                                        Now, you’ve said you plan to announce the new moderator slate. That means, at best, you and @irene plan to try to separate the wheat from the chaff. My $.02: don’t. Pick ten people that each of you have interacted with and think you can live with as moderators. Then ask them directly. If you are lucky you’ll get two of them to accept and only reluctantly so. Then you’ll have found good moderators.

                                                                                        1. 11

                                                                                          I spent the last year trying that, though I contacted nine users rather than ten. None were both interested and available, though one maybe came around a few days ago and I believe applied earlier today.

                                                                                          1. 3

                                                                                            None were both interested and available, though one maybe came around a few days ago and I believe applied earlier today.

                                                                                            I hope like hell they did, and wish you the absolute best of luck finding a second (or more).

                                                                                          2. 9

I am deeply distressed by the direction these comment threads took for many reasons, but not least because I had believed, perhaps naively, that this community was distinct in how its members practiced a cautious self-moderation: avoiding making – or even casting votes behind – statements they lacked expert authority to make, knowing those statements could (and likely would) be seen by actual experts who could be engaged in earnest discussion.

                                                                                            I post under my real name because it means I have to stand by the things I say, and it forces me to pause to consider the effect my words will have on the people I say those things to. Posting anonymously or pseudonymously removes the first obligation one has to oneself, but nothing removes the second obligation one has to others.

                                                                                            I hope from the bottom of my heart that these threads are uniquely a product of the generally elevated blood pressure of all people at this particular moment, and is not representative of anything else.

                                                                                            I have many things to say about the subject matter discussed in these threads, but this is not the place I will say them.

                                                                                            1. 6

I can’t agree any more strongly with this post. This (a request for moderators) is the way to get toxic moderators. @owen is absolutely right about how to get good moderators: pick them based on their existing community actions and reach out to them. Most of those you reach out to are wonderful, sane people, so they will decline… continue down the list. I have moderated communities for a couple of decades and the path laid out by @owen is the only one I have had any success with.

                                                                                              1. 3

                                                                                                You are extremely correct. As a member of the third category, I understand that moderation is a powerful tool which I would almost certainly misuse and with which I should not be trusted. And I am simply not a nice person. But, with that said, I at least have an excellent record of antifascist posting.

                                                                                                1. 2

                                                                                                  This is good advice, and in other communities I’ve been a part of it was exactly how moderators were selected. Some served for 5-10+ years, all part of the same initial friend group / community that seeded it.

                                                                                                  I did assume that @pushcx would still have the ultimate say in getting rid of any false positives, so to speak, considering we have a public moderation log and it would be somewhat obvious if someone was abusing their power, so it might be OK still.

                                                                                                1. 7

                                                                                                  The lack of what this proposal describes as package-level enforcement at construction time has been a source of much verbosity and a frustrating need to write obtuse validation code in my own Go projects (or rather, to remember the need to).

                                                                                                  I am completely unqualified to comment on this solution as it pertains to programming language design, but I can affirm that the motivation reflects something I’ve found wanting. So, thank you for sharing.

                                                                                                  1. 10

                                                                                                    I search lobsters for that thing I saw once a few months ago that I thought was really cool and kept open in a tab on one of my phones for a while until I ended up closing it thinking “I’ll remember how to get that.”

                                                                                                    1. 4

                                                                                                      Given how completely your description fits my own browsing behavior, it might be worth sharing that I’ve been largely successful at sticking to a low-friction system without relapsing to this previous approach.

                                                                                                      I liberally use the “save” feature here, the “star” feature on GitHub, and the “favorite” feature on the orange site. The latter two are publicly viewable.

                                                                                                      I make no categorization or organizational attempt upon saving items, nor do I pretend that there is any specific plan to return to those items. These lists merely serve as a smaller subset of items I am able to manually look over when I want to recall something I once found of interest. It has been highly effective for me, even given how low my bar is for adding something to those lists; they’re not curated, merely a log of my gut reaction that something might be of interest.

                                                                                                    1. 45

                                                                                                      Is this a paid position?

                                                                                                      1. 60

                                                                                                        Rather the opposite for you, I have a stack of past-due therapy bills you’re delinquent on.

                                                                                                        1. 7

                                                                                                          There’s always the option for cathartic revenge by assigning him the Victor Frankenstein hat.

                                                                                                          1. 10

                                                                                                            Please, let’s be reasonable here. He should delegate assigning the hat to one of the new mods.

                                                                                                      1. 24

                                                                                                        It is safe to say that nobody can write memory-safe C, not even famous programmers that use all the tools.

                                                                                                        For me, it’s a top highlight. My rule of thumb is that if the OpenBSD guys sometimes produce memory-corruption or null-dereference bugs, then there is very little chance (next to none) that an average programmer will be able to produce secure, rock-solid C code.

                                                                                                        1. -1

                                                                                                          My rule of thumb is that if the OpenBSD guys sometimes produce memory-corruption or null-dereference bugs, then there is very little chance (next to none) that an average programmer will be able to produce secure, rock-solid C code.

                                                                                                          Why do you think “the OpenBSD guys” are so much better than you?

                                                                                                          Or if they are better than you, where do you get the idea that there isn’t someone that much better still? And so on?

                                                                                                          Or, say you actually don’t know anything about programming: why would you try to convince anyone else of anything directly from a place of ignorance? Can your gods truly not speak for themselves?

                                                                                                          I think you’re better than you realise, and could be even better than you think is possible, and that those “OpenBSD guys” need to eat and shit just like you.

                                                                                                          1. 24

                                                                                                            Why do you think “the OpenBSD guys” are so much better than you?

                                                                                                            It’s not about who is better than whom. It’s more about who has what priorities: the OpenBSD guys’ priority is security, at the cost of functionality and convenience. Unless this is average Joe’s priority as well, statistically speaking the OpenBSD guys will produce more secure code than Joe does, because they focus on it. Joe just wants to write an application with some features; he doesn’t focus on security that much.

                                                                                                            So, since the guys who focus on writing safe code sometimes produce exploitable code, average Joe will certainly do so as well.

                                                                                                            If that weren’t true, it would mean that the OpenBSD guys’ security skill is below average, which I don’t think is the case.

                                                                                                            1. 5

                                                                                                              OpenBSD guys’ priority is security at the cost of functionality

                                                                                                              I have heard that claim many times before. However, in reality I purely use OpenBSD for convenience. Having sndio instead of pulse, having no-effort/single command upgrades, not having to mess with wpa_supplicant or network manager, having easy to read firewall rules, having an XFCE desktop that just works (unlike Xubuntu), etc. My trade-off is that for example Steam hasn’t been ported to that platform.

                                                                                                              So, since the guys who focus on writing safe code sometimes produce exploitable code, average Joe will certainly do so as well.

                                                                                                              To understand you better: do you think average Joe will both use Rust and make fewer mistakes? Also, do you think average Joe will make more logic errors with C or with Rust? Do you think average Joe will use Rust to implement curl?

                                                                                                              I am not saying that you are wrong (I am not a C fan, nor against Rust; quite the opposite, actually), but I wonder what you base your assumptions on.

                                                                                                              1. 3

                                                                                                                I’d also add that there is deep & widespread misunderstanding of the OpenBSD philosophy by the wider developer community, who are significantly influenced by the GNU philosophy (and other philosophies cousin to it). I have noticed this presenting acutely around the role of C in OpenBSD since Rust became a common topic of discussion.

                                                                                                                C, the existing software written in C, and the value of that existing software continuing to be joined by new software also written in C, all have an important relationship to the Unix and BSD philosophies (most dramatically the OpenBSD philosophy), not merely “because security”.

                                                                                                                C is thus more deeply connected to OpenBSD than it is to projects descended from the “GNU is Not Unix” philosophy. Discussions narrowly about C and Rust as they relate to security are perfectly reasonable (and productive), but OpenBSD folks are unlikely to join those discussions just to disabuse non-OpenBSD users of their notions about OpenBSD.

                                                                                                                I’ve specifically commented about this subject and related concepts on the orange site, but have learned the lesson presumably already learned many times over by beards grayer than my own: anyone with legitimate curiosity should watch or read the OpenBSD developers’ own words to learn what they care about. Once you grok it, you will see that going to that source (not my interpretation of it) is itself a fundamental part of the philosophy.

                                                                                                                1. 1

                                                                                                                  If that weren’t true, then it would mean that OpenBSD guys security skill is below average, which I don’t think is true.

                                                                                                                  At least not far above average. And why not? They’re mostly amateurs, and their bugs don’t cost them money.

                                                                                                                  And Joe just wants to write an application with some features, he doesn’t focus on security that much.

                                                                                                                  I think you’re making a straw man. OpenBSD people aren’t going to make fewer bugs using any language other than C, and comparing Average Joe to any Expert just feels sillier and sillier.

                                                                                                                  1. 3

                                                                                                                    What’s your source for the assertion ‘They’re mostly amateurs’?

                                                                                                                    1. 2

                                                                                                                      What a weird question.

                                                                                                                      Most OpenBSD contributors aren’t paid to contribute.

                                                                                                                      1. 3

                                                                                                                        What a weird answer. Would you also argue that attorneys who accept pro bono work are amateurs because they’re not paid for that specific work?

                                                                                                                        Most of the regular OpenBSD contributors are paid to program computers.

                                                                                                                        1. 1

                                                                                                                          because they’re not paid for that specific work?

                                                                                                                          Yes. In part because they’re not paid for that specific work, I refuse to accept dark_grimoire’s insistence that “if they can’t do it nobody can”.

                                                                                                                        2. 1

                                                                                                                          You seem to be using the word “amateur” with multiple meanings. It can mean someone not paid to do something, aka “not a professional”, but when I use it in day-to-day conversation I mean something closer to “hobbyist”, which says little about ability. Saying they are amateurs, and thus do not write “professional” code, also implies anyone can submit whatever patch they want and have it accepted, which is very far from the truth.

                                                                                                                          I assume with reasonable certainty that you have never contributed to OpenBSD yourself. I am not a contributor either, but whenever I look at the source code, it looks better than much of what I have seen in “professional” work. This may be due to the focus on doing simple things, and also to very good review by maintainers. And as you said, the risk of losing money may be a driver for improvement, but it is certainly not the only one (and not at all for some people).

                                                                                                                          1. 1

                                                                                                                            You seem to be using the word “amateur” with multiple meanings.

                                                                                                                            I’m not.

                                                                                                                            as you said, the risk of losing money may be a driver for improvement, but it is certainly not the only one

                                                                                                                            So you do understand what I meant.

                                                                                                                    2. -1

                                                                                                                      nailed it