Threads for msingle

  1. 23

    “Why does Thunderbird look so old, and why does it take so long to change?”

    If that actually does bother anyone, can’t they just use a different email client? Wouldn’t it be better to keep Thunderbird working with its existing UI, for those of us that have gotten used to it over the past 20 years?

    1. 16

      I think Thunderbird has lost its priorities.

      No one actually cares how modern an email client looks. No one is wearing their email client on their head as a fashion statement. No one is reading their email in a closet because they can’t stand the idea that someone would peek over their shoulder and see them using UI from 2004.

      Thunderbird needs some reorganization and usability updates but modernizing the design is not one of them.

      Having worked at plenty of software companies, I find rewrites are often just large feature requests cobbled together under the guise of rewriting the base of the application as the only way to achieve the cluster of new features. Is the old software really that bad, or has it grown in a disorganized way, and/or do you just not understand it?

      1. 22

        I dunno, I won’t use a mail app that looks weird and old. I’d consider using Thunderbird if it looked good and worked better than Mail.app.

        1. 3

          real talk: Thunderbird looks better than it did a couple years back! I booted into it for the first time in 4 or so years and was like “oh this is pretty good”

          Granted, I’m on KDE and before that I was using it on a Mac. But I feel like it’s pretty good for “email power usage”.

          There’s a legit complaint about how the panes are laid out by default, but I think that you can get pretty far by just moving around existing components.

        2. 11

          UI and UI conventions for email have been pretty continuously evolving since we first got GUI mail clients. And that’s without considering that UI conventions in general evolve over time; an application that doesn’t match the UI conventions of the surrounding system is likely to see itself replaced by one that does.

          Which is why keeping the same UI for decades on end is not something that usually works in mass-market software.

          1. 14

            I can confidently say I’ve never stopped using a useful piece of software because they hadn’t updated their UI to keep up with whatever fashion trend is currently hip. On the other hand, I have (repeatedly) stopped using software after a UI overhaul screwed up my existing comfort with the software, opening the door for me to look around and see what other software, while equally alien, might have better functionality.

            Messing with the look and feel of your application is an exercise in wagering your existing users in pursuit of new ones. In commercial projects, that can often make sense. In FLOSS, it rarely does, as your existing users, particularly those that will be the most annoyed by changes to their workflows, are also the most likely to contribute to the project, either through thorough bug reports or pull requests.

            1. 15

              I think it is important to consider the possibility that you are not representative of the mass-market software user. Or, more bluntly: if you were representative of the mass-market software user, the market would behave very differently than what we observe.

              1. 14

                I dunno, I don’t think Thunderbird users in general are representative of the mass-market software user. Most of those are just using Gmail or Outlook.com. Desktop MUA users are few and far between these days and use them specifically because they have specialized mail management requirements.

                1. 6

                  I don’t see where APhilisophicalCat claims to be “representative of the mass-market software user”. I rather would interpret their “In commercial projects, that can often make sense. In FLOSS, it rarely does” as disagreeing with you on whether Thunderbird is “mass-market software”. (I don’t use Thunderbird and I claim no stake in this.)

                  1. 1

                    I think the people building Thunderbird think of it as, or at least aspire to it being, a mass-market application.

                    (standard disclaimer: I am a former employee of the Mozilla Corporation; I have not worked there since the mid-2010s; when I did work there the stuff I worked on wasn’t related to Thunderbird; I am expressing my own personal opinion based on reading the public communications of the Thunderbird team rather than any sort of lingering insider knowledge or perspective)

              2. 1

                I don’t think things evolved, I think we just have a bunch of ways to stylize a list and where buttons to other lists might go. There are trends but at the end of the day you’re reading a list of things.

                The point I was trying to make is that sometimes rewrites are just shifting complexity and that you can satisfy both crowds by working on an old tech stack. Not that there isn’t a market for whatever-UI-trend email apps.

                1. 3

                  I don’t think things evolved, I think we just have a bunch of ways to stylize a list and where buttons to other lists might go.

                  I remember when Gmail first came out, and introduced a broad audience to the switch from email clients that present things in terms of individual messages, to clients that present things in terms of conversations.

                  From an underlying technical perspective this may feel like nothing – just two ways of styling lists, why does it matter? – but from a user interface and experience perspective it was a gigantic shift. It’s rare these days to see an email client or interface that still clings to the pre-Gmail message-based UI conventions.

                  The same is true for other UI conventions like labeling/tagging, “snooze” functions, etc. etc. Sure, in some sense your inbox is still a list or at most a list of trees, but there are many different ways to present such a thing, and which way you choose to present does in fact matter to end users.

                  1. 1

                    Exactly and there isn’t one crowd; you should aim to appease both. Even if Gmail started a new trend.

              3. 5

                I think a lot of people don’t like using “ugly” software. Definitely matters more to nontechies than it does to techies, I think.

                1. 3

                  even techies will look elsewhere if the app gives you the vibe of something that seems to be from the Windows 2000 era and thus probably has other problems too (let’s say scaling)

                  But current Thunderbird on the desktop looks fine?!

                  1. 4

                    There’s a certain group (small?) that likes these old interfaces though. Enough that things like SerenityOS are quite popular. Or maybe it’s just me.

                    1. 2

                      I can appreciate a very technical but functional UI where you can find everything you need, but it doesn’t look that fresh. And then there is also the “jankiness” factor, like Handbrake, which looks very janky, but exposes all ffmpeg configuration flags as a UI. In my experience there is a big divide between applications that just look old but provide a lot of functionality, even looking janky at first - and apps that simply got thrown together in a short time and weren’t updated to keep up with modern requirements. One example of the latter is looking at F-Droid applications, where a very old-looking app can be a good indicator that it has never been updated to modern Android permission requirements. Or that your desktop application just doesn’t support scaling and is either tiny on high-DPI or blurry - god forbid you move it between different DPI displays.

                      1. 1

                        Yup, that’s why Sylpheed/Claws were considered examples of a great UI for desktop email: https://www.claws-mail.org/screenshots.php

                    2. 2

                      Techies are just as, if not more, aesthetically conscious when it comes to software than non-techies. They just have different aesthetic preferences.

                    3. 4

                      I agree with the above. I’d much prefer it if this announcement were about fundamental things related to the internals of Thunderbird, not the chrome.

                      • Random slowness due to background processes
                      • Weird display issues when loading email folders which haven’t been opened for a while
                      • No ability to import/export to other email formats
                      • Search and indexing – serious improvements here would be very welcome

                      I say this as a long-time Thunderbird user, who loves the app and hopes to continue using it (my Thunderbird profile is 11 years old). Don’t fix what’s not broken, but do fix what is.

                    4. 6

                      If that actually does bother anyone, can’t they just use a different email client?

                      Name one other usable, open-source mail client that runs on Windows.

                      Looking old covers a lot of things. There are a bunch of things in Thunderbird that are annoying and probably hard to fix without some significant redesign. Off the top of my head:

                      • It sometimes gets confused by resolution changes (when you plug in an external monitor or use Remote Desktop) and you end up with a window that’s too small to resize and have to restart the app.
                      • It does a load of blocking things on the main thread, which can cause the UI to freeze in exciting ways.
                      • It has modal dialogs for a bunch of things, which freeze other parts of the UI.
                      • The settings interface is hard to find and hard to navigate: different settings are split between different items in different top-level menus.
                      • The way the text widget is built makes it impossible to switch between plain and rich text compose if you change your mind part way through writing.

                      It’s also not clear to me how well the HTML email renderer view is sandboxed. Apple’s Mail.app opens emails in separate sandboxed renderer processes; I’m not sure to what extent Thunderbird can take advantage of the corresponding functionality in Firefox, because it’s designed to isolate tabs and all of Thunderbird’s data lives in one tab.

                      1. 1

                        Sylpheed? But now we’re talking about super-esoteric software. I can’t imagine the user who uses Sylpheed and thinks “Thunderbird isn’t usable enough”.

                      2. 3

                        Familiarity is a usability feature. And I’m sure most UX people are aware of this on some level, but it’s rare to see it articulated.

                        1. 4

                          Thank you!

                        1. 3

                          Transformers. It’s literally a transformative technology, and I’m really curious to understand how it works, and why it performs so much better than almost every preceding method.

                          1. 2

                            What does “transformers” mean here? I know it is not Autobots vs. Decepticons ;-) so what sub-field is that in?

                            1. 3
                              1. 1

                                Just a wild guess, but maybe they’re referring to Monad Transformers?

                            1. 2

                              I feel like @mperham might have some worthwhile input as a developer who is both financially successful and supportive of open source. Any ideas?

                              1. 6

                                I agree with the principles of the argument, which is why I’ve been spending more time looking at htmx.

                                1. 5

                                  HTMX may converse in form data requests and HTML responses, but for this article, eliminating JSON from form submissions is a means to eliminating JavaScript, not an end unto itself. HTMX still requires JavaScript.

                                  1. 3

                                    Of course. I’m mentioning it, not because it avoids JavaScript, but because it encourages using form data (since one of the patterns with htmx is to use the same form with and without JSON/JavaScript). Lots of JS frameworks bias you away from form data.

                                1. 2

                                  I’ve found the git reflog to pretty much always have the info that I need to get out of trouble. There seem to be quite a few stars on GitHub, so not trying to put this project down. But legitimately, what does this give someone who can use the reflog to reverse unwanted changes?

                                  1. 2

                                    What if you didn’t commit those changes and just wrote to the file and then undid them with your editor? This has happened to me before, though I’ve just used Neovim’s tree history to go back to my changes.

                                    1. 1

                                      I think the target audience (judging by the author’s comment about reflog) is people who can’t or don’t want to use the reflog. Seems legitimate enough.

                                      1. 8

                                        Originally, my reasoning was that it covers cases where you simply didn’t commit often enough. reflog won’t help at all if there’s no Git object. In hindsight, the prospect of never using reflog again is a sweet one.

                                    1. 6

                                      Intriguing idea, but the installation is pretty rough at the moment. This is totally understandable as it’s only been out in the wild for three days, but it exceeded my curiosity-to-effort threshold.

                                      By default it needs both Homebrew and Ports on Mac. I’m not going to install Ports just to get libcrypto++, so I manually installed it in /usr/local, but after adding the install path to the CFLAGS/LFLAGS in the root Makefile it still failed to find libcrypto at a later stage. It turns out the compiler arguments are hard-coded into the source code. I noped out at that point.

                                      1. 4
                                        1. 2

                                          Have you considered making an issue? I’m not trying to give you work - I can make a stub issue if it’s not objectionable to you.

                                          1. 1

                                            I considered it, but assumed they would already be aware. If you think it’s worthwhile though, go for it.

                                          2. 2
                                          1. 9

                                            In C, this feature is called Bit Fields. It’s been part of the language since the 1970s.
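
                                            For anyone who hasn’t bumped into them, here is a minimal C sketch (the struct and field names are invented for illustration, not taken from the article):

                                            #include <stdio.h>
                                            #include <stdbool.h>

                                            // Hypothetical color write mask as a struct of 1-bit fields.
                                            struct ColorWriteMask {
                                                bool red   : 1;
                                                bool green : 1;
                                                bool blue  : 1;
                                                bool alpha : 1;
                                            };

                                            int main(void) {
                                                struct ColorWriteMask mask = {0};
                                                mask.blue = true;                // set the blue bit
                                                if (mask.blue)
                                                    puts("blue enabled");
                                                // Note: &mask.blue is invalid C; you cannot take the address of a
                                                // bit-field member, which is where Zig's version differs.
                                                return 0;
                                            }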

                                            1. 9

                                              Zig’s bitfields are a bit better, though. You can take the address of a bitfield. Andrew wrote a blog post about the feature a few years ago: https://andrewkelley.me/post/a-better-way-to-implement-bit-fields.html

                                              1. 3

                                                That’s an interesting design decision. You can’t pass the address of a bool bitfield to a function expecting a bool pointer, because the bit offset is part of the type.

                                                1. 2

                                                  That’s true. But this allows any variable or struct member to be passed by reference, which is important for writing generic functions that are polymorphic over argument types.

                                                2. 2

                                                  That’s a great link: a list of all the problems with C bitfields and how Zig fixes them.

                                                3. 3

                                                  At first I thought this, and then I read the article and thought “oh, they mean like Pascal sets”, and then I read it again and realized that no, they really are just describing an API that uses a struct of bitfields rather than an integer.

                                                  Now obviously a struct of bitfields is superior to an untyped uint, but it lacks the flexibility that comes from getting confused about which int means what :D

                                                  I think that the intended benefit of the API being an int is twofold: with structs it is very easy to do things that result in them being passed in memory even when they fit in registers (clang even has an attribute you can apply to a struct to force register passing), and with an int you can very easily do set operations like unions, intersections, etc. Whether that trade-off is worth it I think depends on the use case, but I think we can agree that it would be nicer if we could have set enums in C. There’s no reason for Zig not to have them as it isn’t confined to whatever can get through a standards body.
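
                                                  For what it’s worth, a rough sketch of the set-style operations you get for free with the integer-mask approach (the mask names are made up, not from the article):

                                                  #include <stdint.h>
                                                  #include <stdio.h>

                                                  // Illustrative mask constants in the usual integer-flag style.
                                                  enum {
                                                      MASK_RED   = 1u << 0,
                                                      MASK_GREEN = 1u << 1,
                                                      MASK_BLUE  = 1u << 2,
                                                  };

                                                  int main(void) {
                                                      uint32_t a = MASK_RED | MASK_BLUE;
                                                      uint32_t b = MASK_BLUE | MASK_GREEN;

                                                      uint32_t both   = a | b;                 // union
                                                      uint32_t common = a & b;                 // intersection
                                                      uint32_t only_a = a & ~b;                // difference
                                                      int has_blue    = (a & MASK_BLUE) != 0;  // membership test

                                                      printf("%u %u %u %d\n", (unsigned)both, (unsigned)common,
                                                             (unsigned)only_a, has_blue);
                                                      return 0;
                                                  }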

                                                  1. 2

                                                    Zig should extend the bitwise operations to work on structs and arrays of ints. That would eliminate the objection that you can’t do unions/intersections on bitfield structs.

                                                    If you have this language extension in Zig, then Pascal-style enum sets become a coding idiom, rather than a new builtin language feature. You could write a helper function (using comptime Zig) that takes an enum type as an argument, finds the largest enum value in the type using @typeInfo, then returns the appropriate array type to use as a set.

                                                    There are many precedents for extended bitwise operators, all the way from APL in the 1960’s and Fortran 90 (where scalar operations are extended to work on arrays), to vector instructions in modern CPUs that would directly support this extension.

                                                  2. 1

                                                    From the article:

                                                    mask |= WGPUColorWriteMask_Blue; // set blue bit

                                                    This all works, people have been doing it for years in C, C++, Java, Rust, and more. In Zig, we can do better.

                                                    1. 4

                                                      Exactly, they say in C we use integers with various mask bits, and then propose a “superior” solution in Zig. Which is to just use bitfields, which have existed in C, C++, [Object] Pascal, … forever.

                                                      The point I think @doug-moen is making is that the authors are claiming that the masked integer is an inherent requirement of C, etc., and then that Zig can do better because of this special Zig-only feature, pretending that the identical feature does not exist in C, and ignoring that maybe there are other reasons that this (terrible :D) style of API exists.

                                                      1. 7

                                                        Except bitfields in C are bristling with footguns, most importantly that the mapping to actual bits is implementation-dependent, which makes bitfields incompatible with any C API that defines bits using masks, or any hardware interface.

                                                        From the GCC docs:

                                                        The order of allocation of bit-fields within a unit (C90 6.5.2.1, C99 and C11 6.7.2.1).

                                                        • Determined by ABI.

                                                        The alignment of non-bit-field members of structures (C90 6.5.2.1, C99 and C11 6.7.2.1).

                                                        • Determined by ABI.

                                                        Having an actual usable implementation of bitfields is awesome for Zig.

                                                        1. 2

                                                          Is Zig’s packing of bitfields independent of the ABI?

                                                          And why does it matter unless the bitfield leaves your program’s address space? (Just about anything other than byte arrays leaving your address space is ABI-dependent. Why single out bitfields?)

                                                          1. 1

                                                            It matters because if you’re interfacing with nearly any existing C API, which specifies bits using integers with masks, you cannot declare a C bit-field compatible with that API without knowing exactly how your compiler’s ABI allocates bits.

                                                            1. 2

                                                              Yes. How is that different from a struct?

                                                              1. 1

                                                                You know exactly how it’s laid out, because that’s defined by the platform ABI. There is no mystery here.

                                                                1. 1

                                                                  Not a mystery, but highly platform-specific. So if you’re interfacing with anything using explicit bit masks, that makes it a really bad idea to use in cross-platform code, or in code you want to survive a change in CPU architecture. As a longtime Apple developer I’ve been through four big architecture changes in my career — 68000 to PowerPC to PPC 64-bit to x86 32/64 to ARM 32/64 — each with a different ABI. And over time I’ve had to write code compatible with Windows, Linux and ESP32.

                                                            2. 1

                                                              Yes, the platform defines the ABI for that platform and the compiler is expected to follow that ABI. So bit fields in C/C++ can be (and are) used in plenty of APIs.

                                                              I get that you may like Zig, and prefer Zig’s approach to specifying things, but that doesn’t mean you get to spout incorrect information. The platform ABI defines how structs are laid out. It defines the alignment and size of types. It defines the order of function arguments and where those arguments are placed. So claiming bitfields are footguns, or that they are unusable because they follow the platform ABI, is like saying that function calls are footguns as the registers and stack slots used in a call are defined by the platform ABI.

                                                              If Zig is choosing to ignore the platform ABI, then Zig is the thing that is at fault if an API doesn’t work. If there is an API outside of Zig that uses a bit field then Zig will need some way to say that bits should be laid out in the manner specified by the platform ABI.

                                                              1. 2

                                                                I’m not actually a Zig fan (too low level for me) but I like its treatment of bitfields.

                                                                You seem to be missing my point. A typical API (or hardware interface) will define flags as, say, a 32-bit int, where a particular flag X is stored in the LSB. You can’t represent that using C bit-fields in a compiler- or platform-independent way, because you don’t know where the compiler will put a particular field. Each possible implementation will require a different bit-field declaration. In practice this is more trouble than it’s worth, which is why APIs like POSIX or OpenGL avoid bit-fields and use integers with mask constants instead.

                                                                Yes, if you declare your own bit-field based APIs they will work. They’re just not interoperable with code or hardware that uses a fixed definition of where the bits go.
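
                                                                To make that concrete, a small sketch of the mismatch (the flag and struct here are invented, not from any particular API):

                                                                #include <stdint.h>

                                                                // An API defined in the usual mask style: bit 0 (the LSB) means "enable".
                                                                #define API_FLAG_ENABLE ((uint32_t)1u << 0)

                                                                // Tempting, but not portable: the C standard leaves the allocation order
                                                                // of bit-fields within a unit implementation-defined, so nothing
                                                                // guarantees that 'enable' lands in bit 0. One ABI may allocate the
                                                                // first-declared field at the low end of the unit, another at the high end.
                                                                struct ApiFlags {
                                                                    unsigned int enable   : 1;
                                                                    unsigned int reserved : 31;
                                                                };

                                                                // Portable interop therefore sticks to the integer plus the mask
                                                                // constants that the API itself defines.
                                                                static inline int is_enabled(uint32_t flags) {
                                                                    return (flags & API_FLAG_ENABLE) != 0;
                                                                }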

                                                                1. 1

                                                                  You can’t represent that using C bit-fields in a compiler- or platform-independent way, because you don’t know where the compiler will put a particular field.

                                                                  No, that is all well defined as part of the platform-specified ABI. You know exactly where they go, and exactly what the packing is. Otherwise the whole concept of having an API is broken. POSIX and GL use words and bit fields because many concepts benefit from masking, or to make interop with different languages easier - the API definition is literally just “here’s an N-bit value”.

                                                      1. 3

                                                        On the technical side: I’m working through Effective C.

                                                        I remain scared of C programming and I’d like to not be: so far I’m finding this a really solid way to learn C fundamentals without spending three chapters on “hello world”.

                                                        I’m also reading Le otto montagne, albeit very slowly as I’m learning Italian and I have to work with a dictionary. Despite the slow going I’m enjoying the description of a childhood spent in the rural, mountainous areas of Italy.

                                                        When I can’t manage those, I’m re-reading Gideon the Ninth because the third book in the series is out soon. Space Necromancers! Need I say more?

                                                        1. 2

                                                              I have the same feelings about C, please update as you make progress!

                                                          And I agree, Gideon the Ninth … is wild! - in a completely nutso originally good way.

                                                          1. 2

                                                            And I agree, Gideon the Ninth … is wild! - in a completely nutso originally good way.

                                                            I got it for my birthday back in June. Just finished the second one, looking forward to the third as well!

                                                        1. 3

                                                          Hey! Recently this was shared and it does a good job of explaining many things that helped with SOC2 certification, but also many things that just make sense for security: https://fly.io/blog/soc2-the-screenshots-will-continue-until-security-improves/

                                                          I’m in a similar situation to you myself, but much further along (it’s been 3 years now), and this article is definitely a gold mine; it has many links to other articles that are also awesome.

                                                          The things that I think make sense are the basics:

                                                          • business continuity and disaster recovery (backups, periodically tested, failover tests, HA, …)
                                                          • IAM and access control for the operators (is the DBA able to wipe the DB all by himself? If yes, can you prove it was him and not a hacker?)
                                                          • network access (firewalling, bastion-like aka Teleport, VPN-like aka Tailscale)
                                                          • security scans (periodic vulnerability scanning of your Infra)
                                                          • training (for the whole company: phishing training, etc.) …

                                                          I think stuff like YubiKeys is very interesting, but more in the longer run. The thing that helped me a lot is reading the CERT reports from big companies. They basically detail the main threats they see, and most of the time those are phishing, no firewalling, no regular patching of systems, etc. Once you cover all the big threats listed, it makes sense to have a risk-based approach to understand what your biggest risk is and how to cover it.

                                                          Happy to talk over PM if you want :)

                                                          1. 2

                                                            Another great article (I’m noticing a trend towards some of the things I’m worried about and things SOC covers - not that I want SOC, but general business practices to keep in mind).

                                                            Your bullet points are great! I’ll definitely be making a to-do list from them.

                                                            1. 1

                                                              network access (firewalling, bastion-like aka Teleport, VPN-like aka Tailscale)

                                                              I’ll throw Twingate out here as worth a look for people on small teams or who want something simple to set up that just works. I’m in a small engineering team using AWS, and Twingate has been magical for me. I was staring down trying to set up the AWS VPN thing, but with Twingate I was up and running in under an hour, and it has been very low maintenance and very simple to use. We’re on a free tier (up to 5 users; we have only 3 engineers).

                                                              The way I’m using it is we have instances in a VPC in AWS (databases, web server etc…) and I just deployed a dirt-cheap EC2 instance (a t2.micro at $8/m IIRC) running the Twingate AMI into the VPC. It connected itself to Twingate’s backend and from there all I have to do is go into the Twingate UI and create entries for the DNS records or IP addresses I want to expose from inside the VPC. Engineers install Twingate on their laptop, authorise via Google SSO (no SSO tax!) and that’s it. I can securely connect to my DB for admin without exposing it to the public internet (only egress is required). This probably isn’t amazing for anyone who knows what they’re doing, but I love how simple it was for me to setup.

                                                              I’ve no affiliation with them, I swear :)

                                                            1. 6

                                                              Keep it as simple as you possibly can. Everything you add is a potential for intrusion. That of course includes any cloud/third-party software/service and also the VPN. Don’t accidentally open ways in just to save yourself three seconds a day. You use a password manager and consider hardware tokens with that trade-off in mind, so don’t do the opposite in other scenarios.

                                                              Write docs and keep them up to date! Yes, you are small, so you don’t need big, formal documentation, but have some kind of document, be it a Google Doc thing or simply a repo with markdown or plain text, where you document both what you do and why. This is especially important for “one time” things or things that you rarely do. It doesn’t have to be fancy documentation; copy and paste, cues, etc. are fine.

                                                              A typical scenario of messing up early is changing things and forgetting about something that leaves a way in.

                                                              Make it a habit to call for important stuff rather than email, to reduce the likelihood of attacks on that front.

                                                              For Azure: Why exactly do you use that as a small company? It feels like either you wanna go for a more complete solution, Heroku style (depending on what you need) or a simpler solution (a vserver somewhere). Don’t just use some cloud service if you don’t have a full time person with dedicated knowledge in the field or it eventually will cause issues, as things progress. Also since you say “We are moving…” what’s the rationale?

                                                              Use some form of Single Sign-On, and preferably only use one way to authenticate. You are likely to need more than one, so a password manager makes sense, but don’t let that be the reason you make all sorts of logins.

                                                              Make sure you have proper off-boarding in place as early as possible. That largely involves keeping a list of who can do what where. Make sure you keep it up to date, so you can properly restrict access. Not because you fear wrong-doing, but because it’s yet another way in for an attacker.

                                                              And to re-iterate: Make sure you keep everything as simple as possible. Don’t increase attack surface on a whim. Try out new things on a separate server/network/cloud project/… Do not be tempted to just try it with anything in production; that includes desktops/laptops.

                                                              Have a threat model! You are small, so it is simple. It’s also psychologically important. There are so many young/small companies that spend months on some mostly theoretical security issue (in a scenario where they are already breached and one system containing some password to decrypt some information is magically not accessible to the attacker) while their front gate is completely open. It makes you feel good, because you are working on security, and yes, you should eventually get there, but get your priorities straight by analyzing what you have and where attackers get in.

                                                              Don’t assume the likeliest way in is someone finding a 0-day in OpenSSH or exploiting the OS’s network stack using ICMP to ruin a small company. But also don’t assume that “nobody will know the IP of this system”. Don’t work by hiding things, don’t practice security by obscurity.

                                                              So in short: Keep your attack surface small, have people know what they are doing. Practice security, rather than security theater.

                                                              1. 1

                                                                I like the thoughts on documentation. Now we just have to work more on developing the habit.

                                                                As for Azure: Our software is a .NET/SQL Server stack and we were going to use O365 anyway. We’re migrating away from a datacenter to Azure for a couple of reasons: our infrastructure was handled by a different org when my company was part of a larger company, and we can eventually trim down our infra needs to be better suited for our crowd.

                                                                Simplicity was one of my motivations for a lot of this job, but it’s nice to hear it reinforced in a security context.

                                                              1. 8

                                                                This article is framed toward “one day we’d like to pass a SOC2 audit”, but is also a good intro to some practices you should be doing, if you aren’t already. For 2FA I personally wouldn’t worry too much about forcing people to use hardware keys until you’ve dealt with other more low-hanging fruit – a TOTP authenticator app on the user’s phone is fine for most use cases and threat models.

                                                                1. 1

                                                                  This article is great.

                                                                  We’re going to use 2FA, but I think one of the reasons I was thinking about a hardware key was to have a backup that didn’t depend entirely on 2 personal phones (if I’m on vacation and the other person breaks their phone…. then what?).

                                                                1. 22

                                                                      The repo owner, John MacFarlane, is the primary developer of Pandoc. Having to wrestle with Markdown (and multiple flavors, no less) led him to develop CommonMark (which is some small tweaks from Markdown, but with a spec!).

                                                                  Which is all to say this is probably pretty well thought out, but take a look.

                                                                  1. 1

                                                                        The folks behind rr then developed Pernosco. I always thought it was neat, but I don’t program in languages where I could take advantage of it.

                                                                    1. 6

                                                                        Host here. Before interviewing Ron I would never have thought they would just leave a REPL running on a spacecraft!

                                                                      1. 3

                                                                        Looking forward to this episode!

                                                                        The space + homoiconic language combination makes me want an episode with Chuck Moore. That could be an interview for the ages!

                                                                        https://en.m.wikipedia.org/wiki/Charles_H._Moore

                                                                        Edit: actually I think the history of Forth goes back further than the Wikipedia article states.

                                                                        https://www.forth.com/resources/forth-programming-language/

                                                                        In 1965, he moved to New York City to become a free-lance programmer. Working in Fortran, Algol, Jovial, PL/I and various assemblers, he continued to use his interpreter as much as possible, literally carrying around his card deck and recoding it as necessary.

                                                                        Weird to think of carrying around your programming language in a briefcase!

                                                                        1. 1

                                                                            I have been wondering how to do a Forth episode. Is Chuck Moore still active in any capacity?

                                                                          1. 1

                                                                              Maybe? He has talked with some Forth groups recently (https://m.youtube.com/watch?v=dI0soDMg28Q&t=357s).

                                                                            1. 1

                                                                                It looks like there are a few interviews from the past couple of years, but he is getting up there in years. I think part of the reason he wrote colorForth to use text color semantically was that his eyes weren’t in great shape, and that was the late ’90s/early ’00s.


                                                                              Edit:

                                                                              Great moment from an interview from a couple years ago:

                                                                              https://www.youtube.com/watch?v=Qq0d6k7oi_A&t=3356s

                                                                        1. 3

                                                                          A lot of the comments upstream are conflating “preventing two similar identifiers with different names” and “resolving two similar identifiers to the same keyword/variable/type,” which is unfortunate. Most of the pro-insensitivity comments boil down to “it keeps me from making mistakes by having two variables/properties/whatever with similar but not identical names and getting confused” (is that really such a problem?) but that benefit can be easily kept without also forcibly coalescing all similar identifiers.

                                                                          1. 3

                                                                            that benefit can be easily kept without also forcibly coalescing all similar identifiers

                                                                            And? In what situation could you possibly want fooBar and foo_bar to be different things?

                                                                            1. 9

                                                                              Not arguing for either side here, but your question made me think of words like in_box, is_land, turn_table, high_light, or take_out.

                                                                              1. 4

                                                                                To expand on the point @mqudsi seems to be making and @jibsen’s interesting examples, I think there are just many kinds of identifier. How much fooBar and fOO_BAR “look the same” is subjective. It depends upon fonts (how big ‘_’ looks). How much to leverage looking different to denote kind is also subjective. One does not always have an IDE on hand when interpreting code. How often that happens is also subjective. Even with an IDE, redundant visual cues can often lessen confusion - how much so, also subjective. What kinds confuse the most is also subjective.

                                                                                    Type is one important aspect. FORTRAN’s old “identifiers starting with [I-N] are integers” rule is the original Hungarian notation alluded to earlier. There are many aspects, though, and Nim has maybe more than most (at least packages, modules, const, let, var, func, proc, iterator, generics, template, macros). Is a global search/replace change of such a redundant cue a pain? It can be. Is the trade-off worth it? Also hard to know.

                                                                                Nim itself recommends leading capitals for type names. LOUD_CASE for constants is all over the stdlib. Caps used to be like radio operator alphabets. Low-res devices did not select LOUD_CASE randomly. It was viewed by many as CLEAR_CASE. (I’d agree it might be “clarity wasted on constant-hood” whose origins may be tied to the hackiness of the C preprocessor.) Probably before it had good IDE tools, Nim used to put a ‘T’ prefix in front of type names and a ‘P’ in front of pointers.

                                                                                In the Ada case, it was viewed as not possible to render lowercase on some US DoD devices. So, HW limits took away case sensitivity as a choice. Many I know think of case insensitivity, even that of CP/M|DOS heir Windows file systems, as a throw back to ancient technology. There is usually a just so story to support the choice in the modern age when the “real” reason may simply be backward compatibility to the 1970s.

                                                                                Various studies try to assess all this subjective stuff in the population. Besides being very sensitive to subpopulations studied, I think most confuse “interactive” with “more static reading” contexts (in both file names and in programming). Most text is read much more than it is written which is at least partly why TAB-completion is popular.

                                                                                In short, I don’t think the “it all depends”/subjectivity should be very contentious yet it often is in Nim conversations. This is partly what leads me to think of the Nim community as a biased pool of evaluators and why I tried to include a broader community like Lobste.rs. Implementing an “import time” feature with something that spills over to wh_Ile also seems bad.

                                                                                1. 2

                                                                                  There have been times I’ve been in a code base and seen that 1 naming convention was used for 1 set of objects (records from the database or unprocessed data) and a second convention was for another set of data that was conceptually related.

                                                                                  It maybe wasn’t the best code, but it at least worked and made sense.

                                                                                  1. 2

                                                                                    I wasn’t clear/you misunderstood. I was saying you could pick one and be forbidden from using the other without allowing both and coalescing them. That would prohibit confusing names but force stylistic continuity, enable codebase grepping, etc.

                                                                                    1. 2

                                                                                      You can forbid underscores in identifiers, which disables snake_case (I actually suggested that in the linked issue and got a lot of downthumbs). However, it won’t solve the situation of itemid versus itemId versus itemID, and I’m pretty sure distinguishing these is an AI-complete problem.

                                                                                    2. 1

                                                                                      The linked thread highlights some cases with difficulties wrapping C libraries which have unavoidable collisions when following upstream naming.

                                                                                      1. 3

                                                                                        Okay. I prefer to rename wrapped identifiers anyway.

                                                                                  1. 7

                                                                                    How will you ensure that you can still build zig from sources in the future?

                                                                                    1. 27

                                                                                      By forever maintaining two implementations of the compiler - one in C, one in Zig. This way you will always be able to bootstrap from source in three steps:

                                                                                      1. Use system C compiler to build C implementation from source. We call this stage1. stage1 is only capable of outputting C code.
                                                                                      2. Use stage1 to build the Zig implementation to .c code. Use system C compiler to build from this .c code. We call this stage2.
                                                                                      3. Use stage2 to build the Zig implementation again. The output is our final zig binary to ship to the user. At this point, if you build the Zig implementation again, you get back the same binary.

                                                                                      https://github.com/ziglang/zig-bootstrap

                                                                                      1. 7

                                                                                        I’m curious, is there some reason you don’t instead write a backend for the Zig implementation of the compiler to output C code? That seems like it would be easier than maintaining an entirely separate compiler. What am I missing?

                                                                                        1. 2

                                                                                          That is the current plan as far as I’m aware

                                                                                          1. 1

                                                                                                    The above post says they wanted two separate compilers, one written in C and one in Zig. I’m wondering why they don’t just have one compiler written in Zig that can also output C code as a target. Have it compile itself to C, zip up the C code, and now you have a bootstrap compiler that can build on any system with a C compiler.

                                                                                            1. 2

                                                                                              In the above linked Zig Roadmap video, Andrew explains that their current plan is halfway between what you are saying and what was said above. They plan to have the Zig compiler output ‘ugly’ C, then they will manually clean up those C files and version control them, and as they add new features to the Zig source, they will port those features to the C codebase.

                                                                                              1. 2

                                                                                                I just watched this talk and learned a bit more. It does seem like the plan is to use the C backend to compile the Zig compiler to C. What interests me though is there will be a manual cleanup process and then two separate codebases will be maintained. I’m curious why an auto-generated C compiler wouldn’t be good enough for bootstrapping without manual cleanup.

                                                                                                1. 7

                                                                                                  Generated source code usually isn’t considered to be acceptable from an auditing/chain of trust point of view. Don’t expect the C code generated by the Zig compiler’s C backend to be normal readable C, expect something closer to minified js in style but without the minification aspect. Downloading a tarball of such generated C source should be considered equivalent to downloading an opaque binary to start the bootstrapping process.

                                                                                                  Being able to trust a compiler toolchain is extremely important from a security perspective, and the Zig project believes that this extra work is worth it.

                                                                                                  1. 2

                                                                                                    That makes a lot of sense! Thank you for the clear and detailed response :)

                                                                                                  2. 2

                                                                                                    It would work fine, but it wouldn’t be legitimate as a bootstrappable build because the build would rely on a big auto-generated artifact. An auto-generated artifact isn’t source code. The question is: what do you need to build Zig, other than source code?

                                                                                                    It could be reasonable to write and maintain a relatively simple Zig interpreter that’s just good enough to run the Zig compiler, if the interpreter is written in a language that builds cleanly from C… like Lua, or JavaScript using Fabrice Bellard’s QuickJS.

                                                                                                    1. 1

                                                                                                      Except that you can’t bootstrap C, so you’re back where you started?

                                                                                                      1. 2

                                                                                                        The issue is not to be completely free of all bootstrap seeds. The issue is to avoid making new ones. C is the most widely accepted and practical bootstrap target. What do you think is a better alternative?

                                                                                                        1. 1

                                                                                                          C isn’t necessarily a bad choice today, but I think it needs to be explicitly acknowledged in this kind of discussion. C isn’t better at being bootstrapped than Zig, many just happen to have chosen it in their seed.

                                                                                                          A C compiler written in Zig or Rust to allow bootstrapping old code without encouraging new C code to be written could be a great project, for example.

                                                                                                          1. 5

                                                                                                            This is in fact being worked on: https://github.com/Vexu/arocc

                                                                                              2. 1

                                                                                                Or do like Golang. For bootstrap you need to:

                                                                                                1. Build Go 1.4 (the last one made in C)
                                                                                                2. Build the latest Go using the compiler from step 1
                                                                                                3. Build the latest Go using the compiler from step 2
                                                                                              3. 3

                                                                                                Build the Zig compiler to Wasm, then run it to cross-compile the new compiler. Wasm is forever.

                                                                                                1. 11

                                                                                                  I certainly hope that’s true, but in reality wasm has existed for 5 years and C has existed for 50.

                                                                                                  1. 2

                                                                                                    The issue is building from maintained source code with a widely accepted bootstrapping base, like a C compiler.

                                                                                                    The Zig plan is to compile the compiler to C using its own C backend, once, and then refactor that output into something to maintain as source code. This compiler would only need to have the C backend.

                                                                                                    1. 1

                                                                                                      I mean, if it is, then it should have the time to grow some much needed features.

                                                                                                      https://dl.acm.org/doi/10.1145/3426422.3426978

                                                                                                    2. 1

                                                                                                      It’s okay if you don’t know because it’s not your language, but is this how Go works? I know there’s some kind of C bootstrap involved.

                                                                                                      1. 4

                                                                                                        The Go compiler used to be written in C: Go 1.4 was the last release implemented in C, and from 1.5 on the compiler is written in Go. If you are setting up an entirely new platform (and not cross-compiling), I believe the recommended steps are still to get a C compiler working, build Go 1.4, then update from 1.4 to the latest release.

                                                                                                    3. 2

                                                                                                      How do we build C compilers from source?

                                                                                                      1. 3

                                                                                                        Bootstrapping a C compiler is usually much easier than bootstrapping a chain of some-other-language compilers.

                                                                                                        1. 4

                                                                                                          Only if you accept a C compiler in your bootstrap seed and don’t accept a some-other-language compiler in your seed.

                                                                                                          1. 3

                                                                                                            Theoretically. But from a practical point of view? Yes, there are systems like Redox (Rust), but in most cases a C compiler is an inevitable piece of the puzzle (the bootstrapping chain) when building an operating system. In such cases I would, when focused on simplicity, rather have a language that depends only on C (which I already have) than one that depends on a sequence of previous versions of its own compiler. (And I say that as someone who does most of his work in Java, which is terrible from the bootstrapping point of view.)

                                                                                                            However, I do not object much to depending on previous versions of your own compiler. It is often the way to go: you want to write your compiler in a higher-level language instead of old-school C, and because you created the language and believe in its qualities, you also use it to write the compiler. What I do not understand is why someone (not in this particular case, but I have seen this pattern many times before) presents “self-hosted” as an advantage in itself…

                                                                                                            1. 2

                                                                                                              The self-hosted Zig compiler provides much faster compile times and is easier to hack on, which lets language development move forward. In theory the same gains could be had in a different language, but some of the optimizations it relies on are exactly the kind of thing Zig is good at expressing. See this talk for some examples: https://media.handmade-seattle.com/practical-data-oriented-design/.
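
                                                                                                              To make that concrete, here is a tiny, hypothetical illustration of the struct-of-arrays layout that talk revolves around (plain Python for brevity; this is not code from the Zig compiler): instead of one object per token, the hot fields live in dense parallel arrays.

                                                                                                                  from array import array
                                                                                                                  from dataclasses import dataclass

                                                                                                                  # Array-of-structs: one heap object per token, lots of pointer chasing.
                                                                                                                  @dataclass
                                                                                                                  class Token:
                                                                                                                      tag: int    # small enum-like value
                                                                                                                      start: int  # byte offset into the source

                                                                                                                  tokens_aos = [Token(tag=i % 4, start=3 * i) for i in range(1000)]

                                                                                                                  # Struct-of-arrays: the same data as two dense, homogeneous arrays.
                                                                                                                  # Code that only walks the tags touches far less memory per token.
                                                                                                                  class Tokens:
                                                                                                                      def __init__(self):
                                                                                                                          self.tag = array("B")    # one byte per tag
                                                                                                                          self.start = array("I")  # 4-byte source offsets

                                                                                                                      def append(self, tag, start):
                                                                                                                          self.tag.append(tag)
                                                                                                                          self.start.append(start)

                                                                                                                  tokens_soa = Tokens()
                                                                                                                  for i in range(1000):
                                                                                                                      tokens_soa.append(i % 4, 3 * i)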

                                                                                                            2. 1

                                                                                                              But you could make a C compiler (or a C interpreter) from scratch relatively easily.

                                                                                                      1. 1

                                                                                                        I wonder how you say this, because it looks like the word 坐 or “sit” in Chinese.

                                                                                                        1. 1

                                                                                                          I’m guessing, based on Google Translate, that it is 作. You can listen to Google say it there:

                                                                                                          https://translate.google.com/?sl=zh-CN&tl=en&text=%E4%BD%9C&op=translate

                                                                                                          1. 2

                                                                                                            I think it is the character 做 that means “do, make, produce”, whereas 作 leans more toward “rise, grow”. Both characters are pronounced the same (same tone).

                                                                                                        1. 1

                                                                                                          Job interviews

                                                                                                          1. 1

                                                                                                            good luck!

                                                                                                          1. 10

                                                                                                            How many ex-CTOs do you know that keep coding like that? I find this inspiring.

                                                                                                            1. 10

                                                                                                              It’s even better: he quit his CTO job to go back to being an individual contributor.

                                                                                                              1. 7

                                                                                                                He talks about it on the Stack Overflow podcast. The important part was that he helped create a culture that would let that happen.

                                                                                                                1. 2

                                                                                                                  Yeah, imagine giving up all that power because you like hacking better. I don’t know anybody else who would be ready to let go once they’d had a taste.

                                                                                                                  1. 2

                                                                                                                    A job at its best should be an arrangement where both parties get something they want out of it. For some people (like myself), that may not be status, but some other attribute.

                                                                                                                    1. 1

                                                                                                                      I know several people who made similar moves (from VP or C-level to IC or line management), but at smaller companies.