Transformers. It’s literally a transformative technology, and I’m really curious to understand how it works, and why it performs so much better than almost every preceding method.
What does “transformers” mean here? I know it is not Autobots vs. Decepticons ;-) so what sub-field is that in?
I would guess Transformers in Machine Learning
Just a wild guess, but maybe they’re referring to Monad Transformers?
I feel like @mperham might have some worthwhile input as a developer that is: financially successful, and supports open source. Any ideas?
I agree with the principles of the argument which is why I’ve been spending more time looking at htmx.
I’ve found the git reflog to pretty much always have the info I need to get out of trouble. There seem to be quite a few stars on GitHub, so I’m not trying to put this project down. But legitimately, what does this give someone who can use the reflog to reverse unwanted changes?
What if you didn’t commit those changes and just wrote to the file and then undid it with your editor? This has happened to me before, though I’ve just used Neovim’s tree history to go back to my changes.
I think the target audience (judging by the author’s comment about reflog), is people that can’t or don’t want to use reflog. Seems legitimate enough.
Originally, my reasoning was that it covers cases where you simply didn’t commit often enough.
reflog won’t help at all if there’s no Git object. In hindsight, the prospect of never using reflog again is a sweet one.
Intriguing idea, but the installation is pretty rough at the moment. This is totally understandable as it’s only been out in the wild for three days, but the effort required exceeded my curiosity.
By default it needs both Homebrew and Ports on Mac. I’m not going to install Ports just to get libcrypto++, so I manually installed it in /usr/local, but after adding the install path to the CFLAGS/LFLAGS in the root Makefile it still failed to find libcrypto at a later stage. It turns out the compiler arguments are hard-coded into the source code. I noped out at that point.
Filed it for you: https://github.com/jzimmerman/langcc/issues/3
Have you considered making an issue? I’m not trying to give you work - I can make a stub issue if it’s not objectionable to you.
I considered it, but assumed they would already be aware. If you think it’s worthwhile though, go for it.
Issue is closed with a comment - https://github.com/jzimmerman/langcc/issues/3#issuecomment-1256510884
Zig’s bitfields are a bit better, though. You can take the address of a bitfield. Andrew wrote a blog post about the feature a few years ago: https://andrewkelley.me/post/a-better-way-to-implement-bit-fields.html
That’s an interesting design decision. You can’t pass the address of a bool bitfield to a function expecting a bool pointer, because the bit offset is part of the type.
That’s true. But this allows any variable or struct member to be passed by reference, which is important for writing generic functions that are polymorphic over argument types.
At first I thought this; then I read the article and thought “oh, they mean something like Pascal sets”; and then I read it again and realized that no, they really are just describing an API that uses a struct of bitfields rather than an integer.
Now obviously a struct of bitfields is superior to an untyped uint, but it lacks the flexibility that comes from getting confused about what int means what :D
I think the intended benefit of the API being an int is twofold: it avoids structs being passed in memory even when they fit in registers (with structs it is very easy to end up with memory passing; clang even has an attribute you can apply to a struct to force register passing), and you can very easily do set operations like unions, intersections, etc. Whether that trade-off is worth it depends on the use case, but I think we can agree that it would be nicer if we could have set enums in C. There’s no reason for Zig not to have them, as it isn’t confined to whatever can get through a standards body.
Zig should extend the bitwise operations to work on structs and arrays of ints. That would eliminate the objection that you can’t do unions/intersections on bitfield structs.
If you have this language extension in Zig, then Pascal-style enum sets become a coding idiom rather than a new builtin language feature. You could write a helper function (using comptime Zig) that takes an enum type as an argument, finds the largest enum value in the type using @typeInfo, then returns the appropriate array type to use as a set.
There are many precedents for extended bitwise operators, all the way from APL in the 1960s and Fortran 90 (where scalar operations are extended to work on arrays) to vector instructions in modern CPUs that would directly support this extension.
From the article:
mask |= WGPUColorWriteMask_Blue; // set blue bit
This all works, people have been doing it for years in C, C++, Java, Rust, and more. In Zig, we can do better.
Exactly: they say in C we use integers with various mask bits, and then propose a “superior” solution in Zig, which is to just use bitfields, which have existed in C, C++, [Object] Pascal, … forever.
The point I think @doug-moen is making is that the authors are claiming the masked integer is an inherent requirement of C, etc., and that Zig can do better because of this special Zig-only feature, pretending that the identical feature does not exist in C, and ignoring that maybe there are other reasons that this (terrible :D) style of API exists.
Except bitfields in C are bristling with footguns, most importantly that the mapping to actual bits is implementation-dependent, which makes bitfields incompatible with any C API that defines bits using masks, or any hardware interface.
From the GCC docs:
The order of allocation of bit-fields within a unit (C90 6.5.2.1, C99 and C11 6.7.2.1).
- Determined by ABI.
The alignment of non-bit-field members of structures (C90 6.5.2.1, C99 and C11 6.7.2.1).
- Determined by ABI.
Having an actually usable implementation of bitfields is awesome for Zig.
Is Zig’s packing of bitfields independent of the ABI?
And why does it matter unless the bitfield leaves your program’s address space? (Just about anything other than byte arrays leaving your address space is ABI dependent. Why single out bitfields?)
It matters because if you’re interfacing with nearly any existing C API, which specifies bits using integers with masks, you cannot declare a C bit-field compatible with that API without knowing exactly how your compiler’s ABI allocates bits.
You know exactly how it’s laid out, because that’s defined by the platform ABI. There is no mystery here.
Not a mystery, but highly platform-specific. So if you’re interfacing with anything using explicit bit masks, that makes it a really bad idea to use in cross-platform code, or in code you want to survive a change in CPU architecture. As a longtime Apple developer I’ve been through four big architecture changes in my career — 68000 to PowerPC to PPC 64-bit to x86 32/64 to ARM 32/64 — each with a different ABI. And over time I’ve had to write code compatible with Windows, Linux and ESP32.
Yes, the platform defines the ABI for that platform and the compiler is expected to follow that ABI. So bit fields in C/C++ can (and are) used in plenty of APIs.
I get that you may like Zig and prefer Zig’s approach to specifying things, but that doesn’t mean you get to spout incorrect information. The platform ABI defines how structs are laid out. It defines the alignment and size of types. It defines the order of function arguments and where those arguments are placed. So claiming bitfields are footguns, or that they are unusable because they follow the platform ABI, is like saying that function calls are footguns because the registers and stack slots used in a call are defined by the platform ABI.
If Zig is choosing to ignore the platform ABI, then Zig is the thing that is at fault if an API doesn’t work. If there is an API outside of Zig that uses a bit field then Zig will need some way to say that bits should be laid out in the manner specified by the platform ABI.
I’m not actually a Zig fan (too low level for me) but I like its treatment of bitfields.
You seem to be missing my point. A typical API (or hardware interface) will define flags as, say, a 32-bit int, where a particular flag X is stored in the LSB. You can’t represent that using C bit-fields in a compiler- or platform-independent way, because you don’t know where the compiler will put a particular field. Each possible implementation will require a different bit-field declaration. In practice this is more trouble than it’s worth, which is why APIs like POSIX or OpenGL avoid bit-fields and use integers with mask constants instead.
Yes, if you declare your own bit-field based APIs they will work. They’re just not interoperable with code or hardware that uses a fixed definition of where the bits go.
You can’t represent that using C bit-fields in a compiler- or platform-independent way, because you don’t know where the compiler will put a particular field.
No, that is all well defined as part of the platform specified ABI. You know exactly where they go, and exactly what the packing is. Otherwise the whole concept of having an API is broken. POSIX and GL use words and bit fields because many concepts benefit from masking, or to make interop with different languages easier - the API definition is literally just “here’s a N bit value”.
On the technical side: I’m working through Effective C.
I remain scared of C programming and I’d like to not be: so far I’m finding this a really solid way to learn C fundamentals without spending three chapters on “hello world”.
I’m also reading Le otto montagne, albeit very slowly as I’m learning Italian and I have to work with a dictionary. Despite the slow going I’m enjoying the description of a childhood spent in the rural, mountainous areas of Italy.
When I can’t manage those, I’m re-reading Gideon the Ninth because the third book in the series is out soon. Space Necromancers! Need I say more?
I have the same feelings about C; please update as you make progress!
And I agree, Gideon the Ninth … is wild! - in a completely nutso originally good way.
I got it for my birthday back in June. Just finished the second one, looking forward to the third as well!
Hey! Recently this was shared and it explains quite well many things that helped with SOC2 certification, but also many things that just make sense for security: https://fly.io/blog/soc2-the-screenshots-will-continue-until-security-improves/
I’m myself in a similar situation to you but much more advanced (it’s been 3 years now) and this article is definitely a gold mine, it has many links to other articles that are also awesome.
The things that I think make sense are the basics:
I think stuff like YubiKeys is very interesting, but in the longer run. The thing that helped me a lot is reading the CERT reports from big companies. They basically detail the main threats they see, and most of the time those are phishing, no firewalling, no regular patching of systems, etc. When you cover all the big threats listed, it makes sense to take a risk-based approach to understand what your biggest risk is and how to cover it.
Happy to talk over PM if you want :)
Another great article (I’m noticing a trend towards some of the things I’m worried about and things SOC 2 covers - not that I want SOC 2, but general business practices to keep in mind).
Your bullet points are great! I’ll definitely be making a to-do list from them.
network access (firewalling, bastion-like aka Teleport, VPN-like aka Tailscale)
I’ll throw Twingate out here as worth a look for people on small teams or who want something simple to set up that just works. I’m on a small engineering team using AWS, and Twingate has been magical for me. I was staring down trying to set up the AWS VPN thing, but with Twingate I was up and running in under an hour, and it has been very low maintenance and very simple to use. We’re on a free tier (up to 5 users; we have only 3 engineers).
The way I’m using it is we have instances in a VPC in AWS (databases, web server etc…) and I just deployed a dirt-cheap EC2 instance (a t2.micro at $8/m IIRC) running the Twingate AMI into the VPC. It connected itself to Twingate’s backend, and from there all I have to do is go into the Twingate UI and create entries for the DNS records or IP addresses I want to expose from inside the VPC. Engineers install Twingate on their laptop, authorise via Google SSO (no SSO tax!) and that’s it. I can securely connect to my DB for admin without exposing it to the public internet (only egress is required). This probably isn’t amazing for anyone who knows what they’re doing, but I love how simple it was for me to set up.
I’ve no affiliation with them, I swear :)
Keep it as simple as you possibly can. Everything you add is a potential way in. That of course includes any cloud/third-party software/service, and also the VPN. Don’t accidentally open ways in just to save yourself three seconds a day. You already use a password manager and consider hardware tokens with that same trade-off in mind, so don’t do the opposite in other scenarios.
Write docs and keep them up to date! Yes, you are small, so you don’t need big documentation, but have some kind of document, be it a Google Doc or simply a repo with markdown or plain text, where you record both what you do and why. This is especially important for “one time” things or things that you rarely do. It doesn’t have to be fancy documentation - copy and paste, notes, etc. are fine.
A typical scenario of messing up early is changing things and forgetting about something that allows for a way in.
Make it a habit to call for important stuff, rather than email to reduce the likelihood of attacks on that front.
For Azure: why exactly do you use that as a small company? It feels like either you want to go for a more complete solution, Heroku-style (depending on what you need), or a simpler solution (a vserver somewhere). Don’t just use some cloud service if you don’t have a full-time person with dedicated knowledge in the field, or it will eventually cause issues as things progress. Also, since you say “We are moving…”, what’s the rationale?
Use some form of Single Sign-On, and preferably only one way to authenticate. You are likely to need more than one, so a password manager makes sense, but don’t let that be the reason you make all sorts of logins.
Make sure you have proper off-boarding in place as early as possible. That largely involves keeping a list of who can do what where. Make sure you keep it up to date, so you can properly restrict access. Not because you fear wrong-doing, but because it’s yet another way in for an attacker.
And to re-iterate: Make sure you keep everything as simple as possible. Don’t increase attack surface out of a whim. Try out new things on a separate server/network/cloud project/… Do not be tempted to just try it with anything production, that includes desktops/laptops.
Have a threat model! You are small, so it is simple. It’s also psychologically important. There are so many young/small companies that spend months on some mostly theoretical security issue - one that assumes they have already been breached and that some password needed to decrypt some information is magically not accessible to the attacker - while their front gate is completely open. It makes you feel good, because you are working on security, and yes, you should eventually get there, but get your priorities straight by analyzing what you have and where attackers actually get in.
Don’t assume the likeliest way in is someone finding a 0-day in OpenSSH or exploiting the OS’s network stack using ICMP to ruin a small company. But also don’t assume that “nobody will know the IP of this system”. Don’t work by hiding things, don’t practice security by obscurity.
So in short: Keep your attack surface small, have people know what they are doing. Practice security, rather than security theater.
I like the thoughts towards documentation. Now we just have to work more on developing the habit.
As for Azure: our software is a .Net/SQL Server stack and we were going to use O365 anyway. We’re migrating away from a datacenter to Azure for a couple of reasons: our infrastructure was handled by a different org when my company was part of a larger company, and we can eventually trim down our infra needs to be better suited for a smaller crowd.
Simplicity was one of my motivations for lots of this job, but it’s nice to hear it reinforced in a security context.
This article is framed toward “one day we’d like to pass a SOC2 audit”, but is also a good intro to some practices you should be doing, if you aren’t already. For 2FA I personally wouldn’t worry too much about forcing people to use hardware keys until you’ve dealt with other more low-hanging fruit – a TOTP authenticator app on the user’s phone is fine for most use cases and threat models.
This article is great.
We’re going to use 2FA, but I think one of the reasons I was thinking about a hardware key was to have a backup that didn’t depend entirely on 2 personal phones (if I’m on vacation and the other person breaks their phone…. then what?).
The repo owner, John MacFarlane, is the primary developer of Pandoc. Having to wrestle with Markdown (and multiple flavors, no less) led him to develop CommonMark (which makes some small tweaks to Markdown, but with a spec!).
Which is all to say this is probably pretty well thought out, but take a look.
The folks behind rr then developed Pernosco. I always thought it was neat, but I don’t program in languages where I could take advantage of it.
Host here. Before interviewing Ron I would never have thought they would just leave a REPL running on a spacecraft!
Looking forward to this episode!
The space + homoiconic language combination makes me want an episode with Chuck Moore. That could be an interview for the ages!
Edit: actually I think the history of Forth goes back further than the Wikipedia article states.
In 1965, he moved to New York City to become a free-lance programmer. Working in Fortran, Algol, Jovial, PL/I and various assemblers, he continued to use his interpreter as much as possible, literally carrying around his card deck and recoding it as necessary.
Weird to think of carrying around your programming language in a briefcase!
I have been wondering about how to do a forth episode. Is Chuck Moore still active in any capacity?
Maybe? He has talked with some forth groups recently (https://m.youtube.com/watch?v=dI0soDMg28Q&t=357s).
It looks like there are a few interviews from the past couple of years, but he is getting up there in years. I think part of the reason he wrote colorForth to use text color semantically was that his eyes weren’t in great shape, and that was the late ’90s/early ’00s.
Great moment from an interview from a couple years ago:
A lot of the comments upstream are conflating “preventing two similar identifiers with different names” and “resolving two similar identifiers to the same keyword/variable/type,” which is unfortunate. Most of the pro-insensitivity comments boil down to “it keeps me from making mistakes by having two variables/properties/whatever with similar but not identical names and getting confused” (is that really such a problem?) but that benefit can be easily kept without also forcibly coalescing all similar identifiers.
that benefit can be easily kept without also forcibly coalescing all similar identifiers
And? In what situation could you possibly want fooBar and foo_bar to be different things?
Not arguing for either side here, but your question made me think of words like
To expand on the point @mqudsi seems to be making and @jibsen’s interesting examples, I think there are just many kinds of identifier. How much identifiers like foo_bar and fOO_BAR “look the same” is subjective. It depends upon fonts (how big ‘_’ looks). How much to leverage looking different to denote kind is also subjective. One does not always have an IDE on hand when interpreting code. How often that happens is also subjective. Even with an IDE, redundant visual cues can often lessen confusion - how much so is also subjective. What kinds confuse the most is also subjective.
Type is one important aspect. FORTRAN’s old “starts with [I-N] => integer type” rule is the original Hungarian notation alluded to earlier. There are many aspects, though, and Nim has maybe more than most (at least packages, modules, const, let, var, func, proc, iterator, generics, template, macros). Is a global search/replace change of such a redundant cue a pain? It can be. Is the trade-off worth it? Also hard to know.
Nim itself recommends leading capitals for type names. LOUD_CASE for constants is all over the stdlib. Caps used to be like radio operator alphabets. Low-res devices did not select LOUD_CASE randomly. It was viewed by many as CLEAR_CASE. (I’d agree it might be “clarity wasted on constant-hood” whose origins may be tied to the hackiness of the C preprocessor.) Probably before it had good IDE tools, Nim used to put a ‘T’ prefix in front of type names and a ‘P’ in front of pointers.
In the Ada case, it was viewed as not possible to render lowercase on some US DoD devices. So, HW limits took away case sensitivity as a choice. Many I know think of case insensitivity, even that of CP/M|DOS heir Windows file systems, as a throwback to ancient technology. There is usually a just-so story to support the choice in the modern age when the “real” reason may simply be backward compatibility to the 1970s.
Various studies try to assess all this subjective stuff in the population. Besides being very sensitive to subpopulations studied, I think most confuse “interactive” with “more static reading” contexts (in both file names and in programming). Most text is read much more than it is written which is at least partly why TAB-completion is popular.
In short, I don’t think the “it all depends”/subjectivity should be very contentious, yet it often is in Nim conversations. This is partly what leads me to think of the Nim community as a biased pool of evaluators and why I tried to include a broader community like Lobste.rs. Implementing an “import time” feature with something that spills over to identifiers like wh_Ile also seems bad.
There have been times I’ve been in a code base and seen that 1 naming convention was used for 1 set of objects (records from the database or unprocessed data) and a second convention was for another set of data that was conceptually related.
It maybe wasn’t the best code, but it at least worked and made sense.
I wasn’t clear/you misunderstood. I was saying you could pick one and be forbidden from using the other without allowing both and coalescing them. That would prohibit confusing names but force stylistic continuity, enable codebase grepping, etc.
You can forbid underscores in identifiers, which disables snake_case (I actually suggested that in the linked issue and got a lot of downthumbs). However, it won’t solve the situation of identifiers like itemId vs. itemID, and I’m pretty sure distinguishing these is an AI-complete problem.
The linked thread highlights some cases with difficulties wrapping C libraries which have unavoidable collisions when following upstream naming.
By forever maintaining two implementations of the compiler - one in C, one in Zig. This way you will always be able to bootstrap from source in three steps:
1. Use the system C compiler to build the C implementation from source. We call this stage1.
2. Use stage1 to build the Zig implementation from source. We call this stage2.
3. Use stage2 to rebuild itself from the same source. We call this stage3.
I’m curious, is there some reason you don’t instead write a backend for the Zig implementation of the compiler to output C code? That seems like it would be easier than maintaining an entirely separate compiler. What am I missing?
The above post says they wanted two separate compilers, one written in C and one in Zig. I’m wondering why they don’t just have one compiler, written in Zig, that can also output C code as a target. Have it compile itself to C, zip up the C code, and now you have a bootstrap compiler that can build on any system with a C compiler.
In the above linked Zig Roadmap video, Andrew explains that their current plan is halfway between what you are saying and what was said above. They plan to have the Zig compiler output ‘ugly’ C, then they will manually clean up those C files and version control them, and as they add new features to the Zig source, they will port those features to the C codebase.
I just watched this talk and learned a bit more. It does seem like the plan is to use the C backend to compile the Zig compiler to C. What interests me though is there will be a manual cleanup process and then two separate codebases will be maintained. I’m curious why an auto-generated C compiler wouldn’t be good enough for bootstrapping without manual cleanup.
Generated source code usually isn’t considered to be acceptable from an auditing/chain of trust point of view. Don’t expect the C code generated by the Zig compiler’s C backend to be normal readable C, expect something closer to minified js in style but without the minification aspect. Downloading a tarball of such generated C source should be considered equivalent to downloading an opaque binary to start the bootstrapping process.
Being able to trust a compiler toolchain is extremely important from a security perspective, and the Zig project believes that this extra work is worth it.
It would work fine, but it wouldn’t be legitimate as a bootstrappable build because the build would rely on a big auto-generated artifact. An auto-generated artifact isn’t source code. The question is: what do you need to build Zig, other than source code?
The issue is not to be completely free of all bootstrap seeds. The issue is to avoid making new ones. C is the most widely accepted and practical bootstrap target. What do you think is a better alternative?
C isn’t necessarily a bad choice today, but I think it needs to be explicitly acknowledged in this kind of discussion. C isn’t better at being bootstrapped than Zig, many just happen to have chosen it in their seed.
A C compiler written in Zig or Rust to allow bootstrapping old code without encouraging new C code to be written could be a great project, for example.
This is in fact being worked on: https://github.com/Vexu/arocc
Or do like Golang. For bootstrap you need to:
I certainly hope that’s true, but in reality wasm has existed for 5 years and C has existed for 50.
The issue is building from maintained source code with a widely accepted bootstrapping base, like a C compiler.
The Zig plan is to compile the compiler to C using its own C backend, once, and then refactor that output into something to maintain as source code. This compiler would only need to have the C backend.
I mean, if it is, then it should have the time to grow some much needed features.
It’s okay if you don’t know because it’s not your language, but is this how Go works? I know there’s some kind of C bootstrap involved.
The Go compiler used to be written in C. Around 1.4 they switched to a Go compiler written in Go. If you were setting up an entirely new platform (and not using cross-compiling), I believe the recommended steps are still: get a C compiler working, build Go 1.4, then update from 1.4 to latest.
Bootstrapping a C compiler is usually much easier than bootstrapping a chain of some-other-language compilers.
Only if you accept a c compiler in your bootstrap seed and don’t accept a some-other-language compiler in your seed.
Theoretically. But from a practical point of view? Yes, there are systems like Redox (Rust), but in most cases the C compiler is an inevitable piece of the puzzle (the bootstrapping chain) when building an operating system. And in such cases, I would (when focused on simplicity) rather prefer a language that depends just on C (that I already have) instead of a sequence of previous versions of its own compilers. (and I say that as someone, who does most of his work in Java – which is terrible from the bootstrapping point-of-view)
However, I do not object much to depending on previous versions of your own compiler. It is often the way to go: you want to write your compiler in a higher-level language instead of some old-school C, and because you created the language and believe in its qualities, you use it for writing the compiler too. What I do not understand is why someone (not in this particular case; I have seen this pattern many times before) presents “self-hosted” as an advantage…
The self-hosted Zig compiler provides much faster compile times and is easier to hack, allowing language development to move forward. In theory the gains could be done in a different language, but some of the kind of optimizations used are exactly the kind of thing Zig is good at. See this talk for some examples: https://media.handmade-seattle.com/practical-data-oriented-design/.
I’m guessing from google translate, that it is 作. You can listen to google say it there:
I think it is this character 做 that means “do, make, produce” where 作 is more on “rise, grow”. Both characters are pronounced the same (same tone).
It’s even better: he quit his CTO job to go back to being an individual contributor.
He talks about it on the Stack Overflow podcast. The important part was that he helped create a culture that would let that happen.
Yeah, imagine giving up all that power because you like hacking better. I don’t know anybody else who is ready to let go once they had a taste.
A job at its best should be an arrangement where both parties get something they want out of it. For some people (like myself), that may not be status, but some other attribute.
I know several people who made similar moves (from VP or C-level to IC or line management), but at smaller companies.
If that actually does bother anyone, can’t they just use a different email client? Wouldn’t it be better to keep Thunderbird working with its existing UI, for those of us that have gotten used to it over the past 20 years?
I think Thunderbird has lost its priorities.
No one actually cares how modern an email client looks. No one is wearing their email client on their head as a fashion statement. No one is reading their email in a closet because they can’t stand the idea that someone would peek over their shoulder and see them using UI from 2004.
Thunderbird needs some reorganization and usability updates but modernizing the design is not one of them.
Having worked at plenty of software companies, I find rewrites are often just large feature requests cobbled together under the guise of rewriting the base of the application, as the only way to achieve the cluster of new features. Is the old software really that bad, or has it grown in a disorganized way, and/or do you just not understand it?
I dunno, I won’t use a mail app that looks weird and old. I’d consider using Thunderbird if it looked good and worked better than Mail.app.
real talk: Thunderbird looks better than it did a couple years back! I booted into it for the first time in 4 or so years and was like “oh this is pretty good”
Granted I’m in KDE and before I was using it in Mac. But I feel like it’s pretty good for “email power usage”.
There’s a legit complaint about how the panes are laid out by default, but I think that you can get pretty far by just moving around existing components.
UI and UI conventions for email have been pretty continuously evolving since we first got GUI mail clients. And that’s without considering that UI conventions in general evolve over time; an application that doesn’t match the UI conventions of the surrounding system is likely to see itself replaced by one that does.
Which is why keeping the same UI for decades on end is not something that usually works in mass-market software.
I can confidently say I’ve never stopped using a useful piece of software because they hadn’t updated their UI to keep up with whatever fashion trend is currently hip. On the other hand, I have (repeatedly) stopped using software after a UI overhaul screwed up my existing comfort with the software, opening the door for me to look around and see what other software, while equally alien, might have better functionality.
Messing with the look and feel of your application is an exercise in wagering your existing users in pursuit of new ones. In commercial projects, that can often make sense. In FLOSS, it rarely does, as your existing users, particularly those that will be the most annoyed by changes to their workflows, are also the most likely to contribute to the project, either through thorough bug reports or pull requests.
I think it is important to consider the possibility that you are not representative of the mass-market software user. Or, more bluntly: if you were representative of the mass-market software user, the market would behave very differently than what we observe.
I dunno, I don’t think Thunderbird users in general are representative of the mass-market software user. Most of those are just using Gmail or Outlook.com. Desktop MUA users are few and far between these days and use them specifically because they have specialized mail management requirements.
I don’t see where APhilisophicalCat claims to be “representative of the mass-market software user”. I rather would interpret their “In commercial projects, that can often make sense. In FLOSS, it rarely does” as disagreeing with you on whether Thunderbird is “mass-market software”. (I don’t use Thunderbird and I claim no stake in this.)
I think the people building Thunderbird think of it as, or at least aspire to it being, a mass-market application.
(standard disclaimer: I am a former employee of the Mozilla Corporation; I have not worked there since the mid-2010s; when I did work there the stuff I worked on wasn’t related to Thunderbird; I am expressing my own personal opinion based on reading the public communications of the Thunderbird team rather than any sort of lingering insider knowledge or perspective)
I don’t think things evolved, I think we just have a bunch of ways to stylize a list and where buttons to other lists might go. There are trends but at the end of the day you’re reading a list of things.
The point I was trying to make is that sometimes rewrites are just shifting complexity and that you can satisfy both crowds by working on an old tech stack. Not that there isn’t a market for whatever-UI-trend email apps.
I remember when Gmail first came out, and introduced a broad audience to the switch from email clients that present things in terms of individual messages, to clients that present things in terms of conversations.
From an underlying technical perspective this may feel like nothing – just two ways of styling lists, why does it matter? – but from a user interface and experience perspective it was a gigantic shift. It’s rare these days to see an email client or interface that still clings to the pre-Gmail message-based UI conventions.
The same is true for other UI conventions like labeling/tagging, “snooze” functions, etc. etc. Sure, in some sense your inbox is still a list or at most a list of trees, but there are many different ways to present such a thing, and which way you choose to present it does in fact matter to end users.
Exactly, and there isn’t just one crowd; you should aim to appease both. Even if Gmail started a new trend.
I think a lot of people don’t like using “ugly” software. Definitely matters more to nontechies than it does to techies, I think.
Even techies will look elsewhere if the app gives off the vibe of something from the Windows 2000 era, and thus probably has other problems too (say, scaling).
But current Thunderbird on the desktop looks fine?!
There’s a certain group (small?) that likes these old interfaces though. Enough that things like SerenityOS are quite popular. Or maybe it’s just me.
I can appreciate a very technical but functional UI where you can find everything you need, even if it doesn’t look that fresh. And then there is also the “jankiness” factor, like Handbrake, which looks very janky but exposes all ffmpeg configuration flags as a UI. In my experience there is a big divide between applications that just look old (even janky at first) but provide a lot of functionality, and apps that were simply thrown together in a short time and never updated to keep up with modern requirements. One example of the latter is f-droid applications, where a very old-looking app can be a good indicator that it has never been updated to modern Android permission requirements. Or a desktop application that just doesn’t support scaling and is either tiny on high-DPI or blurry — god forbid you move it between displays with different DPIs.
Yup, that’s why Sylpheed/Claws were considered examples of a great UI for desktop email: https://www.claws-mail.org/screenshots.php
Techies are just as, if not more, aesthetically conscious when it comes to software than non-techies. They just have different aesthetic preferences.
I agree with the above. I’d much prefer if this announcement were about fundamental things related to the internals of Thunderbird, not the chrome.
I say this as a long-time Thunderbird user who loves the app and hopes to continue using it (my Thunderbird profile is 11 years old). Don’t fix what’s not broken, but do fix what is.
Name one other usable, open-source mail client that runs on Windows.
“Looking old” covers a lot of things. There are a bunch of things in Thunderbird that are annoying and probably hard to fix without some significant redesign. Off the top of my head:
It’s also not clear to me how well the HTML email renderer view is sandboxed. Apple’s Mail.app opens emails in separate sandboxed renderer processes; I’m not sure to what extent Thunderbird can take advantage of the analogous functionality from Firefox, because it’s designed to isolate tabs and all of Thunderbird’s data lives in one tab.
Sylpheed? But now we’re talking about super-esoteric software. I can’t imagine the user that uses sylpheed and thinks “Thunderbird isn’t usable enough”.
Familiarity is a usability feature. And I’m sure most UX people are aware of this on some level, but it’s rare to see it articulated.