Long-time Kinesis Advantage user here (the first version - still working). It’s a bulky keyboard, but the tenting and palm rests (responsible for most of the bulkiness) are not small details.
I still think the ergodox is a better design than a traditional one, but you’ll need adequate palm rests; tenting is useless without them.
Lack of labels, UK keyboard layouts, and “context shifting” are non-issues IMHO. You get used to them, and quite quickly. The real problem is that when you have a clear preference (say, the “dactyl”), you’d love to have the same keyboard everywhere. I can feel the discomfort when using a traditional staggered layout.
The “not enough keys” is something I can feel. Layering is not a complete substitute, and it’s not as efficient. A good thing about the advantage compared to the dactyl is the presence of the Fn and number row. Even if you don’t use those, they can be used as layer switches without compromising other functions. It also makes switching to regular layouts easier.
Split keyboards take more space than a regular keyboard regardless. IMHO, it’s worthless to “save on a row of keys”. Have all the keys. Switching layers to type a single character in an alternate map is more expensive than moving a little bit further without having to press any modifier. This is my biggest complaint against the otherwise fantastic manuform layout.
I’ve loved watching the explosion in custom ergo keyboards over the years - especially ones like Ergodox that evolved from a hacky DIY to be a real product you can buy pre-assembled with excellent software. In the home grown vein I also think the Dactyl/Manuform designs are really cool, looks like someone will build one for you here: https://ohkeycaps.com/products/built-to-order-dactyl-manuform-keyboard
Still, nothing I’ve tried has managed to unseat my Kinesis Advantage (cherry blues, 2014) - which is ugly, has some flakey issues (fixed by the Advantage2), but is nearly perfect for typing speed and comfort for me. I recently got an Atreus as a travel-ergo board, but i enjoy the advantage enough that I’d rather spend the half suitcase lugging it around when I travel for work rather than use the much more portable Atreus.
I am very excited for the forthcoming Kinesis Advantage 360, which will combine a split design, customizable tenting, and the cupped/contoured layout from the Advantage/Advantage2, and hopefully will be more portable.
Oooooooh thanks for this. Will have to keep an eye out. I use a Moonlander now, the Kinesis Freestyle Pro with the tenting kit might get me over some of my minor gripes, but the Advantage 360 might actually make it worth switching when it lands.
Also an advantage user here (first version as well). I just checked out the “upcoming” 360. Not too psyched from the few renders available.
The Fn keys on the 1st edition are ridiculously bad (with ESC being my biggest complaint - something allegedly improved in the v2), but they are completely gone here. Not a fan.
As a user of other custom split keyboards (ergodox, manuform, etc.), I don’t see the point in making it “smaller”. It never will be. Split keyboards take a ton of space. Tented keyboards (or contoured, in this case) require a pretty hefty palm rest, which is easily as large as the keyboard section. If you use a fully split keyboard “naturally”, widening the two sections will make using the mouse even more awkward.
I added a spacer in the bottom part of my advantage to increase the tilt backwards. Ironically, such a mod works effectively with the old case, as it rocks about the middle of the keyboard, so I’m not too displeased. I can’t say from the renders if the 360 allows tilting out of the box, but the flat base might work against this.
I like the idea of the holder “bar” in the middle. Slipping halves is a problem I have on other keyboards, and holding the two sections together does improve this… but then again, I always felt the Advantage’s separation was “good enough”, so…
Historically the biggest problem with the Advantage is it was only offered with some pretty weak selection of switches; do the newer models fix that problem too?
Their current boards only come in Cherry MX Brown, Red, or quiet Red.
I kind of doubt they’ll ever offer enthusiast-level switches or customization - very unlikely they’ll do hot swap, for example. They’ve already said the new board might be ditching function keys to bring the cost down. I’m okay with that - since it also decreases size - but a lot of longtime fans hoping to see even higher end options seem dissatisfied on Twitter.
After the MS 4K died this year due to a coffee accident, I got a Kinesis Edge RGB - it’s been very nice. The Advantage 360 is on my list of “things that might be worth it”.
really have no interest in hacking my own keyboard, soldering, etc. Just gimme the thing, I have coding to do!
really have no interest in hacking my own keyboard, soldering, etc. Just gimme the thing, I have coding to do!
Well, if you build a custom keyboard you also get to code it! It’s a great excuse to delay any real work :D
Jokes aside, fiddling around with your own keyboard firmware is a lot of fun and can be quite rewarding. Aside from building a very feature-rich layout with macros (both pre-recorded and dynamic), dual-purpose keys, multiple layers, etc., it’s also nice to know that your keyboard can grow with your needs.
I’m typing this on a very nice but standard keyboard (iKBC MF87), and I miss my custom firmware (I prefer QMK) every time I use it. I just can’t decide to add a media play/pause button or move my Control key to Caps Lock whenever I like. Sure, I can do that in software on the OS side, but then I’d have to replicate it on every OS I use, which just… sucks.
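For a flavour of what that looks like, here’s a QMK keymap fragment (a sketch only, not a buildable file: `LAYOUT` and the key positions depend on your specific board, and `_BASE`/`_FN` are just conventional layer names):

```c
#include QMK_KEYBOARD_H

enum layers { _BASE, _FN };

/* A dual-purpose thumb key: types Space on tap, activates the
   _FN layer while held. LT() is a standard QMK feature. */
#define SPC_FN LT(_FN, KC_SPC)

const uint16_t PROGMEM keymaps[][MATRIX_ROWS][MATRIX_COLS] = {
    [_BASE] = LAYOUT(
        /* ...KC_LCTL where Caps Lock would be, SPC_FN on the
           space thumb key, the rest of the keys as usual... */
    ),
    [_FN] = LAYOUT(
        /* ...KC_MPLY (media play/pause) somewhere handy,
           _______ (transparent) everywhere else... */
    ),
};
```

The point is that this mapping lives in the keyboard itself, so it follows you across every OS without replication.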
it’s also nice to know that your keyboard can grow with your needs.
I’ve been programming for close to 25 years now (yikes!). Keyboards need to be ergonomic, reliable, usable, and replaceable. For customization, I use emacs. Replicating it OS-side is copying down the .emacs and spending thirty minutes on initial load of the software.
I don’t want to play embedded dev; I don’t want to solder anything.
I’d be curious about a chorded keyboard, but I have zero desire to do anything but treat it as a black box with a warranty; extensions should go through software.
John Kemeny’s 1972 book “Man and the Computer” touched on this a little bit. It’s a brief history of computers up to 1972, and then some predictions for the future. Most of the predictions were pretty good, like a world wide network and online encyclopedias.
But… he also predicted that eventually everybody would know a bit of programming, which they’d use for automating simple tasks, balancing budgets, keeping inventory at home, etc.
I was born in the ’80s, and some form of CS was part of public education here starting from the age of ~10 (where a math teacher would usually come in and “teach” Logo and/or Pascal, reading from a book…). It was… not great, but it was indeed recognized. CS was taught in all schools up to university - where I obviously became biased.
I meet fellows from a multitude of disciplines (biologists, neurologists, statisticians…); they all had CS as part of their curriculum. Nothing terribly in-depth in some cases, but I would absolutely consider it adequate for what you wrote above. So we’re not doing anything terribly wrong on the education front. And, just like math, what you were taught is not indicative of what you actually retain.
IMHO the real issue is that in the ’80s/’90s any home computer or calculator I got came with a programming manual and full freedom to tinker. Today I’m paying a premium on expensive phones I don’t want just to get an unlockable FW.
I find the idea intriguing, but I wonder if the cost (disabling ad blocking at the DNS level for people like me, having your browser register a lot of clicks, having your browser tracked a lot more) is worth the benefit (adding chaos to ad databases).
Maybe it’s more a plugin that should be installed on the computers of those that don’t want adblocking because they may miss stuff.
I also find it intriguing, but it’s not something I’d do, due to the traffic/latency benefits of running with an ad blocker.
I wonder however if being identified as a bot due to this behavior could actually be beneficial. On one hand, this would probably lead to your clicks being ignored (defeating the purpose), but on the other hand you might enter a completely separate tracking workflow and/or be ignored after being marked as “fraudulent activity”.
An interesting dynamic I’ve observed as an embedded software engineer is that all developers are expected to understand the parts of the stack above them. So, the hardware designer is expected to also know firmware and software development. The firmware developer is expected to know software development. But don’t ever expect a software developer to know what a translation lookaside buffer is or how to reflash the board. In addition to that, if the bottom of the stack (e.g. the hardware) is broken, it’s impossible to do solid work on top of it. This is why talent and skill migrates towards the bottom of the stack, because that needs to be done right for everything else to even have a chance of working.
In my own experience, firmware/hardware/EE development is very fluid. FW development frequently bleeds into EE/HW development, and the same is true for the other two. It’s almost a necessary thing: the complete machine is one, and it wouldn’t work without the sum of these.
But it’s true there’s generally a tipping point above the OS level where knowing the lower-level details has almost zero impact on the stuff you’re writing, especially with modern CPU speeds. Is it bad? IMHO good engineers are always aware of the stack above and below them.
If I’m writing JS, my lower-level stack would be equally vast and complex: browser, protocols and latencies involved, network infrastructure, caching, dns… I’m not convinced embedded dev is “harder” by itself.
The difference is that maybe I’ve seen many more clueless JS devs than FW devs, but the latter exist too.
I’m not convinced embedded dev is “harder” by itself.
Agreed. I’d say that it’s mostly a case of embedded being different. And since a lot more people work with and write about higher level code, it’s not surprising that there’s more and better help, documentation and tools available there, which makes it easier to learn that stuff even if the subject matter isn’t inherently easier.
Very nice rebuttals; however, to me the main reason why emacs[*] is worth it is that text-centric interfaces are, in many ways, superior to the alternatives we have so far.
Think about it: every piece of the UI is manipulable with the same, uniform interface. Getting better at one task within this system makes that knowledge instantly transferable to any other task. This is why in emacs there’s a tendency to include everything into this system: by doing so, it becomes instantly manipulable and uniform. It’s not about the language, or the editor itself; it’s a combination of factors. By making the UI out of the same medium used for data entry (text), we allow interactions which do not need to be foreseen by the developer. It’s homoiconicity at the user-facing level.
The reason I want to use my favorite editor instead of the editor already in the IDE is that I want to use all my uniform manipulation knowledge, always, everywhere.
Likewise, this is why in many cases the CLI feels more powerful than a GUI, not just at a superficial level. The bare-bone I/O system, even when primitive (such as *nix style plumbing) is much more amenable to manipulation and transformations by the operator.
I haven’t seen anything that allows the same degree of flexibility in any other GUI system (that doesn’t just degrade into “you have scripting”) so far. Can we do better? I honestly cannot imagine we can as long as our input system is text. I feel like we need a breakthrough in the physical layer to unlock something new here. Maybe VR can do that.
[*] I’m a 20+ year emacs and vi user, btw. I juggle between both. Everything I said applies to both.
They aren’t wrong, running your own email server on the internet is up there with rolling your own crypto in the list of technical things you should never do.
I’ve been running my own email server ever since I was on dialup (yes!), more than 20 years now. I do not agree. SMTP is actually a resilient protocol which doesn’t require 100% uptime, just sensible configuration, which is not that hard to check and very low maintenance once set up. There are many excellent SMTP servers nowadays which do this almost automatically.
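For what it’s worth, the “sensible configuration” largely boils down to publishing a few DNS records so receivers can authenticate your mail. A sketch (example.org, the selector name, and the policy values are placeholders to adapt):

```
example.org.                  TXT "v=spf1 mx -all"
_dmarc.example.org.           TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
mail._domainkey.example.org.  TXT "v=DKIM1; k=rsa; p=<your DKIM public key>"
```

Plus a matching forward/reverse DNS (PTR) entry for the server’s IP and HELO name. Most of this is set-and-forget, which is what keeps the maintenance low.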
But I’ve seen this statement repeated over and over and over.
The main issue with SMTP for me has been the complete arrogance of the biggest players in the field, like Gmail, Hotmail, etc. The number of independent SMTP servers keeps shrinking, making the big players even more powerful every year. Yet they are currently the #1 source of spam for my host (open proxies and relays are incredibly rare for me nowadays). Spam from their networks gets a green pass and checks all the boxes; however, they provide only black-hole-style spam reporting on their end while blocking almost everyone else with automated checks and zero human intervention.
I hate to see that email is being centralized further and further. It’s actually decreasing the quality of the service for everybody else.
@ $WORK, we max out at about 5k emails a day. Our daily average is about 1,500 emails a day, giving us about 100k emails a month. We aren’t huge by any means, but we would spend ~$100/month with any professional email-sending service (SendGrid, Mailgun, etc.). We instead do our own email. We send things like purchase orders, payroll info, timesheets and other back-office/accounting stuff via email; we are not spam by any means. We have a special domain just for this stuff, outside of the organizational domain, we own address space directly from ARIN, etc.
The big players, when they get annoyed at you, usually at least send along a URL saying “oh hey, we are mad at you, go here for details”. It’s usually a few clicks and a little babysitting and life is good again, until they get annoyed again a month or six down the line.
It’s all the small players that are annoying: you get nothing but an SMTP 550, their postmaster@ isn’t working or is ignored, so you’re stuck trying to track down a human to fix their blasted email server.
That said, I 100% agree with you: if one of the big players gets REALLY annoyed at you, there isn’t anything you can do. Most of our email is internal to the organization(s) we do payroll and whatnot for, so we have exceptions with their mail providers to mostly bypass the insanity for them, which helps a lot.
IF you are going to run your own mail server, PLEASE turn on postmaster@ and have it go somewhere you actually READ. There are a bunch of other RFC-specified addresses you should also turn on (webmaster@, security@, etc.).
I include an implementation of strlcpy if it’s missing on the target; it’s not a complex function to implement if you cannot pull in a third-party implementation for some reason.
If you can replace strcpy with memcpy, then it’s true you should have been using memcpy in the first place. However you cannot always replace strcpy with memcpy with the same efficiency, and strlcpy has the correct semantics.
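A fallback implementation along those lines might look like this (a sketch of the usual strlcpy contract; it’s named fallback_strlcpy here only because str*-prefixed identifiers are reserved):

```c
#include <stddef.h>

/* Copy src into dst, writing at most size-1 characters and always
   NUL-terminating when size > 0. Returns strlen(src), so callers
   can detect truncation by checking whether the return >= size. */
size_t fallback_strlcpy(char *dst, const char *src, size_t size)
{
    const char *s = src;

    if (size != 0) {
        while (--size != 0) {
            if ((*dst++ = *s++) == '\0')
                return (size_t)(s - src - 1);
        }
        *dst = '\0';            /* no room left: terminate dst */
    }
    while (*s != '\0')          /* keep scanning src for the return value */
        s++;
    return (size_t)(s - src);
}
```

Usage is the familiar pattern: if `fallback_strlcpy(buf, name, sizeof buf) >= sizeof buf`, the copy was truncated.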
Totally agree! As I read this post, I remembered this other post on the same topic.
strcpy, like gets, is fundamentally unsafe and there’s no way to use it safely unless the source buffer is known at compile-time. I know multiple people who give out the advice to use strncpy instead of strcpy, but I’m not a believer. Using strncpy requires that you know the length of the destination buffer, and if you know that, then you could be using memcpy instead.
This is basically how strcpy is implemented in glibc, with the length check added. This is what unaware people believe strncpy does.
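Reconstructed, the shape being described is roughly this (a sketch; checked_copy is a hypothetical name, not an actual libc function):

```c
#include <stddef.h>
#include <string.h>

/* strcpy with a length check bolted on: measure src first, refuse
   the copy if it (plus the NUL) won't fit in dst, otherwise memcpy
   the whole thing in one shot. */
int checked_copy(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);
    if (len + 1 > dst_size)
        return -1;              /* would overflow: copy nothing */
    memcpy(dst, src, len + 1);  /* includes the terminating NUL */
    return 0;
}
```

Unlike strncpy, this either copies a complete, terminated string or fails loudly: no silent truncation and no NUL-padding of the remainder.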
This still isn’t totally foolproof – if src is not a valid string, or either pointer is NULL, that’s undefined behaviour. Also, technically all identifiers should be unique in their first 6 characters, and all identifiers beginning with str are reserved anyway, but that’s C programming for you.
I honestly don’t know what the point of strncpy is. I understand the urge to strcpy to copy a short string into a large buffer; it only copies as many bytes as necessary. But strncpy does not do this – it copies the string into the buffer, and then it fills the buffer with null bytes until it has written as many bytes as you told it to. Basically, it’s worse than memcpy in every way unless this particular weird behaviour is what you really want. To call it “niche” is not only fair, it’s kind.
Two years ago, I submitted a pull request to a C library that changed a strncpy to a memcpy after gcc started issuing warnings about bad uses of strncpy. I kinda wish gcc would issue a warning for any use of the str*cpy functions, possibly with a link to some helpful advice on what to do instead.
Using strncpy requires that you know the length of the destination buffer, and if you know that, then you could be using memcpy instead
The length of the destination buffer is the maximum number of characters you can copy. The length of the source string is the maximum number that you want to copy. In any cases where the former is smaller than the latter, you want to detect an error.
The strlcpy function is good for this case. It doesn’t require you to scan the source string twice (once to find the null terminator, once to do the copy) and it lets you specify the maximum size. It always returns a null-terminated buffer (unlike strncpy, which should never be used because if the destination is not long enough then it doesn’t null terminate and so is spectacularly dangerous).
There are three cases:
You know the length of the source and the size of the destination. Use memcpy.
You know the size of the destination. Use strlcpy, check for error (or don’t if you don’t care about truncation - the result is deterministic and if you’ve asked for a string up to a certain size then strlcpy may enforce this for you).
You don’t want to think about the size of the destination. Use strdup and let it allocate a buffer that’s big enough for your string.
In 99% of the cases I’ve dealt with, strdup is the right thing to do. Don’t worry about the string length; just let libc handle allocating a buffer for it. For most of the rest, strlcpy is the right solution. If memcpy looks like the right thing, you’re probably dealing with some abstraction over C strings rather than raw C strings. If you’re willing to do that, use C++’s std::string, let it worry about all of this for you, and spend your time on your application logic and not on tedious bits of C memory management.
strlcpy is better, and if truncation to the length of your dest buffer is what you want, then it’s the best solution. More commonly, I want to reallocate a larger buffer and try again, but you’re correct that strdup is a much simpler way to get that result most of the time.
I decided to look up the Linux implementation of strlcpy, and it works the same way as my function above: a strlen and then a memcpy. So it does still traverse the array twice, but I don’t see why that’s a problem.
I decided to look up the Linux implementation of strlcpy, and it works the same way as my function above: a strlen and then a memcpy. So it does still traverse the array twice, but I don’t see why that’s a problem.
I found that a bit surprising, but that’s the in-kernel version so who knows what the constraints were. The FreeBSD version (which was taken from OpenBSD, which is where the function originated) doesn’t. The problem with traversing the string twice is threefold:
If the string is large, the first traversal will evict parts of the beginning from the L1 cache, so you’ll hit L1 misses on both traversals.
You are far more likely to want to use the destination soon than the source, but the fact that you’ve read the source twice in quick succession will hint to the caches that you’re likely to use it again, so they’ll keep it around and prioritise evicting things you actually do want.
[Far less important on modern CPUs]: You’re running a load more instructions because you have all of the loop logic twice.
The disadvantage of this is that it’s far less amenable to vectorisation than the strlen + memcpy version. Without running benchmarks, I don’t know which is going to be slower. The cache effects won’t show up in microbenchmarks so I’d need to find a program that used strlcpy on a hot path for it to matter.
You raise some compelling points! And compiler optimizations will throw another wrench in there. Without doing rigorous benchmarking, this is all speculation, but it’s interesting speculation.
I honestly don’t know what the point of strncpy is. I understand the urge to strcpy to copy a short string into a large buffer; it only copies as many bytes as necessary. But strncpy does not do this – it copies the string into the buffer, and then it fills the buffer with null bytes until it has written as many bytes as you told it to. Basically, it’s worse than memcpy in every way unless this particular weird behaviour is what you really want. To call it “niche” is not only fair, it’s kind.
strncpy was intended for fixed-length character fields such as utmp; it wasn’t designed for null-terminated strings. It’s error prone so I replace strnc(at|py) with strlc(at|py) or memmove.
strcpy, like gets, is fundamentally unsafe and there’s no way to use it safely unless the source buffer is known at compile-time.
Huh? The danger of gets is completely different from that of strcpy (and the former is certainly worse) – gets does I/O, taking in arbitrary, almost certainly unknown input data; strcpy operates entirely on data already within your program’s address space and (hopefully) already known to be a valid, NUL-terminated string of known length. Yes, it is very possible (easy, even) to screw that up and end up with arbitrary badness, but it’s a lot easier to get right than ensuring that whatever bytes gets pulls in will contain a linefeed within the expected number of bytes (the only way I can think of offhand to use gets safely would involve dup-ing a pipe or socketpair or something you created yourself to your own stdin and writing known data into it).
(This is not to say that strcpy is great, nor to negate the point of the article that it can and quite arguably should be replaced by memcpy in most cases. But it’s not as grossly broken as gets.)
strcpy is suitable for some situations – copying static strings, or strings that are otherwise of a known length. In the latter case, memcpy is better. In the former case, I actually think strcpy is fine, even though this article argues against it. I would expect a modern compiler to optimize those copies to memcpy anyway.
gets is basically totally unusable in all situations. Somebody doing something bizarre like you mentioned should probably rethink their approach…
Happy to see that kernel developers are staunchly defending their email-based workflow, and there is no real threat that GitHub will become the core development infrastructure.
It’s strange to me that the article frames GitHub as an alternative to email that is better suited to “one-off” contributions. One of the problems with GitHub development is that you can’t contribute without a GitHub account, whereas email based workflows generally don’t require any subscription or account with the project to contribute. In that way email is actually better for one-off contributions.
If new contributors have difficulty getting “set up” to submit patches in the right format, maybe it would be good to have a bot that monitors a newbie-contributors mailing list, identifying problems and suggesting fixes for emails that don’t meet the project standards. Keeping it email-based seems like a much more realistic way to help newbies learn about the kernel development process than giving them the false impression that Linux is a GitHub project.
Show of hands: who doesn’t have a GitHub account for reasons other than ideological? Even if I refused to use GitHub for my own projects, I’d end up with one purely for contributing to projects. (And it’s still a better experience than email.) It’s not like making an account is a huge barrier either.
That, and it’s not like GitHub is doing anything dastardly with your account either.
I do have a GitHub account, but I use it to open issues.
My problem with these changes is not whether GitHub is good or bad, but rather that an external company becomes a dependency.
If gcc does something bad, no worries, we can fork it, but if GitHub does something bad, we cannot.
If it matters that much, I would suggest to run a GitLab/gitea server.
EDIT: P.S. I have friends who don’t have GitHub accounts, not because of ideology, but because GitHub does not allow them; they live in places like Iran. Also: sometimes Mother Russia blocks websites like GitHub :)
I’ve been waiting for this supposed Microsoft intervention and haven’t seen it. Everything GH is doing now is what they’d have done when they were independent, good or bad.
doing anything dastardly with your account either.
I don’t care about the quality of the service GitHub provides. You said they aren’t doing anything bad with your account (which to me means personal data).
This is right up Microsoft’s (or any big tech company’s) alley. You can claim they aren’t doing anything, but because it isn’t open source you are at their mercy.
I can’t tell what you’re arguing. That GitHub actually is better than email for one-off contributions because it’s a better experience? Your other points seem to rest on that premise, which is clearly not agreed upon.
It’s strange to me that the article frames GitHub as an alternative to email that is better suited to “one-off” contributions. One of the problems with GitHub development is that you can’t contribute without a GitHub account, whereas email based workflows generally don’t require any subscription or account with the project to contribute. In that way email is actually better for one-off contributions.
Exactly. This is why the Sane software manifesto requires that it should be possible to contribute without having an account at particular service and without having signed a contract with particular third-party.
Dependency on centralized corporations like GitHub, Google etc. is Evil. Free software and internet needs rather more decentralization.
I definitely also prefer mailing-list development. It makes much more sense when discussing changes iteratively, and the larger the change the more useful it becomes.
Technically you still need to subscribe to the mailing list, unless you’re one of the fellows that still knows about gmane. And it does require a decent email client, both to handle the traffic and to make patch submission/retrieval convenient. Since many devs nowadays just use web-based clients, they see mailing-list-based development as an annoyance, and that’s why it gets such a bad rep. This is not exclusive to kernel development. @calvin is also right in saying that almost everyone now has a GitHub account for either work or issue submission; you actually have less friction using GitHub than subscribing to a high-traffic mailing list.
It’s also true IMHO that the simplicity of GitHub makes it easy to submit one-off patches, which is both a pro and a con. That’s fine for a typo fix, but it’s also too easy to see PRs adding new functionality, or bug fixes with terrible code quality, which the author has no intention of supporting beyond the initial submission: these are just a PITA for large projects, and raising the barrier to contribution does help weed them out.
Kernel dev is not supposed to be easy, so having a non-zero barrier to entry is beneficial IMHO.
zsh’s “zmv” combined with noglob (alias mmv="noglob zmv -W") allows you to do basic renames with glob syntax: mmv *.txt *.csv. More generally zmv can be used to perform copies as well. See the documentation for some more examples.
This doesn’t beat vidir or rename, or more complex scripts, but it has a very practical syntax which works for 90% of the cases where I need some quick mass renames. I still use rename, vidir or custom scripts for the rest.
Not to cast too much shade, but one of the biggest things (imho, having written both webshit and CAD software) is that the math and CS work for CAD software is genuinely hard. Like, really hard.
Instead of training our devs to do that sort of really hard and mathy stuff, we’ve raised a generation of engineers to give conference talks on amateur (not necessarily novice, but amateur) compiler design and PLT wonkery–stuff that doesn’t scale in terms of impact.
Like, it would seem to me that more humans are impacted on the day-to-day by plastic moldings enabled by computational fluid dynamics or CAD-designed tooling than benefit from ReactJS or Babel or some new exploration in type-safety. Maybe we should try to encourage more of the next generation to go after numerical/graphical stuff.
I haven’t done that kind of programming in a long time now (it’s gonna be 10 years…) but my experience kinda matches yours…
For example, the biggest obstacle towards solving the first problem in the article (everything is single-threaded) is that the subset of people who can a) do parallel programming (in a maintainable manner), b) make heads or tails out of the math behind a CAD program, and c) understand enough about how that program is going to be used in order to come up with real improvements, not “UX improvements”, is extremely small. In what used to be my field (EM field simulation) I’ve met maybe one or two people who could do it. I definitely wasn’t one of them, I knew enough about electromagnetism to be able to understand the equations and write the programs, but definitely not enough to make any serious original contributions in this regard (in my defence, I wasn’t that interested in that particular part of EE either, it was just the only one I could work in at the time).
This is pretty obvious at every level. E.g. orange site is full of critique against CAD/CAE tools originating from programmers, and while I can’t speak for the mechanical side of things (zero expertise there), I can confidently say that on the EE end, most of it is bonkers. Lots of CAD tools are 1988-era programs with 1998-era UIs not only because they’re marketed by huge, non-software companies who are both unable to recognise and unwilling to pay for software development expertise, but also because too many people with programming expertise are too busy being right and too dogmatic (about many things, including how to build graphical software) to make any real improvement.
(You can’t just take a program for a test drive and maybe do a quick PCB design because you’ve taken up electronics as a hobby and, with a grand total of maybe 200 hours of experience spread across an year and a half, hope to figure out what the people who’ve been at it for twenty years at their day job really need. Especially when they’re operating inside an organisation, not in a living room, and with all sorts of reporting, engineering, logistics and business processes in place, some of them formal, some of them informal, all of which you have to support and facilitate. But that’s a whole other story.)
The educational gap is hard to bridge though. Maybe with enough people going into data science these days there’s some hope for it. Back in my second year of uni, when I took the introductory numerical methods course, lots of people – both students and teachers – were kinda sneering at the more programming-heavy problems in that course, after decades of Moore’s law making every program twice as fast if you just waited a few months. (Why bother with C when Matlab does all this in a few lines, etc.)
I agree. I don’t think the OP understands how hard the problem space is, even ignoring half of the issues presented.
I’d argue you have better chances at working in the field if you start from a math curriculum and add the required CS bits (which are also of the non-trivial kind). I say this as also having worked in the field of CG (mostly doing topographical DBs, graphs and routing), where even seemingly trivial algorithms that can be described in half a page in “C computer graphics” can require tons of foresight to actually work without producing degenerate cases due to numerical instability. It’s fascinating, but incredibly demanding.
The proof is that, on a global level, we have very few geometrical kernels that can work on B-rep representations. All of them are decades-old and still have plenty of degenerate cases. The good ones are too expensive for hobbyists to use. On top of that, history-based modelers still rely on a ton of heuristics to rebuild the models to fill the gaps in topological changes.
I’ve been using magit extensively, however I cannot stomach its blame mode. I find it next to useless due to the way it annonates inline. I really need something with side annotation. Can anybody suggest an alternative besides vc-annotate? I’m using “tig blame” heavily for this task.
Long time kinesis advantage user here (the first version - still working). It’s a bulky keyboard, but the tenting and palm rests (responsible for most of the bulkiness) are not small details.
I still think the ergodox is a better design compared to a traditional one, but you’ll need adequate palm rests. Tenting is useless without.
Lack of labels, UK keyboard layouts, and “context shifting” are non-issues IMHO. You get used to that, and quite quickly. The real problem is that when you have a clear preference (say, the “dactyl”), then you’d love to have the same keyboard everywhere. I can feel the discomfort when using a traditional staggered layout.
The “not enough keys” is something I can feel. Layering is not a complete substitute, and it’s not as efficient. A good thing about the advantage compared to the dactyl is the presence of the Fn and number row. Even if you don’t use those, they can be used as layer switches without compromising other functions. It also makes switching to regular layouts easier.
Split keyboards take more space than a regular keyboard irregardless. IMHO, it’s worthless to “save on a row of keys”. Have all the keys. Switching layers to type a single character in an alternate map is more expensive than moving a little bit further without having to press any modifier. This is my biggest complaint against the otherwise fantastic manuform layout.
I’ve loved watching the explosion in custom ergo keyboards over the years - especially ones like Ergodox that evolved from a hacky DIY to be a real product you can buy pre-assembled with excellent software. In the home grown vein I also think the Dactyl/Manuform designs are really cool, looks like someone will build one for you here: https://ohkeycaps.com/products/built-to-order-dactyl-manuform-keyboard
Still, nothing I’ve tried has managed to unseat my Kinesis Advantage (cherry blues, 2014) - which is ugly, has some flakey issues (fixed by the Advantage2), but is nearly perfect for typing speed and comfort for me. I recently got an Atreus as a travel-ergo board, but i enjoy the advantage enough that I’d rather spend the half suitcase lugging it around when I travel for work rather than use the much more portable Atreus.
I am very excited for the forthcoming Kinesis Advantage 360, which will combine a split design, customizable tenting, and the cupped/contoured layout from the Advantage/Advantage2, and hopefully will be more portable.
Oooooooh thanks for this. Will have to keep an eye out. I use a Moonlander now; the Kinesis Freestyle Pro with the tenting kit might get me over some of my minor gripes, but the Advantage 360 might actually make it worth switching when it lands.
Also an advantage user here (first version as well). I just checked out the “upcoming” 360. Not too psyched by the few renders available.
The Fn keys on the 1st edition are ridiculously bad (with ESC being my biggest complaint - something allegedly improved in the v2), but are completely gone here. Not a fan.
As a user of other custom split keyboards (ergodox, manuform, etc), I don’t see the point in making it “smaller”. It never will be. Split keyboards take a ton of space. Tented keyboards (or contoured, in this case) require a pretty hefty palm rest, which is easily as large as the keyboard section. And with a fully split keyboard, “naturally” widening the two sections makes using the mouse even more awkward.
I added a spacer in the bottom part of my advantage to increase the tilt backwards. Ironically, such a mod works effectively with the old case, as it rocks about the middle of the keyboard, so I’m not too displeased. I can’t say from the renders if the 360 allows tilting out of the box, but the flat base might work against this.
I like the idea of the holder “bar” in the middle. Slipping halves is a problem I have on other keyboards, and holding the two sections together does improve this… but then again, I always felt the advantage separation was “good enough”, so…
Historically the biggest problem with the Advantage is that it was only offered with a pretty weak selection of switches; do the newer models fix that problem too?
Their current boards only come in Cherry MX Brown, Red, or silent Red.
I kind of doubt they’ll ever offer enthusiast-level switches or customization - very unlikely they’ll do hot swap, for example. They’ve already said the new board might be ditching function keys to bring the cost down. I’m okay with that - since it also decreases size - but a lot of longtime fans hoping to see even higher end options seem dissatisfied on Twitter.
After the MS 4K died this year due to a coffee accident, I got a Kinesis Edge RGB - it’s been very nice. The Advantage 360 is on my list of “things that might be worth it”.
I really have no interest in hacking my own keyboard, soldering, etc. Just gimme the thing, I have coding to do!
Well, if you build a custom keyboard you also get to code it! It’s a great excuse to delay any real work :D
Jokes aside, fiddling around with your own keyboard firmware is a lot of fun and can be quite rewarding. Aside from making a very feature-rich layout with macros (both pre-recorded and dynamic), dual-purpose keys, multiple layers, etc., it’s also nice to know that your keyboard can grow with your needs.
I’m typing this on a very nice but standard keyboard (iKBC MF87), and I miss my custom firmware (I prefer QMK) every time I use it. I can’t just add a media play/pause button or move my control key to caps lock whenever I like. Sure, I can do that in software on the OS side, but then I’d have to replicate it on every OS I use, which just… sucks.
I’ve been programming for close to 25 years now (yikes!). Keyboards need to be ergonomic, reliable, usable, and replaceable. For customization, I use emacs. Replicating it OS-side is copying down the .emacs and spending thirty minutes on initial load of the software.
I don’t want to play embedded dev; I don’t want to solder anything.
I’d be curious about a chorded keyboard, but I have zero desire to do anything but treat it as a black box with a warranty; extensions should go through software.
I am going to buy a 360, as I have been (until earlier this year, when my Advantage 2 died) a two-decade Kinesis Advantage user.
John Kemeny’s 1972 book “Man and the Computer” touched on this a little bit. It’s a brief history of computers up to 1972, and then some predictions for the future. Most of the predictions were pretty good, like a world wide network and online encyclopedias.
But… he also predicted that eventually everybody would know a bit of programming, which they’d use for automating simple tasks, balancing budgets, keeping inventory at home, etc.
Who uses a computer and doesn’t know a tiny bit of excel?
I was born in the ’80s, and some form of CS was part of the public education here starting from the age of ~10 (where a math teacher would usually come in and “teach” logo and/or pascal reading from a book…). It was… not great, but it was indeed recognized. CS was taught in all schools up to university - where I obviously became biased.
I meet fellows from a multitude of disciplines (biologists, neurologists, statisticians…), and they all had CS as part of their curriculum. Nothing terribly in-depth in some cases, but I would absolutely consider this adequate for what you wrote above. So we’re not doing anything terribly wrong on the education front. And, just like math, what you were taught is not indicative of what you actually retain.
IMHO the real issue is that in the ’80s/’90s any home computer or calculator I got came with a programming manual and full freedom to tinker. Today I’m paying a premium on expensive phones I don’t want just to get unlockable firmware.
I find the idea intriguing, but I’m wondering if the cost (disabling adblocking at the DNS level for people like me, having your browser register a lot of clicks, having your browser tracked a lot more) is worth the benefit (“chaoticizing” ad databases).
Maybe it’s better suited as a plugin installed on the computers of those who don’t want adblocking because they might miss stuff.
I also find it intriguing, but it’s not something I’d do due to the traffic/latency benefits of running with an ad blocker.
I wonder, however, if being identified as a bot due to this behavior could actually be beneficial. On one hand, this would probably lead to your clicks being ignored (defeating the purpose), but on the other hand you might enter a completely separate tracking workflow and/or be ignored entirely after being marked as “fraudulent activity”.
An interesting dynamic I’ve observed as an embedded software engineer is that all developers are expected to understand the parts of the stack above them. So, the hardware designer is expected to also know firmware and software development. The firmware developer is expected to know software development. But don’t ever expect a software developer to know what a translation lookaside buffer is or how to reflash the board. In addition to that, if the bottom of the stack (e.g. the hardware) is broken, it’s impossible to do solid work on top of it. This is why talent and skill migrate towards the bottom of the stack, because that layer needs to be done right for everything else to even have a chance of working.
In my own experience firmware/hardware/EE development is very fluid. FW development frequently bleeds into EE/HW development, and the same is true for the other two. It’s almost a necessary thing: the complete machine is one, and wouldn’t work without the sum of these.
But it’s true there’s generally a tipping point above the OS level where knowing the lower-level details has almost zero impact on the stuff you’re writing, especially with modern CPU speeds. Is it bad? IMHO good engineers are always aware of the stack above and below them.
If I’m writing JS, my lower-level stack is equally vast and complex: browser, protocols and latencies involved, network infrastructure, caching, DNS… I’m not convinced embedded dev is “harder” by itself.
The difference is that maybe I’ve seen many more clueless JS devs than FW devs, but the latter exist too.
Agreed. I’d say that it’s mostly a case of embedded being different. And since a lot more people work with and write about higher level code, it’s not surprising that there’s more and better help, documentation and tools available there, which makes it easier to learn that stuff even if the subject matter isn’t inherently easier.
Very nice rebuttals; however, to me the main reason emacs[*] is worth it is that text-centric interfaces are, in many ways, superior to the alternatives we have so far.
Think about it: every piece of the UI is manipulable with the same, uniform interface. Getting better at one task within this system makes that knowledge instantly transferable to any other task. This is why in emacs there’s a tendency to include everything into this system: by doing that, it becomes instantly manipulable and uniform. It’s not about the language, or the editor itself; it’s a combination of factors. By making the UI out of the same medium used for data entry (text), we allow interactions which do not need to be foreseen by the developer. It’s homoiconicity at the user-facing level.
The reason I want to use my favorite editor instead of the editor already in the IDE is that I want to use all my uniform manipulation knowledge, always, everywhere.
Likewise, this is why in many cases the CLI feels more powerful than a GUI, not just at a superficial level. The bare-bones I/O system, even when primitive (such as *nix-style plumbing), is much more amenable to manipulation and transformations by the operator.
I haven’t seen anything that allows the same degree of flexibility in any other GUI system (that doesn’t just degrade into “you have scripting”) so far. Can we do better? I honestly cannot imagine we can as long as our input system is text. I feel like we need a breakthrough in the physical layer to unlock something new here. Maybe VR can do that.
[*] I’m a 20+ year emacs and vi user, btw. I juggle between the two. Everything I said applies to both.
I’ve been running my own email server ever since I was on dialup (yes!), more than 20 years now. I do not agree. SMTP is actually a resilient protocol which doesn’t require 100% uptime, just sensible configuration, which is not that hard to check and very low maintenance once set up. There are many excellent SMTP servers nowadays which do this almost automatically.
But I’ve seen this statement repeated over and over and over.
The main issue with SMTP for me has been the complete arrogance of the biggest players in the field like gmail, hotmail, etc. The number of independent SMTP servers has been shrinking, making the big players even more powerful every year. Yet they are currently the #1 source of spam for my host (open proxies and relays are incredibly rare for me nowadays). Spam from their networks gets a green pass and checks all the boxes; however, they provide only black-hole-style spam reporting on their end while blocking almost everyone else with automated checks and zero human intervention.
I hate to see that email is being centralized further and further. It’s actually decreasing the quality of the service for everybody else.
@ $WORK, we max out at about 5k emails a day. Our daily average is about 1,500 emails, giving us roughly 45k emails a month. We aren’t huge by any means, but we would spend ~$100/month with any professional email sending service (sendgrid, mailgun, etc). We instead do our own email. We send things like purchase orders, payroll info, timesheets and other backoffice/accounting stuff via email; we are not spam by any means. We have a special domain just for this stuff, outside of the organizational domain, we own address space direct from ARIN, etc.
The big players, when they get annoyed at you, usually at least send along a URL saying: hey, we are mad at you, go here for details. It’s usually a few clicks and a little babysitting and life is good again, until they get annoyed again a month or 6 down the line.
It’s all the small players that are annoying: you get nothing but an SMTP 550, their postmaster@ isn’t working or is ignored, so you are stuck trying to track down a human to fix their blasted email server.
That said, I 100% agree with you: if one of the big players gets REALLY annoyed at you, there isn’t anything you can do. Most of our email is internal to the organization(s) we do payroll and whatnot for, so we have exceptions with their mail providers to mostly bypass the insanity for them, which helps a lot.
IF you are going to run your own mail server, PLEASE turn on postmaster@ and have it go somewhere you actually READ. There are a bunch of other RFC’d email addresses you should also turn on (webmaster@, security@, etc)
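For reference, on a typical Unix MTA (Postfix, Sendmail, and friends) the standard role addresses can be routed to a mailbox you actually read via /etc/aliases. A sketch, with example.com and the admin address standing in for your own:

```
# /etc/aliases - route RFC 2142 role addresses to a mailbox you read
postmaster: admin@example.com
webmaster:  admin@example.com
security:   admin@example.com
abuse:      admin@example.com
# Then rebuild the alias database (the command is MTA-specific):
#   newaliases
```

The exact file location and rebuild step vary by MTA, so check your server's documentation.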
I feel like I need corroboration to know if this is really good advice or not. 😄
I include an implementation of strlcpy if it’s missing on the target; it’s not a complex function to write if you cannot pull in a third-party implementation for some reason.
If you can replace strcpy with memcpy, then it’s true you should have been using memcpy in the first place. However, you cannot always replace strcpy with memcpy with the same efficiency, and strlcpy has the correct semantics.
I do agree the *_s variants are pointless.
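A minimal single-pass fallback with the standard strlcpy semantics (copy at most size-1 bytes, always NUL-terminate when size > 0, and return the full source length so callers can detect truncation) might look like this; the name my_strlcpy is a placeholder to avoid colliding with a libc-provided strlcpy:

```c
#include <stddef.h>

/* Sketch of a fallback strlcpy for targets that lack one.
 * Copies at most size-1 bytes, always NUL-terminates (if size > 0),
 * and returns strlen(src) so the caller can detect truncation. */
size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    const char *s = src;

    if (size > 0) {
        while (--size > 0 && *s != '\0')
            *dst++ = *s++;
        *dst = '\0';
    }
    while (*s != '\0')      /* keep walking to compute the return value */
        s++;
    return (size_t)(s - src);
}
```

Truncation is then detected by checking whether the return value is >= the size you passed in.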
Totally agree! As I read this post, I remembered this other post on the same topic.
strcpy, like gets, is fundamentally unsafe and there’s no way to use it safely unless the source buffer is known at compile-time. I know multiple people who give out the advice to use strncpy instead of strcpy, but I’m not a believer. Using strncpy requires that you know the length of the destination buffer, and if you know that, then you could be using memcpy instead.

If you want a safer strcpy, here you go:

This is basically how strcpy is implemented in glibc, with the length check added. This is what unaware people believe strncpy does.

This still isn’t totally foolproof – if src is not a valid string, or either pointer is NULL, that’s undefined behaviour. Also, technically all identifiers should be unique in their first 6 characters, and all identifiers beginning with str are reserved anyway, but that’s C programming for you.

I honestly don’t know what the point of strncpy is. I understand the urge to use strcpy to copy a short string into a large buffer; it only copies as many bytes as necessary. But strncpy does not do this – it copies the string into the buffer, and then it fills the buffer with null bytes until it has written as many bytes as you told it to. Basically, it’s worse than memcpy in every way unless this particular weird behaviour is what you really want. To call it “niche” is not only fair, it’s kind.

2 years ago, I submitted to a C library a pull request which changed a strncpy to a memcpy when gcc started issuing warnings about bad uses of strncpy. I kinda wish gcc would issue a warning for any use of the str*cpy functions, possibly with a link to some helpful advice on what to do instead.

The length of the destination buffer is the maximum number of characters you can copy. The length of the source string is the maximum number that you want to copy. In any case where the former is smaller than the latter, you want to detect an error.
The strlcpy function is good for this case. It doesn’t require you to scan the source string twice (once to find the null terminator, once to do the copy) and it lets you specify the maximum size. It always returns a null-terminated buffer (unlike strncpy, which should never be used because if the destination is not long enough then it doesn’t null terminate and so is spectacularly dangerous). There are three cases:

- You know the string fits in the destination: use memcpy.
- You know only the size of the destination: use strlcpy, check for error (or don’t if you don’t care about truncation - the result is deterministic and if you’ve asked for a string up to a certain size then strlcpy may enforce this for you).
- You don’t have a fixed-size destination: use strdup and let it allocate a buffer that’s big enough for your string.

99% of cases I’ve used, strdup is the right thing to do. Don’t worry about the string length, just let libc handle allocating a buffer for it. For most of the rest, strlcpy is the right solution. If memcpy looks like the right thing, you’re probably dealing with some abstraction over C strings, rather than raw C strings. If you’re willing to do that, use C++’s std::string, let it worry about all of this for you, and spend your time on your application logic and not on tedious bits of C memory management.

strlcpy is better, and if truncation to the length of your dest buffer is what you want, then it’s the best solution. More commonly, I want to reallocate a larger buffer and try again, but you’re correct that strdup is a much simpler way to get that result most of the time.

I decided to look up the Linux implementation of strlcpy, and it works the same way as my function above: a strlen and then a memcpy. So it does still traverse the array twice, but I don’t see why that’s a problem.

I found that a bit surprising, but that’s the in-kernel version so who knows what the constraints were. The FreeBSD version (which was taken from OpenBSD, which is where the function originated) doesn’t. The problem with traversing the string twice is threefold:
The disadvantage of this is that it’s far less amenable to vectorisation than the strlen + memcpy version. Without running benchmarks, I don’t know which is going to be slower. The cache effects won’t show up in microbenchmarks so I’d need to find a program that used strlcpy on a hot path for it to matter.

You raise some compelling points! And compiler optimizations will throw another wrench in there. Without doing rigorous benchmarking, this is all speculation, but it’s interesting speculation.
strncpy was intended for fixed-length character fields such as utmp; it wasn’t designed for null-terminated strings. It’s error-prone, so I replace strnc(at|py) with strlc(at|py) or memmove.
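A small illustration of those fixed-length-field semantics (the NUL padding is what utmp-style records want, and the missing terminator on truncation is the footgun):

```c
#include <string.h>

/* strncpy pads the rest of a fixed-size field with NUL bytes... */
void demo_pad(char field[8])
{
    strncpy(field, "ab", 8);     /* field = 'a','b',0,0,0,0,0,0 */
}

/* ...but does NOT NUL-terminate when the source fills the buffer. */
void demo_trunc(char small[4])
{
    strncpy(small, "abcdef", 4); /* small = 'a','b','c','d' - no NUL! */
}
```

The second case is why treating strncpy output as a C string without an explicit terminator check is dangerous.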
Huh? The danger of gets is completely different from that of strcpy (and the former is certainly worse) – gets does I/O, taking in arbitrary, almost-certainly unknown input data; strcpy operates entirely on data already within your program’s address space and (hopefully) already known to be a valid, NUL-terminated string of a known length. Yes, it is very possible (easy, even) to screw that up and end up with arbitrary badness, but it’s a lot easier to get right than ensuring that whatever bytes gets pulls in are going to contain a linefeed within the expected number of bytes (the only way I can think of offhand for using gets safely would involve dup-ing a pipe or socketpair or something you created yourself to your own stdin and writing known data into it).

(This is not to say that strcpy is great, nor to negate the point of the article that it can and quite arguably should be replaced by memcpy in most cases. But it’s not as grossly broken as gets.)

Okay, I might have exaggerated there. :^) strcpy is suitable for some situations – copying static strings, or strings that are otherwise of a known length. In the latter case, memcpy is better. In the former case, I actually think strcpy is fine, even though this article argues against it. I would expect a modern compiler to optimize those copies to memcpy anyway. gets is basically totally unusable in all situations. Somebody doing something bizarre like you mentioned should probably rethink their approach…

Happy to see that kernel developers are staunchly defending their email-based workflow, and there is no real threat that GitHub will become the core development infrastructure.
It’s strange to me that the article frames GitHub as an alternative to email that is better suited to “one-off” contributions. One of the problems with GitHub development is that you can’t contribute without a GitHub account, whereas email based workflows generally don’t require any subscription or account with the project to contribute. In that way email is actually better for one-off contributions.
If new contributors have difficulty getting “set up” to submit patches in the right format, maybe it would be good to have a bot that monitors a newbie contributors’ mailing list, identifying problems and suggesting fixes for emails that don’t meet the project standards. Keeping it email-based seems like a much more realistic way to help newbies learn about the kernel development process, rather than giving them the false impression that Linux is a GitHub project.
Show of hands: who doesn’t have a GitHub account for reasons other than ideological? Even if I refused to use GitHub for my own projects, I’d end up with one purely for contributing to projects. (And it’s still a better experience than email.) It’s not like making an account is a huge barrier either.
That, and it’s not like GitHub is doing anything dastardly with your account either.
Anyone whom GitHub has banned, or whom they are legally forbidden from working with as an American company.
I do have a GitHub account, but I use it to open issues.
My problem with these changes is not that GitHub is good or bad, but rather an external company becomes a dependency.
If gcc does something bad, no worries, we can fork it, but if GitHub does something bad, we cannot.
If it matters that much, I would suggest to run a GitLab/gitea server.
EDIT: P.S. I have friends who don’t have GitHub accounts, not because of ideology, but because GitHub does not allow them; they live in places like Iran. Also: sometimes Mother Russia blocks websites like GitHub :)

Of course, Microsoft would never!
I’ve been waiting for this supposed Microsoft intervention and haven’t seen it. Everything GH is doing now is what they’d have done when they were independent, good or bad.
Idc about the quality of the service GitHub provides. You said they aren’t doing anything bad with your account (which to me means personal data).
This is right up Microsoft’s (or any big tech company’s) alley. You can claim they aren’t doing anything, but because it isn’t open source you are at their mercy.
Yeah most of the things that are bad about Github are equally bad regardless of whether or not they are a separate firm or owned by Microsoft.
The dastardly thing that GitHub is doing is GitHub.
I can’t tell what you’re arguing. That GitHub actually is better than email for one-off contributions because it’s a better experience? Your other points seem to rest on that premise, which is clearly not agreed upon.
Exactly. This is why the Sane software manifesto requires that it be possible to contribute without having an account at a particular service and without having signed a contract with a particular third party.
Dependency on centralized corporations like GitHub, Google etc. is Evil. Free software and the internet need more decentralization.
I definitely also prefer mailing-list development. It makes much more sense when discussing changes iteratively, and the larger the change the more useful it becomes.
Technically you still need to subscribe to the mailing list, unless you’re one of the fellows that still knows about gmane. And it does require a decent email client, both to handle the traffic and to make patch submission/retrieval convenient. Since many devs nowadays just use web-based clients, they see mailing-list-based development as an annoyance, and that’s why it gets such a bad rep. This is not exclusive to kernel development. @calvin is also right in saying that almost everyone now has a github account for either work or issue submission; you actually have less friction using github than subscribing to a high-traffic mailing list.
It’s also true IMHO that the simplicity of github makes it easy to submit one-off patches, which is both a pro and a con. That’s fine for a typo fix, but it’s also too easy to end up with PRs adding new functionality, or bug fixes with terrible code quality, which the author has no intention of supporting beyond the initial submission: these are just a PITA for large projects, and raising the barrier to contribution does help weed them out.
Kernel dev is not supposed to be easy, so having a non-zero barrier to entry is beneficial IMHO.
zsh’s “zmv” combined with noglob (alias mmv="noglob zmv -W") allows you to do basic renames with glob syntax: mmv *.txt *.csv. More generally, zmv can be used to perform copies as well. See the documentation for some more examples.

This doesn’t beat vidir or rename, or more complex scripts, but it definitely has a very practical syntax which works for 90% of the time I need some quick mass renames. I still use rename, vidir or custom scripts.
Not to cast too much shade, but one of the biggest things (imho, having written both webshit and CAD software) is that the math and CS work for CAD software is genuinely hard. Like, really hard.
Instead of training our devs to do that sort of really hard and mathy stuff, we’ve raised a generation of engineers to give conference talks on amateur (not necessarily novice, but amateur) compiler design and PLT wonkery–stuff that doesn’t scale in terms of impact.
Like, it would seem to me that more humans are impacted on the day-to-day by plastic moldings enabled by computational fluid dynamics or CAD-designed tooling than benefit from ReactJS or Babel or some new exploration in type-safety. Maybe we should try to encourage more of the next generation to go after numerical/graphical stuff.
I haven’t done that kind of programming in a long time now (it’s gonna be 10 years now…) but my experience kinda matches yours…
For example, the biggest obstacle towards solving the first problem in the article (everything is single-threaded) is that the subset of people who can a) do parallel programming (in a maintainable manner), b) make heads or tails out of the math behind a CAD program, and c) understand enough about how that program is going to be used in order to come up with real improvements, not “UX improvements”, is extremely small. In what used to be my field (EM field simulation) I’ve met maybe one or two people who could do it. I definitely wasn’t one of them, I knew enough about electromagnetism to be able to understand the equations and write the programs, but definitely not enough to make any serious original contributions in this regard (in my defence, I wasn’t that interested in that particular part of EE either, it was just the only one I could work in at the time).
This is pretty obvious at every level. E.g. the orange site is full of critiques of CAD/CAE tools coming from programmers, and while I can’t speak for the mechanical side of things (zero expertise there), I can confidently say that on the EE end, most of them are bonkers. Lots of CAD tools are 1988-era programs with 1998-era UIs not only because they’re marketed by huge, non-software companies who are both unable to recognise and unwilling to pay for software development expertise, but also because too many people with programming expertise are too busy being right and too dogmatic (about many things, including how to build graphical software) to make any real improvement.
(You can’t just take a program for a test drive and maybe do a quick PCB design because you’ve taken up electronics as a hobby and, with a grand total of maybe 200 hours of experience spread across a year and a half, hope to figure out what the people who’ve been at it for twenty years at their day job really need. Especially when they’re operating inside an organisation, not in a living room, with all sorts of reporting, engineering, logistics and business processes in place, some of them formal, some of them informal, all of which you have to support and facilitate. But that’s a whole other story.)
The educational gap is hard to bridge though. Maybe with enough people going into data science these days there’s some hope for it. Back in my second year of uni, when I took the introductory numerical methods course, lots of people – both students and teachers – were kinda sneering at the more programming-heavy problems in that course, after decades of Moore’s law making every program twice as fast if you just wait a few months. (Why bother with C when Matlab does all this in a few lines, etc. etc.)
I agree. I don’t think the OP understands how hard the problem space is, even ignoring half of the issues presented.
I’d argue you have better chances of working in the field if you start from a math curriculum and add the required CS bits (which are also of the non-trivial kind). I say this having also worked in the field of CG (mostly doing topographical DBs, graphs and routing), where even seemingly trivial algorithms that can be described in half a page of “C computer graphics” can require tons of foresight to actually work without producing degenerate cases due to numerical instability. It’s fascinating, but incredibly demanding.
The proof is that, on a global level, we have very few geometric kernels that can work on B-rep representations. All of them are decades old and still have plenty of degenerate cases. The good ones are too expensive for hobbyists to use. On top of that, history-based modelers still rely on a ton of heuristics to rebuild models and fill the gaps across topological changes.
I’ve been using magit extensively; however, I cannot stomach its blame mode. I find it next to useless due to the way it annotates inline. I really need something with side annotation. Can anybody suggest an alternative besides vc-annotate? I’m using “tig blame” heavily for this task.