1. 8

    Maybe I’m just not over the hump yet, but my experience with Rust hasn’t been super happy. Last month I built both a Rust and a Nim binding to a C API (that I wrote and own). I’m a newbie at both languages. The Nim binding was a joy to write, simple and clean. The Rust one involved tons of fighting with the borrow checker, and a couple of areas I just had to leave as ugly hacks because doing it right created bizarre errors I couldn’t understand — something about interactions between lifetimes and generics.

    I don’t think I’m having the typical newbie problems understanding lifetimes. I’ve been using C since 1981 and C++ since 1990, and I understand stacks and move semantics and where data lives. I appreciate the work Rust has done to make delicate lifetime dependencies checkable by the compiler! But it seems to have made the language extremely complex. I found a blog post about “What newbies get wrong about understanding lifetimes” last week, and it made me basically give up — trying to understand the subtle distinctions between what I thought was going on and what is actually going on just made my brain hurt, in the same way it hurts when I try to understand template metaprogramming or hyperbolic geometry.

    Honestly I’m not sure the complexity of the lifetime analysis in Rust is worth it, compared to the minor performance overhead of using some ref-counted objects (as in Nim, or my current C++ code.) But I’m willing to be convinced otherwise.

    1. 5

      You can use ref-counted objects in Rust too, and in fact I recommend that people having trouble with the borrow checker use them. Reference counting is well supported in Rust, although it currently looks ugly. I believe making reference counting look more beautiful and natural is the important next step for Rust, since it is often the right choice.

      1.  

        More ergonomic RC for Rust sounds interesting. Is there any work being done towards that goal?

      2.  

        I hear you! I hate to give the unsatisfying answer, but your description of your experience so far sounds an awful lot like mine before I got “over the hump”.

        I would posit also that your extensive experience with C and C++ may even be hindering you, in one particular way, and that is this:

        I’ve been using C since 1981 and C++ since 1990, and I and understand stacks and move semantics and where data lives.

        trying to understand the subtle distinctions between what I thought was going on and what is actually going on just made my brain hurt

        I suspect the process you’re going through is one of relinquishing control over your understanding of where data lives, its owners, and its lifetimes, and giving it to the compiler. As someone who has been writing C and C++ for a long time, and has therefore been forced to reason about these things entirely in your own head to the point where it’s second nature now, I can imagine the process of unlearning that is uncomfortable and painful.

        I wasn’t an experienced user of C or C++ when I came to Rust, though, so I’m interested to hear what you think about my perspective on this. Am I totally off base here?

      1. 17

        If I’m building a static site for a blog, I think the number one requirement will be stability of dependencies over time. I should be able to come back five years later and get it up and running without spending a day fixing all the broken things.

        Given my experience with NodeJS, I do not think it meets that standard. I’d probably give Ruby a pass too.

        Maybe something built on Rust or Go would be my first choice, since they build a single executable and don’t depend on a packaging environment to run.

        1.  

          I’m pretty happy with hugo and Netlify. I seem to be averaging a post a year lately, but my handful of shell aliases and locally installed static binary for hugo are working fine.

          The only trouble I’ve had was that I used a theme as a git submodule and the author removed the upstream repo. Which just convinces me to remove even more dependencies.

          1.  

            I built a blog with Hugo back when it was new, then came back to it a few years later and Hugo wouldn’t run (some incompatibility between Go and macOS), updated Hugo, and everything broke because they’d changed so many things with e.g. theming. I did get it working again, but it took a while.

            1.  

              Ah, well, there you see my secret. I’ll just never update Hugo!

          2.  

            I tend to agree. I wrote up a really nice SSG setup I put together with Flask, Jinja2, and Markdown, and a super simple Flask plugin called Frozen-Flask (for generating static HTML from Flask routes), which one can easily vendorize.

            https://lobste.rs/s/s91ry0/most_dynamic_static_site_you_ll_ever_see#c_jrzsib

            The nice thing about Flask and Jinja2 is that they are completely ubiquitous as a web app and template framework in the Python community, and its maintainers have essentially declared the projects “done” (+/- the odd security fix). Thus, coming back to a repo built with this setup years later, stuff Just Worked.
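
            For reference, here’s a minimal sketch of that kind of setup (my own illustrative example, not the exact code from the linked post); Frozen-Flask just walks the registered routes and writes each one out as static HTML:

                from flask import Flask
                from flask_frozen import Freezer   # Frozen-Flask

                app = Flask(__name__)
                freezer = Freezer(app)             # static output lands in ./build by default

                @app.route("/")
                def index():
                    # In the real setup this would render a Jinja2 template fed from Markdown files.
                    return "<h1>Hello, static world</h1>"

                if __name__ == "__main__":
                    freezer.freeze()               # write every route out as plain HTML instead of serving it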

            Sometimes it’s best not to let perfect be the enemy of the good.

            1.  

              I have some projects on Vercel / Zeit that are 3-4 years old, no problems with them whatsoever. It does lock you in a bit but it’s not a big deal since you can always next export.

            2.  

              I mean my setup also fits the bill as it would have worked fine (except for markdown processor and rsync) in the 1980s.

              1.  

                You’ve probably moved computers a few times since then. How much effort does it take to get the project up and running without the original dev environment? That’s my biggest concern regarding continuity.

                1.  

                  Well, you need a UNIX-like machine, a server, and a bunch of headers and footers for the indexes and Atom feeds (which I have for my blog), then just some processor. (I haven’t had this setup since the 80s, as neither I nor Atom/HTTP were around back then :); I said 80s for hyperbole’s sake, and because most of the setup is basic ed commands and Bourne shell (not bash) syntax.)

                  Sadly, ed is preinstalled less and less often on modern UNIX-like machines, but ed syntax has not changed in longer than most of the solutions here have existed.

                  I believe the Lindy effect is a good guide for designing such (continuity-minded) projects. Would it have worked right when the internet began? In my case, yes - except for rsync and markdown, which are both just variables for generating the HTML and could be replaced with something else.

                  1.  

                    That sounds like an extremely boring solution that will probably outlive me ;-)

            1. 2

              It’s interesting to read the complaints about accidentally hitting the trackpad. I have a 2018 MBP and haven’t had any problem with that, never even heard of it being a problem. It must be due to individual variation in hand position? Or maybe to people resting their wrists on the chassis (which you shouldn’t do — any pressure on the wrist while typing is an RSI trigger.)

              The Touch Bar should be amazing but isn’t. I’ve always wanted function keys that display what they do at that moment! But having them be flat spots instead of physical keys seems to wreck the usability. I guess I’ll have to wait for those keyboards with a little display in each key to come down in price.

              1. 9

                Kind of amazing the progress this project is making. On the flip side I can’t help but wonder what is attracting people to it over Haiku. They’re both BSD licensed, both use C++. Both hark back to a similar era. Both GUI first. Haiku is arguably a lot more functional and further along though.

                Some ideas that come to mind:

                • I wouldn’t be surprised if at least part of it was that Serenity is on GitHub and uses a GitHub pull request workflow.
                • Serenity looks easy to build and supports a wide range of build hosts (Linux, macOS, FreeBSD, OpenBSD).
                • Being newer it’s easier for someone to just show up with an idea, implement it and have it accepted.
                • Perhaps it appeals to a broader audience since it’s Windows inspired vs. BeOS.
                1. 9

                  Once upon a time in the early/mid ’00s, there were a plethora of operating system projects that existed “just for the heck of it”, and most of them are essentially dead at this point for one reason or another. Haiku had a purpose that most of them did not: an unrealized vision of a better future for computing.

                  To me, at least, SerenityOS feels like a callback to those days when developers got together and learned something about computers and operating systems by building one. Which is ultimately pretty cool; but it’s a pipe dream to think you will be able to use it as your primary OS anytime soon. Most of the people working on SerenityOS seem to be doing it for the fun of it, which is great! Obviously Haiku has a ton of “solved problems” that SerenityOS, being newer, does not, and you can learn a ton by working on it.

                  But in terms of being a realistic possibility for a “daily driver”? Yeah, SerenityOS is years and years away from that. And at least when this project first made the rounds, I know some of the developers said then that they wanted to get there. That’s not a sentiment uncommon to new OS developers, but, well, Haiku has been around for two decades, and once made progress as rapidly and as impressively as SerenityOS, but as you can see, our install-base is still rather small, and people still have lists of things that we would need to do in order for them to make the jump.

                  It’s also worth noting here that SerenityOS has (or had?) a policy of not using any imported code whatsoever, even for things like ACPI, where Linux, *BSD, Haiku, etc. all use Intel’s ACPICA (and even the OSDev wiki recommends hobbyist OS developers do, too, simply because of how absurdly complicated ACPI is), or the libc, or the coreutils, or the shell, or any number of other things which Haiku et al. reuse from one another. That means that SerenityOS has a task ahead of it that is unbelievably massive even in comparison to Haiku, which uses the GNU coreutils, bash, musl’s libm, etc. and does not completely and totally re-create every wheel. Again, doing those things is an excellent way to learn; but it’s more or less incompatible with using the system as an actual daily driver.

                  1. 3

                    Linux, *BSD, Haiku, etc. all use Intel’s ACPICA

                    Except OpenBSD. They have to deal with some fun bugs due to their own implementation sometimes :)

                    1. 1

                      Inspired by this thread, I spent some time on the SerenityOS issue tracker. I am interested in a well-designed permissively licensed OS that is not written in C. I was not convinced that SerenityOS is going to be that system.

                      It currently targets x86-32 and has a single userspace ABI. Adding good layering for these with a clean set of abstractions is really hard to get right and causes massive pain later on if you don’t. Their approach is to just incrementally refactor to get x86-64 support, without thinking about a final design. Once you have two architectures supported, cleaning up the abstractions is hard because your testing burden is high. Once you have three, your mistakes are basically baked in forever.

                      The BSD family was quite lucky here, because the VAX port required them to think hard about these abstractions, get them wrong, and then copy the ones that Mach built based on their experience. Linux was less fortunate and so ended up with a split between architecture-specific and architecture-agnostic code that is quite painful in some places (for example, system call numbers are architecture specific, managing signal delivery for the product of architecture and ABI is quite ugly).

                    2. 11

                      I’ve played a lot with Serenity and made a couple of modest contributions. I have also played about with Haiku a little bit. For me, I was more attracted to the former, for a few reasons. Firstly, the GUI is much nicer - it’s almost exactly what I want in a classic style desktop environment, and makes me feel a little nostalgic for the Windows interfaces on which I learnt to use computers. For the most part I find Haiku’s interface to be a little ugly - although I do love the boot screen! Secondly, Serenity is a clean sheet design - built in a thoroughly modern way simply in accordance with the intuition of Andreas and the other developers, rather than in an attempt to cling on to compatibility with an obscure OS that was dead before I was even old enough to use a computer. Thirdly - and this is mostly as a consequence of Andreas’ videos - Serenity felt to me like a system that was alive and blossoming, that I could jump into and make a difference to, while Haiku seemed to me an anachronism kept alive by a cabal of mysterious maintainers who refuse to let go of the past. I’m sure that’s not the case, and that Haiku’s community is welcoming and forward thinking, but it’s hard to be inspired into lending a hand by simply seeing a new set of patch notes every two years.

                      TL;DR - Haiku is clinging to the past while Serenity is taking interfaces of the past back to the future.

                      1. 13

                        For the most part I find Haiku’s interface to be a little ugly

                        If you are speaking purely of the “look and feel” of Haiku, why not just write a Windows “Decorator” (window border styling) and “Control Look” (control theming)? You could get almost a pixel-perfect recreation of the Serenity GUI on Haiku. We personally just like the way Haiku looks now, but anyone can customize it!

                        rather than in an attempt to cling on to compatibility with an obscure OS that was dead before I was even old enough to use a computer.

                        I think you will find that we are more modern-minded than even Linux in terms of how the system is put together. Maybe not quite as modern-minded as Serenity, but the Be origins have not constrained us. The package filesystem is proof enough of that, as are the use of C++ in the kernel and quite a lot of other things under the hood.

                        while Haiku seemed to me an anachronism kept alive by a cabal of mysterious maintainers who refuse to let go of the past

                        Dude, when I started contributing to Haiku the better part of a decade ago, I was in high school. We’re not all (or even at this point, mostly) “old geezers”. We have forums, an IRC channel, mailing lists, a bug tracker, and it is pretty easy to see who we are; and our technical decisions are pretty good proof that we absolutely know how to let go of the past.

                        but it’s hard to be inspired into lending a hand by simply seeing a new set of patch notes every two years.

                        We’ve been publishing monthly Activity Reports on the blog detailing what’s been going on in the Haikusphere for multiple years now, and new software (and screenshots) appear in the Depot on a weekly basis.

                        1. 4

                          If you are speaking purely of the “look and feel” of Haiku…

                          My impressions are naturally, if unfairly, formed off what the system looks like in its default state, not what could be achieved with a weekend’s worth of programming.

                          I think you will find that we are more modern-minded than even Linux…

                          Fair enough. I always got the sense skimming through the project that it was a bit tied down by its adherence to BeOS but I’m happy to be wrong on this point.

                          We’re not all (or even at this point, mostly) “old geezers”.

                          Again, I don’t doubt you’re right, but the impression I got of the community was of a very old project, and the assumption I made from that was that it would be quite set in its ways. Perhaps I am completely wrong about that.

                          We’ve been publishing monthly Activity Reports on the blog

                          Like most people, the honest truth is that with the exception of a few blogs that I go out of my way to check, I only really see what bubbles up on HN, lobsters, Reddit, /g/, etc. Hence my exposure to Haiku is pretty limited.

                          I really don’t want my original post to be interpreted as ‘this is why Serenity is better than Haiku’. My intention was to rather explain ‘this is why a bored student browsing the techy parts of the internet might be more drawn to Serenity than to Haiku’. I have a great deal of respect for your project and your comments in this thread have inspired me to perhaps check in on it with a little more regularity :-)

                          1. 2

                            The website indeed could use a refresh with more information as to what we do and what we are about, sure. But, I mean, if you go look at Fedora or Ubuntu or something, are their websites really that much more engaging when it comes to getting involved with the project? Not really. So it’s a hard balance to find for us; because on one hand we are initially if not ultimately targeting the same market the “Linux Desktop” is, while we have a fraction of both the volunteers and the financial support they do.

                            1. 1

                              It’s perhaps natural, then, that hobbyists with a bit of time on their hands are more likely to feel able to get stuck in with a GitHub project which features, front-and-centre, a YouTube channel of a guy making near-daily coding logs, rather than something like Haiku, which - to its credit - looks far more like a professional endeavour than an amateur collective’s labour-of-love.

                              1. 1

                                Well, the “ports” portion of the Haiku project lives on GitHub, and there is a GitHub mirror of the main repository with a very friendly README.

                                Yes, we are more focused on actually getting development done in what precious little spare time we have than making YouTube video logs about it. But Kyle Ambroff-Kao, one of the newer names (he was granted commit access last month :) has started doing development screencasts, so maybe some of us do have the time…

                        2. 7

                          I’m not sure I completely agree with some of your characterisation of Haiku but I get your point.

                          and makes me feel a little nostalgic for the Windows interfaces on which I learnt to use computers. For the most part I find Haiku’s interface to be a little ugly

                          It’s funny, I grew up on Mac OS and consider classic Windows supremely ugly. To me Haiku (or Platinum Mac OS) is my ideal classic vibe. So, for me the appearance of SerenityOS puts me off a little, I guess in the same way Haiku might for you. :)

                          Anyway, thanks for responding. It seems my intuition might be on the right track. I’m interested to watch how the project progresses.

                          1. 3

                            From its home page, SerenityOS’s key selling point seems to be it’s “a love letter to ’90s user interfaces” … something I don’t grok at all, the 90s being that awkward age of “wow, if we color the top and left edge darker and the bottom and right lighter. It looks like it’s inset!!” in GUI design. But at least they’re not aping Motif…

                            And this matters, because all those sharp contrasts and hard lines create a ton of visual noise that makes it hard to parse the interface and focus on the important stuff. I freely admit today’s GUIs have their problems and silly fads, but they’re so much better.

                            Behind the GUI, I don’t see the website describing any new and different architecture that would entice me to work on this, or pick it for a desktop over a stable Linux or BSD distro.

                            tl;dr: You damn kids and your “retro” stuff! You don’t know how much better you have it now than in the old days. Now turn off that “vaporwave” and get off my lawn!

                            1. 2

                              I am completely willing to put my hands up in the air and say that my impressions of Haiku are just that - my impressions - formed from the collective sum of the few times I’ve seen the project pop up on aggregator sites and an hour and a half playing with an ISO in qemu. That is to say, I am in no way qualified to make reasonable assertions about the Haiku project, either technically or with regards to its community. At the end of the day, Serenity just captured my imagination more than Haiku, and that’s what ultimately matters when it comes to deciding whose codebase to spend your afternoon trawling through.

                        1. 1

                            The trackpad does sound awful. I wonder whether the fixed click locations are a hardware limitation, or a misfeature of the firmware/driver? If the latter, hopefully there are fixes available. I could be tempted to buy one of these as a cheap-and-cheerful Linux box.

                          (I’m a Mac guy since forever, but I’ve used a couple of PC laptops, and on the cheap ones the trackpad has been a total deal-breaker. Seems like a stupid place to skimp on the mfg budget … like selling a compact car with springs poking out of the seat.)

                          1. 4

                            At least on the general software end, consider supporting libinput work: https://bill.harding.blog/2020/05/17/linux-touchpad-preliminary-project-funding-survey-results/

                            (bias: I’m promoting this whenever trackpad linux criticism comes up.)

                            1. 10

                                I’m with Linus. 80-character limits in the 21st century are an anachronism. The legibility studies being mentioned here are about people reading natural-language prose, in long paragraphs that wrap. The shorter line length helps the eye jump accurately from the right margin to the start of the next line.

                              Code isn’t read that way, in any language I know of. It’s short individual lines, with no consistent length, and in most languages they’re indented to varying levels. We scan it in all different ways depending on what we’re doing — I bet someone’s done eye tracking studies, but thinking about myself, a lot of the time I’m skimming the start of each line looking at the statement keywords (if, for, let, return…) and indentation. Or I’m looking at all the instances of one variable name that my editor’s highlighted for me.

                                Consider also that the real meat of the code, the function bodies, is usually indented, which eats away at the line limit. In C++ I’m usually inside a method in a class declaration in a namespace … that’s 12 spaces subtracted from the line width. I go into an if or for, another 4 gone. That’s 20% of an 80-char line lost before I start typing anything! (I know some people use 2-space indents to work around this. I find that’s too narrow for me to ‘read’ indentation accurately.)

                              1. 3

                                  And I get the feeling also that many languages that use a 2-character indent by convention do so in order to keep lines within these limits. I also find 2 spaces borderline too narrow, despite using it in Ruby code daily (where it is so standard that you’re really better off just going with convention for the sake of the community).

                              1. 10

                                NetNewsWire. I installed it quite recently, not having used RSS since Apple dropped support in Safari and Mail.

                                1. 12

                                    I (co)wrote that RSS engine in Safari/Mail, which was going to be a much grander thing indeed, with support for publishing to blogs and over P2P, before schedule constraints and executive whims whittled it down to what it shipped as. (Then I left Apple.)

                                    For a couple of years I used a homemade app built on that framework, but nowadays I use Feedly because it syncs between my devices and has a tolerable UI ¯\_(ツ)_/¯

                                  1. 1

                                    Wow that sounds like it would have been very interesting if it had come completely to fruition.

                                  2. 1

                                    Yes! I exclusively use the iOS version, since I don’t have a Mac, and I’ve been very happy with the app. Performance, design, and platform integration are all really good.

                                  1. 3

                                    I understand the comments about D and Rust. But what does the C preprocessor have to do with all that?

                                    1. 9

                                      By ‘CPP’ they mean “C++”.

                                      1. 8

                                        I get this confused all the time because CPPFLAGS is the preprocessor and CXXFLAGS is C++. You’d be surprised how many programs are out there incorrectly setting their build time env which leads to further confusion.

                                        1. 2

                                          I just learned something today, and confess to doing just that!

                                          1. 1

                                            Spread the word, this is so common!

                                        2. 2

                                          That’s how I read it, too. Probably because I learned C++ on Borland, which used .cpp rather than .cc

                                      1. 5

                                        Obscurity in code and, in particular, hidden control flow as defined here, is a topic I’m currently interested in. Does anyone have any articles in their bookmarks that build compelling arguments for things like getters/setters, @properties, key value observing, and other invisible/counterintuitive side-effect-generating mechanisms?

                                        1. 15

                                          I don’t think there’s a definitive argument either way – like all things in programming, it depends on the problem domain and use case.

                                          For low level code (interfacing with hardware, with the kernel, etc.), I can definitely see why you want to avoid hidden control flow. For application code, you think at a higher level of abstraction, so you may not care about every last resource. It’s a tradeoff.

                                          As a concrete example I was “trained” not to use exceptions in C++ by 2 jobs, but for my shell project I found that they were pretty much essential for recursive evaluators (and I believe faster than explicit error checks in this case). It depends on the problem domain.


                                          Also note that C does have hidden control flow with longjmp (used in almost all shells, Lua, etc.) and “hidden” side effects, e.g. errno which is a global / thread local.

                                          1. 1

                                            Yes, definitely. I was just curious to read accounts of specific cases where they might offer an undeniable benefit over the more explicit alternatives, to tweak my understanding of their raison d’être.

                                          2. 3

                                                What property gives you is pretty simple. There is a need for migratability from field access to more complex getters and setters. So either you never use field access (always define getters and setters and use them), or you have property, or you lose migratability. Assuming losing migratability is not an option, having property lets you avoid writing getters and setters.
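
                                                To make the migratability point concrete, here’s a small Python sketch (illustrative only; the same argument applies to property mechanisms in other languages): callers keep writing what looks like field access, and the class can later grow validation without breaking them.

                                                    # Before: celsius was a plain field, accessed directly by callers.
                                                    # After: it needs validation, so it becomes a property; call sites
                                                    # like t.celsius and t.celsius = x keep working unchanged.
                                                    class Temperature:
                                                        def __init__(self, celsius):
                                                            self.celsius = celsius          # routed through the setter below

                                                        @property
                                                        def celsius(self):
                                                            return self._celsius

                                                        @celsius.setter
                                                        def celsius(self, value):
                                                            if value < -273.15:
                                                                raise ValueError("below absolute zero")
                                                            self._celsius = value

                                                    t = Temperature(20)
                                                    t.celsius = 25                          # still reads like field access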

                                            Personally I think property is a wrong tradeoff, because macros can write getters and setters for you. Java’s Project Lombok is an example of what I consider the right solution here.

                                            1. 1

                                              I think properties were a mistake – they may be an ok hack in languages that have already shipped with incompatible syntax for fields and methods though.

                                                  Instead, by simply allowing the () to be left out for methods without parameters, the implementer gains the ability to switch from fields to methods and back, without breaking calling code at all.

                                              This design makes the whole matter a complete no-problem, without the need to introduce a weird third way of doing things that keeps sprouting new complexity every once in a while (like properties do in C#).

                                            2. 3

                                              They’re layers of abstraction, just like everything else. Function calls are a layer of abstraction over the bare JSR/RET instructions the CPU provides. Structs are an abstraction. So are variable names. A die-hard asm programmer might argue that these are invisible or counterintuitive — “I see ‘x’ here and I don’t know what it is! I have to look up above and it says it’s a Foo, but I don’t know how big a Foo is. It might even be a typedef for a pointer! Feh! Get off my lawn!”

                                                  Every program builds its own abstractions to model the entities and operations it works with. I like these to be expressed in the syntax I write, like the ones hard coded in the language. Otherwise I have to keep repeating their innards over and over, or expressing them in ways that are less clear.

                                            1. 13

                                                    I had a brief flirtation with Zig, but I decided it’s too low-level for me. The lack of a global allocator means every function that might allocate/free memory has to take an extra allocator parameter, or stash an allocator reference in whatever data structure it’s passed as a parameter. This feels like too much work to me — I’m sure in super low level code the benefits are worth it, but even at the medium-low level I work at, memory allocation isn’t that special a thing.

                                              The “every call is explicit” feature doesn’t work for me either. I think this may be a case where there are two types of programmers. Some like everything explicit, some prefer abstraction. I’m in the latter camp. I don’t go as far as building my own DSLs, but I like a layer of custom infrastructure so I can describe my code at a higher level.

                                              Zig is cool and I’m glad it exists :) but I’ve gone with Nim as my new BFF … just don’t tell C++, ‘cause we still have to work together.

                                              1. 5

                                                I think things as low level as zig work great with higher level languages like lua/python/lisp to provide the easy to use parts. The fact that zig doesn’t enforce a memory policy is handy for cases like that.

                                              1. 3

                                                I also find logging in with a strong password on mobile annoying

                                                Maybe it is on Android, but I don’t find it so on iOS. My Keychain is synced (E2E) across all my Apple devices, and Safari auto fills passwords. It also prefills a strong password when registering and saves it to the Keychain.

                                                I’m still waiting for when we can just use client-side certs for auth, but I also understand all the roadblocks. At least with FIDO2 it’s getting closer.

                                                1. 1

                                                  I found it even worse on iOS since it asks my friggin’ iCloud password every damn time.

                                                  An iPhone was also the only Apple device I ever had, so it didn’t really have an option to sync with my Linux machines. There are probably solutions for this, but I don’t use my phone much and never bothered to look at it.

                                                  1. 2

                                                    Huh? It should not ask for your iCloud password to fill a password.

                                                    It should ask you to authenticate to the device - for most people this will mean TouchID or FaceID, with a fallback to the device pin.

                                                    1. 2

                                                      I no longer have my iPhone so I can’t check, but it asked my iCloud password a lot of times. The iPhone SE doesn’t support Face ID and I didn’t use touch ID as that’s not very reliable for me. I live in Indonesia where it’s very hot and humid and I’m very European (I once got a sunburn in Ireland; IRELAND!) and sweat a lot, and the touch ID doesn’t seem to work very well with sweaty fingers; just using a PIN worked much better for me.

                                                      I guess it’s one of those edge-cases where Apple’s “just use [..]” no-configure approach is rather sub-optimal.

                                                1. 4

                                                  Objective-C. It’s frowned upon but it’s doable. (Back in 2010 Chromium on Mac used to have an example of this, as part of its then-very-unusual UI with the tabs sticking into the title bar. This required messing with NSWindow.)

                                                  1. 0

                                                    “I hear your band are selling their turntables and buying guitars. … I hear your band are selling their guitars and buying turntables.” —LCD Soundsystem

                                                    1. 3

                                                        I didn’t read the whole article carefully, but I saw references to sample rates well above 44 kHz, and to upper harmonics well above hearing range. So I wonder if to some degree this technique can be thought of as serial, vs. regular PCM audio being parallel. That is, instead of having more simultaneous bits of amplitude (y axis), you use higher resolution in the x axis to compensate.

                                                      I’ve heard of 1-bit audio amplifiers: the amp is either putting out full power or nothing, but it switches extremely rapidly, much faster than speaker cones can keep up, so the cones end up interpolating the waveform (with very high fidelity, apparently.)
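
                                                        A rough sketch of that intuition, in Python (a textbook first-order delta-sigma modulator; the numbers are made up for illustration): the output is only ever +1 or -1, but at a high enough rate its local average tracks the input, and a low-pass filter (or a speaker cone) recovers the waveform.

                                                            import math

                                                            def delta_sigma(samples):
                                                                """Encode samples in [-1, 1] as a +/-1 bit stream (first-order modulator)."""
                                                                integrator, prev_out, bits = 0.0, 0.0, []
                                                                for x in samples:
                                                                    integrator += x - prev_out            # accumulate the quantization error
                                                                    prev_out = 1.0 if integrator >= 0 else -1.0
                                                                    bits.append(prev_out)
                                                                return bits

                                                            def lowpass(bits, window):
                                                                """Crude moving average, standing in for the reconstruction filter / cone."""
                                                                return [sum(bits[max(0, i - window + 1):i + 1]) / min(i + 1, window)
                                                                        for i in range(len(bits))]

                                                            # A slow sine, heavily oversampled: trading amplitude resolution for time resolution.
                                                            n, window = 8000, 64
                                                            signal = [0.5 * math.sin(2 * math.pi * i / n) for i in range(n)]
                                                            recovered = lowpass(delta_sigma(signal), window)
                                                            # Worst-case deviation stays small relative to the 0.5 amplitude.
                                                            print(max(abs(a - b) for a, b in zip(recovered[window:], signal[window:])))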

                                                      1. 12

                                                        I’m so old, this reminded me of back when $FLASHY_LANGUAGE was Java 🤣

                                                        1. 15

                                                          I worked at Google from 2008-2010, after a 15-year tenure at Apple. The whole experience was weird. What struck me the most about the interview process was that I wasn’t being considered for a particular team, and the people interviewing me were from various groups around the company (and I never met them again.) It wasn’t until after I passed that the recruiter started talking about specific positions.

                                                          The attitude seemed to be that engineers are a generic commodity — as long as they’re smart enough, as established by whiteboarding sort algorithms, the company can grab them up and drop them into any project.

                                                            And while it’s true that smart people are good at learning stuff quickly, that attitude disregards human factors and team dynamics. It’s the only company I’ve ever worked for where I wasn’t interviewed by people whom I’d be working with.

                                                            I don’t know what Google is like now, but back then it felt huge and impersonal, somehow simultaneously chaotic and bureaucratic, and I never felt I fit in. I left for a startup after 21 months and never looked back.

                                                          1. 3

                                                            The rationale behind this was that it was necessary for internal mobility to be high. Now that the company is so much bigger than it was back then, this is a bit less the case & people are allocated to a specific product area in advance, at least.

                                                          1. 8

                                                            I think this looks great.

                                                            Someone on the orange site (which I really don’t know why I keep going to, habit I guess) was upset saying “they’re spread too thin”, which seems entirely wrong. It just means that Microsoft is investing heavily. That is only good. Heck, I’d consider a job if they’d set up an office in Santa Cruz :)

                                                            But I do wish that they’d invest in GitHub Issues and code review. I don’t think anyone can say with a straight face that they like them. There has to be a better way, perhaps a V2 that runs alongside using the same database but allowing a V1 UI for the people that can’t let it go. Gerrit has a better code review system, I don’t know about GitLab. There are certainly examples that can be cribbed from.

                                                            EDIT: I’d also like to see them expand the remit of their CLI project to try and rethink the Git terminal workflow. Again, as above, I don’t think many people like it. It’s grown organically and a new way of interacting with Git knowing what we know now could be huge.

                                                            1. 5

                                                              I really like GitHub Issues. My company forced my team to switch to Jira a few months ago, and it’s miserable. I can’t believe that in 2020 there are still major commercial web-apps as shitty as Jira. They can’t even get their textareas to work right; half the time when I hit Submit I watch it mangle my markup.

                                                              1. 4

                                                                I’ve used GitHub Issues for over 10 years and like it. Of the 10 or so issue tracking tools I’ve used, it’s my favorite.

                                                                1. 3

                                                                  Heck, I’d consider a job if they’d set up an office in Santa Cruz :)

                                                                  Besides the unusual circumstances of everyone working from home, GitHub is very remote-friendly. You should apply! (If you meant Microsoft, I can’t help there…)

                                                                  All I can say about the other things is stay tuned. These were some impactful releases today, but we’re not done yet.

                                                                  1. 2

                                                                    I also like Github issues. I’ve used issue trackers for something like 18 years, literally every day at points, and I can’t think of one that’s clearly better than Github. I’ve been using it regularly for at least 4 years.

                                                                    Like all software, it can be improved more (e.g. latency), but I can find about 100 other pieces of software to complain about first.

                                                                  1. 7

                                                                    I think this is encountering some of the pitfalls of micro-benchmarks.

                                                                    • It’s not clear whether the Go benchmark framework accounted for GC time. It probably wouldn’t unless the benchmark allocated enough to fill the default heap, or the measurement phase ended with an explicit call to run the GC.
                                                                      • IIRC Go’s GC is concurrent, so the Go version is making use of concurrency where the Rust version isn’t. (Which can be seen as a point in favor of Go. But it may be that the Go version uses more CPU time overall, it’s just not being measured.)
                                                                      • This code is kind of a perfect case for a scavenging GC because none of the allocated memory persists: it’s all garbage. Also, none of the heap blocks contain pointers, so there’s no time spent tracing through object graphs. All the GC has to do is preserve the one heap block currently in use.
                                                                    1. 10

                                                                      I’ve always found it harder to read dark-mode text, and I’m surprised other people don’t. Looking at a dark screen causes your pupils to enlarge, and that decreases the “pinhole-camera” effect, meaning any imperfect focusing by the lens is more apparent. So it aggravates my astigmatism, even when I wear glasses.

                                                                      1. 4

                                                                        My theory is that many people have their brightness way too high, so the light background is a massive amount of light in your face. Dark mode is preferable in that case. I don’t have any evidence of this, just observation; even many people who spend their entire days in front of a screen never seem to touch their screen’s settings.

                                                                          It doesn’t help that adjusting the brightness on many modern screens is a bit of a PITA; I miss the analogue pots of old CRT screens.

                                                                        1. 1

                                                                          My theory is that many people have their brightness way too high,

                                                                          I always target 100 cd/m2, yet it is still too bright.

                                                                            Black text on light background is OK. But not on white background; that’s too much!

                                                                          1. 2

                                                                              may I ask how you measure luminosity WRT your monitor? reading the grandparent comment made me turn down my brightness to test it out for a bit, at the least.

                                                                            1. 1

                                                                                I don’t. Fortunately tftcentral.co.uk did the hard work of mapping brightness settings to actual brightness for me in their review.

                                                                              1. 2

                                                                                oh cool! I was able to find measurements for my own monitor by searching the model as well – thanks for the pointer.

                                                                        2. 3

                                                                            On the Amiga, I did prefer the black-on-light-grey OS 2+ color scheme over the white-on-blue OS 1.x color scheme.

                                                                          However, I can’t stand white backgrounds. Sure, the contrast is higher this way, but it is very uncomfortable.

                                                                          Thus I prefer dark mode, as long as I can’t get AmigaOS-style black on grey.

                                                                        1. 4

                                                                          Dark-mode displays emit less light than light-mode ones (and, because of that, they might extend battery life)

                                                                          Not with LCDs: black pixels just block out the backlight, but the backlight still uses the same amount of power. (Some LCD TVs dim the backlight over large dark regions of the screen, but I don’t think monitors do that, and if they did it would ruin the contrast of the text.)

                                                                          OLEDs do use less power for dark pixels, I believe.

                                                                          1. 3

                                                                            On LCDs, white pixels actually use /less/ power because blocking light is what requires energy. That’s tiny compared to the backlight but I measured it (or at least I think I managed to do it) on my laptop (almost 4K screen from years ago).

                                                                            With OLED, there is no backlight and dark pixels are not using power. That also gives better contrast ratios because black is really black. The fact that OLEDs don’t use power for black pixels is also one reason we’re seeing dark themes by large companies, especially on phones.