1. 2

    I remember being excited when nano first came out, because it meant I wouldn’t have to install the whole pine package just to get pico.

    1. 1

      Woah, I had no idea that pico was part of pine, nor that pico is 11 years older than nano. I would have never guessed!

      1. 1

        I read my mail with pine for years in the late 90s and into the early 00s, yet I forgot about it until now.

        Really ugly with no support for threads.

        It may have improved, but I love my mutt too much to consider going back ;)

    1. 2

      pico < nano < micro

      1. 5

        mili < vim < kilo < mega < giga < emacs

        1. 2

          it’s just a static binary

          Erm…

          $ file micro-1.4.1/micro 
          micro-1.4.1/micro: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, Go BuildID=5a83ed8300296d2d29c7c21b668bda0a1db5fa7ba, stripped
          

          (This seems to be a common misunderstanding these days.)

          1. 3

            I think a lot of people use “static binary” to mean “doesn’t link against anything other than libc” now. Not that that’s correct, but it’s certainly how the Go world tends to use it, which is unfortunate.

          2. -1

            Not sure I want an editor to be written in Go, to be honest.

            1. 3

              Why not? As with the parent comment you replied to I would very much like to know why you think so. The reasoning behind a claim is often more interesting than the claim itself.

              1. 1

                Some languages just come with a smell of lower quality. If something is written in a language like JavaScript, PHP or Go, I just immediately assume that the engineering standards are lower than some alternative written in e.g. Rust, F#, Haskell – or even C and C++.

                I guess some languages are just more attractive to the “worse is better” crowd, and I have become wary of the resulting software.

              2. 2

                Let’s be honest here: on a single-user system 35kb of ram vs 1 meg of ram doesn’t change much… But given the choice, one is better.

                1. 1

                  Any specific reason why?

              1. 5

                This is an interesting perspective. Especially the part where some people think writing some tests slows things down.

                Imo tests are a safety harness: they don’t always work, but they allow you to move faster without the PTSD that the author describes in their post. Tests are not 100% bulletproof, but they aren’t exactly useless either.

                1. 5

                  I once stood in astonishment when I asked an engineer that moved within Google to Android how he liked his new team: “It’s awesome! We don’t have to write our own tests [like engineers on regular Google products do], so you write code so much faster.” When I immediately told him about my bootlooping Nexus 5x, his only response was a concerned “oh, it shouldn’t do that.”

                  1. 3

                    Wasn’t the 5x a hardware failure though?

                    1. 1

                      Most of the issues around the 5X were due to issues with heat dissipation, yup. Overheating would cause things to fail pretty badly; most of the time it meant the phone was CPU-throttled to keep heat down, but sometimes it would heat up so quickly on boot it would trigger a watchdog which would power down the phone.

                    2. 3

                      I used to work on Android, and I definitely had to write unit tests for some of the stuff I wrote. The tests we didn’t write were integration and UI tests; instead, we would write specs for QA people to run through on a daily or weekly basis. I would agree with your friend, though: not writing UI tests but still getting the benefits of having the UI tested was 100% a delight.

                  1. 9

                    IRC and I are the same age. It’s been 16 years of using IRC for me by now, and I’m still to see any real alternatives really take off. XMPP sadly died. Matrix is promising, but most people seem to still use it as an IRC bridge.

                    1. 4

                      Matrix makes quite a fine IRC bridge though. Better mobile support, lets you see a list of when you were pinged, and image hosting. These days it has almost every feature I need to switch from telegram but the client is still too awkward to use.

                      1. 2

                        and I’m still to see any real alternatives really take off

                        Slack. ;)

                        1. 3

                          The largest slack server I’m on has 70 people. That’s 1/4 of the number of nicks in #lobsters, half of whom are regular participants. Our channel is only the ~150th largest channel on Freenode. There are some significantly larger channels.

                          I don’t know what the largest Slack channel is (there surely must be some much larger than the largest one I’m on), but I don’t really see Slack going after that kind of audience. Slack feels to me like a meeting or conference room, whereas IRC feels like an auditorium or a stadium. It has tooling and social conventions to accommodate large, public audiences. I haven’t seen that replicated on other chat platforms.

                          Slack has been undeniably successful and has taken users from IRC in being so. I think it accomplished this through market segmentation, though, and isn’t trying to solve some of the scale problems IRC has solved.

                          1. 5

                            When Slack kicked Reactiflux off the platform for having too many members, they had 7,500 members. Currently, Reactiflux on Discord has 35,000 members. At least one estimate puts freenode at ~88,000 users.

                            1. 5

                              There are some enormous Discord “servers” (which is a total misnomer – they aren’t dedicated servers afaik, but it’s a word that resonates with gamers); maybe Discord would be a better spiritual successor from a scale perspective. I’m not sure what the biggest Discord is, but the biggest streamer I could think of (Ninja) has 40K people in his Discord, 8K of which are signed in right now (on a weekday during a workday/schoolday). These big-name streamers have big fan communities that use Discord a lot like I’ve always used IRC: partially for asking for help, but mostly for dumb jokes :)

                              1. 3

                                I was just addressing the “alternatives take off” part. I agree they might be targeting a different segment. I also think they did a better job focusing on UX. The next alternative that addresses the segment you’re describing should similarly focus on good UX. Maybe charge for hosted versions or something to pay for developers to keep it a polished product, too. Users hate buggy software when their prior software worked well. They’ll switch back if they can.

                          1. 12

                            this is really clever, and definitely something that I could see Yubikey selling directly: one black, one red - black for daily use, red for recovery, with “order another pair” stamped right on the red one.

                            1. 2

                              Agreed, this is absolutely brilliant. Well done diamonomid!

                              Anyone know of a place in the US where I can buy a pair of u2f-zeroes? The website links to amazon but they’re sold out.

                              1. 1

                                Not in the US, but if you have a European delivery address, I still have stock in my (official) European distribution https://u2fzero.ch

                                Conor is hard at work restocking Amazon :)

                            1. 4

                              I’m gonna sit in the smug corner for people running AMD.

                              Otherwise, this is all kinds of “very very bad”. The kind where, in a cartoon, you’d have sirens spin up to warn of incoming air raids.

                              1. 3

                                Because SEV has been that much better?

                                1. 2

                                  The most secure system is the system no one uses :)

                              1. 7

                                I’ve wanted to do this for forever! I actually bought one of the Waveshare 7.5” displays with the intention of hooking it up to a TTY. I wanted to get to the point where I could have a small laptop (probably powered by a raspberry pi) that I could use at the park to do programming and emails and such.

                                The challenge with the 7.5” displays is that you have to build your own interface board; unfortunately, it seems like messing with e-ink displays involves a lot of yak shaving right now.

                                1. 2

                                  Paperlike is an actual e-ink monitor, but it is pretty pricey. It’s also HDMI only (no DP).

                                1. 0

                                  This is ill-advised.

                                  You cannot define 1/0 and still have a field. There’s no value that works. Even when you do things like the extended real numbers where x/0 = infinity, you’re really just doing a kind of shorthand and you acknowledge that the result isn’t a field.

                                  You can of course define any other algebraic structure you want and then say that operating on the expression 1/0 is all invalid because you didn’t define anything else and no other theorem applies, but this is not very helpful. You can make bad definitions that don’t generalise, sure, definitions that aren’t fields. But to paraphrase a famous mathematician, the difficulty lies not in the proofs but in knowing what to prove. The statement “1/0 = 0 and nothing else can be deduced from this” isn’t very interesting.

                                  1. 1

                                    Could you explain why, formally, defining 1/0=0 means you no longer have a field?

                                    1. 7

                                      I want to make an attempt to clarify the discussion here because I think there is some substance I found interesting. I don’t have a strong opinion about this.

                                        The article actually defines an algebraic structure with three operators: (S, +, *, /) with some axioms. It happens that these axioms make it so that (S, +, *) is a field (just like how the definition of a field makes (S, +) a group).

                                      The article is right in saying that these axioms do not lead to a contradiction. And there are many non-trivial such structures.

                                        However, the (potential) issue is that we don’t know nearly as much about these structures as we do about fields, because any theorem about fields only applies to (S, +, *) instead of (S, +, *, /). So all the work would need to be redone. It could be said that the purpose of choosing a field in the first place is to benefit from existing knowledge and familiar expectations (which are no longer guaranteed).

                                      I guess formally adding an operator means you should call it something else? (Just like how we don’t call fields a group even though it could be seen as a group with an added * operator.)

                                      This has no bearing on the 1/0 = 0 question however, which still works from what’s discussed in the article.
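
                                        For concreteness, one way to write down the added division axiom as I read the article (a LaTeX sketch, assuming amsmath; my paraphrase, not a quote from it):

                                        \[
                                          a / b =
                                          \begin{cases}
                                            a \cdot b^{-1} & \text{if } b \neq 0,\\
                                            0              & \text{if } b = 0.
                                          \end{cases}
                                        \]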

                                      1. 1

                                        As I understand it, you’ve only defined the expression 1/0 but you are saying that /0 isn’t shorthand for the multiplicative inverse of 0 as is normally the case for /x being x^-1, by definition. Instead, /0 is some other kind of magical non-invertible operation that maps 1 into 0 (and who knows what /0 maps everything else into). Kind of curious what it has to do with 0 at all.

                                        So I guess you can do this, but then you haven’t defined division by zero at all, you’ve just added some notation that looks like division by zero but instead just defined some arbitrary function for some elements of your field.

                                        If you do mean that /0 is division by zero, then 1/0 has to be, by definition, shorthand for 1*0^-1 and the arguments that you’ve already dismissed apply.

                                        1. 4

                                          The definition of a field makes no statements about the multiplicative inverse of the additive identity (https://en.wikipedia.org/wiki/Field_(math)#Classic_definition). Defining it in a sound way does not invalidate any of the axioms required by the field, and, in fact, does define division by zero (tautologically). You end up with a field and some other stuff, which is still a field, in the same way that adding a multiplication operator on a group with the appropriate properties leaves you with a group and some other stuff.

                                          The definition of the notation “a / b => a * b^-1” assumes that b is not zero. Thus, you may define the case when b is 0 to mean whatever you want.

                                          That people want to hold on to some algebraic “identities” like multiplying by the denominator cancels it doesn’t change this. For that to work, you need the assumption that the denominator is not zero to begin with.
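
                                            For reference, the classic field axioms for (S, +, *); note that none of them mentions 0^-1 or constrains the value of x/0 (a LaTeX sketch):

                                            \begin{itemize}
                                              \item $(S, +)$ is an abelian group with identity $0$
                                              \item $(S \setminus \{0\}, \cdot)$ is an abelian group with identity $1$
                                              \item multiplication distributes over addition: $a \cdot (b + c) = a \cdot b + a \cdot c$
                                            \end{itemize}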

                                          1. 1

                                              In what way is whatever it is you defined /0 to be considered a “division”? What is division? Kindly define it.

                                            1. 3

                                              Division, a / b, is equal to a * b^-1 when b is not zero.

                                              1. 2

                                                And when b is zero, what is division? That’s the whole point of this argument. What properties does an operation need to have in order to be worthy of being called a division?

                                                1. 3

                                                  Indeed, it is the whole point. For a field, it doesn’t have to say anything about when you divide by zero. It is undefined. That doesn’t mean that you can’t work with and define a different, but still consistent, structure where it is defined. In fact, you can add the definition such that you still have the same field, and more.

                                                  edit: Note that this doesn’t mean that you’re defining a multiplicative inverse of zero. That can’t exist and still be a field.

                                                  1. 1

                                                    In what way is it consistent? Consistent with what? As I understand it, you’re still saying that the expression 1/0 is an exception to every other theorem. What use is that? You still have to write a bunch of preconditions, even in Coq, saying how the denominator isn’t zero. What’s the point of such a definition?

                                                    It seems to me that all of this nonsense is about not wanting to get an exception when you encounter division by zero, but you’re just delaying the problem by having to get an exception whenever you try to reason with the expression 1/0.

                                                    1. 3

                                                      I mean that the resulting structure is consistent with the field axioms. The conditions on dividing by zero never go away, correct. And yes, this is all about avoiding exceptions in the stack unwinding, programming language sense. The article is a response to the statements that defining division by zero in this way causes the structure to not be a field, or that it makes no mathematical sense. I am also just trying to respond to your statements that you can’t define it and maintain a field.

                                                      1. 1

                                                        It really doesn’t make mathematical sense. You’re just giving the /0 expression some arbitrary value so that your computer doesn’t raise an exception, but what you’re defining there isn’t division except notationally. It doesn’t behave like a division at all. Make your computer do whatever you want, but it’s not division.

                                                        1. 5

                                                          Mathematical sense depends on the set of axioms you choose. If a set of axioms is consistent, then it makes mathematical sense. You can disagree with the choices as much as you would like, but that has no bearing on the meaning. Do you have a proof that the resulting system is inconsistent, or even weaker, not a field?

                                                          1. 1

                                                              I don’t even know what the resulting system is. Is it, shall we say, the field axioms? In short, a set on which two abelian operations are defined, with a distinct identity for each abelian operation, such that one operation distributes over the other? And you define an additional operation, based on the distributing operation, that maps each element to its inverse, except for the identity, which is instead mapped to the identity of the distributed-over operation?

                                                            1. 2

                                                              It’s a field where the definition of division is augmented to include a definition when the divisor is zero. It adds no new elements, and all of the same theorems apply.

                                                              1. 1

                                                                I’m bailing out, this isn’t a productive conversation for either of us. Sorry.

                                                                1. 1

                                                                  You are correct. The field axioms are all still true, even if we extend / to be defined on 0.

                                                                  The reason for this is that the axioms never “look at” any of the values x/0. They never speak of them. So they all hold regardless of what x/0 is.

                                                                    That said, even though you can define x/0 without violating the axioms, that doesn’t mean you should. In fact, it seems like a very bad idea to me.
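
                                                                    As an aside, some proof assistant libraries adopt exactly this convention. A minimal sketch in Lean 4 with Mathlib (my example, assuming div_zero is still the relevant lemma name; it may vary by version):

                                                                    import Mathlib.Tactic
                                                                    -- Mathlib defines x / 0 = 0 for ℚ (and other fields); ℚ is of course still a field.
                                                                    example : (1 : ℚ) / 0 = 0 := div_zero 1   -- or: by simp
                                                                    example : Field ℚ := inferInstance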

                                            2. 1

                                              That doesn’t make it not a field; you don’t have to have a division operator at all to be a field, let alone a division operator that is defined to be multiplication by the multiplicative inverse.

                                              1. 1

                                                What is division?

                                                1. 1

                                                  zeebo gave the same answer I would give: a / b is a multiplied by the multiplicative inverse of b when b is not zero. This article is all about how a / 0 is not defined and so, from an engineering perspective, you can define it to be whatever you want without losing the property that your number representation forms a field. You claimed that defining a / 0 = 1 means that your numbers aren’t a field, and all I’m saying is that the definition of the division operator is 100% completely orthogonal to whether or not your numbers form a field, because the definition of a field has nothing to say about division.

                                                  1. 1

                                                    What is an engineering perspective?

                                                    Also, this whole “a field definition doesn’t talk about division” is a bit of a misunderstanding of mathematical idioms. The field definition does talk about division since “division” is just shorthand for “multiplicative inverse”. The reason the definition is written the way it is (excluding 0 from having a multiplicative inverse) is that giving zero a multiplicative inverse results in contradictions. When you say “ha! I won’t let that stop me! I’m going to define it anyway!” well, okay, but then either (1) you’re not defining a multiplicative inverse i.e. you’re not defining division or (2) you are defining a multiplicative inverse and you’re creating a contradiction.

                                                    1. 1

                                                      (I had a whole comment here, but zeebo is expressing themselves better than I am, and there’s no point in litigating this twice, especially when I feel like I’m just quoting TFA)

                                                      1. 1

                                                        Me too, I’m tapping out.

                                        1. 4

                                          Unrelated: I read an article recently on Haskell programming that asserted one should never write a parser, and to always use a parser combinator library. Yet on the other end of the spectrum, I see a lot of people claiming you should never use a parser generator as they universally produce awful error messages, and to always write your own. Is this actually a contradiction? Are error messages from parser combinator libraries as bad as the ones from yacc? I’ve never used a parser combinator library as I’ve never needed to do any parsing in Haskell.

                                          Ontopic:

                                          The article here says:

                                            if the BNF production A recognizes a language, and B recognizes a different language, then the production A | B will recognize the union of the two languages recognized by A and B. As a result, swapping the order of an alternative from A | B to B | A will not change the language recognized.

                                            However, most implementations of parser combinators do not satisfy these kinds of properties! Swapping the order of the alternatives usually can change the parse tree returned from a parser built from parser combinators.

                                          Is it not the case that the property that swapping ‘A | B’ and ‘B | A’ will not change the language recognised and the property that swapping ‘A | B’ and ‘B | A’ will not change the parse tree of parsing a particular string are quite different things? Grammars formally are not instructions for building parse trees, they’re string predicates.

                                          1. 2

                                            It is desirable that the combinator A | B recognises the same language as B | A as it is more declarative. Otherwise this can introduce hard to detect problems. For an implementation of parser combinators this is difficult to guarantee if efficiency is a concern. Yacc has a global view of a grammar and therefore it is easier to guarantee in a Yacc-generated parser than a combinator-based parser because each combinator has typically only a local perspective.

                                            1. 3

                                                The standard Haskell parser combinator library parsec does not have commutative disjunction; only ReadP managed that. Second, the PEG system is biased towards A by choice - and for a reason: this reduces the ambiguity in languages and makes parsing certain aspects of programming languages easier.
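
                                                To make the order sensitivity concrete, here is the classic parsec gotcha (a sketch of my own, not from the paper): <|> commits to the first alternative once it has consumed input, so swapping the branches changes which strings parse.

                                                import Text.Parsec
                                                import Text.Parsec.String (Parser)

                                                ab1, ab2 :: Parser String
                                                ab1 = string "a" <|> string "ab"   -- on "ab": succeeds with "a", leaving the 'b' unread
                                                ab2 = string "ab" <|> string "a"   -- on "a": fails, because string "ab" already consumed the 'a'

                                                main :: IO ()
                                                main = do
                                                  print (parse ab1 "" "ab")   -- Right "a"
                                                  print (parse ab2 "" "a")    -- Left (parse error)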

                                              1. 1

                                                  Second, the PEG system is biased towards A by choice - and for a reason: this reduces the ambiguity in languages and makes parsing certain aspects of programming languages easier.

                                                I understand that pattern matching also has a first-match policy and I don’t complain about this. I am still not convinced it is the right choice for parsing a language that is typically much larger than a runtime value deconstruction. In Parsing: a timeline, Jeffrey Kegler writes about PEG:

                                                But PEG is, in fact, pseudo-declarative – it uses the BNF notation, but it does not parse the BNF grammar described by the notation: PEG achieves unambiguity by finding only a subset of the parses of its BNF grammar. And, as with its predecessor GTDPL, in practice it is usually impossible to determine what the subset is. This means that the best a programmer usually can do is to create a test suite and fiddle with the PEG description until it passes. Problems not covered by the test suite will be encountered at runtime.

                                                Yacc has admittedly its own problems with shift-reduce conflicts.

                                            2. 2

                                              It’s definitely hard to produce good error messages from a parser generator. Especially from a parser combinator library because it’s built up dynamically and there’s no preprocessing stage.

                                                The system in this paper does have a first-class ADT representation of grammars though, which is translated into executable code via staging (similar to my PEG library).

                                              1. 1

                                                I believe your point about swapping A | B for B | A is exactly the problem the authors describe with parser combinators; for most parser generators, the resulting code should behave exactly the same. For any tool, the problem only arises if there’s ambiguity; if all strings are recognized by at most one of A and B, then clearly A | B and B | A will behave the same. Parser combinators have a tough time detecting ambiguity, and so will often just take the first one that matches, so in the case where there’s a set of strings recognized by both A and B, A | B and B | A will be treated differently by most parser combinator libraries. Tools like yacc and bison tend to not tolerate ambiguity, which means for each string, only one of A or B will recognize it, so A | B and B | A will behave the same way.

                                              1. 4

                                                -WliterallyAll would be very appreciated.

                                                1. 2

                                                  The historical reason why -Wall doesn’t enable all warnings is that warnings have been gradually added to the compiler over time. Adding new warnings to existing options could cause builds to fail after upgrading gcc.

                                                  Moreover, some pairs of warnings are incompatible (in the sense that any code accepted by one would be rejected by the other). An example of this is -Wstrict-prototypes and -Wtraditional.

                                                  1. 5

                                                    The historical reason why -Wall doesn’t enable all warnings is that warnings have been gradually added to the compiler over time. Adding new warnings to existing options could cause builds to fail after upgrading gcc.

                                                    I’m aware of that, though I still find it wrong that -Wall doesn’t actually include the new warnings; a build breaking on upgrade with -Wall is, in my opinion, the more logical outcome. I would rather have flags like -Wall4.9 that would remain constant on upgrades so no one who’s just using that subset of the warnings breaks their build. -Wall can then remain true to its meaning. Seeing that the ship has sailed on that a long time ago, I would still like to have a -WliterallyAll (it can be called something else) that would include -Wall, -Wextra and others like -Wstrict-overflow.

                                                    Moreover, some pairs of warnings are incompatible (in the sense that any code accepted by one would be rejected by the other). An example of this is -Wstrict-prototypes and -Wtraditional.

                                                    These ones can’t be and don’t have to be included.

                                                    1. 5

                                                      I really like the idea of -Wall-from=$VERSION, and you could even support -Wall-from=latest for people who truly are okay with their builds breaking whenever they upgrade their compiler.

                                                      1. 2

                                                        clang supports -Weverything which I’ve tried, and it happily spews out contradictory warnings (“Padding bytes added to this structure”, “No padding bytes have been added to this packed structure!”) along with (in my opinion) useless warnings (“converting char to int without cast”).

                                                        1. 1

                                                          Yep, -Weverything can be amusing, but it really does throw everything at the code.

                                                        2. 1

                                                          These ones can’t be and don’t have to be included.

                                                          So your -WliterallyAll would not enable literally all warnings either? I’m not sure how that solves the problem.

                                                          I would rather have flags like -Wall4.9 that would remain constant on upgrades so no one who’s just using that subset of the warnings breaks their build. -Wall can then remain true to its meaning.

                                                          Now this is a neat idea that I can get behind.

                                                        3. 1

                                                          Correction: I have been informed that new warnings actually have been added to -Wall on multiple occasions.

                                                          The better explanation for why -Wall leaves many warnings disabled is that many of them are just not useful most of the time. The manual states:

                                                          Note that some warning flags are not implied by -Wall. Some of them warn about constructions that users generally do not consider questionable, but which occasionally you might wish to check for; others warn about constructions that are necessary or hard to avoid in some cases, and there is no simple way to modify the code to suppress the warning.

                                                          In other words, it might be better to think of -Wall not as “all warnings”, but as “all generally useful warnings”.

                                                        4. 1

                                                      Except the -Weffc++ warnings; those are really annoying and are not really about actual problems in your code.

                                                        1. 20

                                                          Kinesis Advantage. I’ve been using them for almost twenty years, and other than some basic remapping, I don’t customize.

                                                          1. 2

                                                            Ditto, I’m at a solid decade. I cannot recommend them enough.

                                                            1. 2

                                                              Also Kinesis Advantage for over a decade. On the hardware side I’ve only mapped ESC to where Caps Lock would be. On the OS side I’ve got a customized version of US Dvorak with scandinavian alphabet.

                                                              I’d like to try a maltron 3d keyboard with integrated trackball mouse. It’s got better function keys too, and a numpad in the middle where there’s nothing except leds on the kinesis.

                                                              1. 2

                                                                Me too. I remap a few keys like the largely useless caps-lock and otherwise I don’t program it at all. It made my wrist pain disappear within a couple weeks of usage though.

                                                                1. 2

                                                                  My only “problem” with the Kinesis, and it’s not even my problem, was that the office complained about the volume of the clicks while I was on a call taking notes.

                                                                  So I switch between the Kinesis and an Apple or Logitech BT keyboard for those occasions.

                                                                  1. 1

                                                                    You can turn the clicks off! I think the combo is Prgm-\

                                                                    1. 2

                                                                      Yeah, it’s not that click, it’s the other one from the switches :-)

                                                                      I can be a heavy typer, and for whatever reason these keys stand out more than I expected to others on the other end of the microphone.

                                                                  2. 2

                                                                    I prefer the kinesis freestyle2. I like the ability to move the two halves farther apart (broad shoulders) and the tilt has done wonders for my RSI issues.

                                                                    1. 2

                                                                      similar, largely I like that I can put the magic trackpad in between the two halves and have something that feels comparable to using the laptop keyboard. I got rid of my mouse years ago but I’m fairly biased on a trackpad’s potential.

                                                                      I’ve sometimes thought about buying a microsoft folding keyboard and cutting/rewiring it to serve as a portable setup. Have also thought of making a modified version of the nyquist keyboard to be a bit less ‘minimal’ - https://twitter.com/vivekgani/status/939823701804982273

                                                                  1. 14

                                                                    Microsoft lets you download a Windows 10 ISO for free now; I downloaded one yesterday to set up a test environment for something I’m working on. With WSL and articles like this, I thought maybe I could actually consider Windows as an alternative work environment (I’ve been 100% some sort of *nix for decades).

                                                                    Nope. Dear lord, the amount of crapware and shovelware. Why the hell does a fresh install of an operating system have Skype, Candy Crush, OneDrive, ads in the launcher and an annoying voice assistant who just starts talking out of nowhere?

                                                                    1. 5

                                                                      I’ll give you ads in the launcher – that sucks a big one – but Skype and OneDrive don’t seem like crapware. Mac OS comes with Messages, FaceTime and iCloud; it just so happens that Apple’s implementations of messaging and syncing are better than Microsoft’s. Bundling a messaging program and a file syncing program seems helpful to me, and Skype is (on paper) better than what Apple bundles because you can download it for any platform. It’s a shame that Skype in particular is such an unpleasant application to use.

                                                                      1. 3

                                                                        It’s not even that they’re useful, it’s that they’re not optional. I’m bothered by the preinstalled stuff on Macs too, and the fact that you have to link your online accounts deeply into the OS.

                                                                        I basically am a “window manager and something to intelligently open files by type kinda guy.” Anything more than that I’m not gonna use and thus it bothers me. I’m a minimalist.

                                                                        1. 2

                                                                          I am too, and I uninstall all that stuff immediately; Windows makes it very easy to remove it. “Add or Remove Programs” lets you remove Skype and OneDrive with one click each.

                                                                      2. 2

                                                                        Free?? I guess you can download an ISO but a license for Windows 10 Home edition is $99. The better editions are even more. WSL also doesn’t work on Home either. I think you need Professional or a higher edition.

                                                                        1. 2

                                                                          It works on Home.

                                                                          1. 1

                                                                            Yup. Works great on Home according to this minus Docker which you need Hyper-V support for.

                                                                            https://www.reddit.com/r/bashonubuntuonwindows/comments/7ehjyj/is_wsl_supported_on_windows_10_home/

                                                                        2. 1

                                                                          I always forget about this until I have to rebuild Windows and then I have to go find my scripts to uncrap Windows 10. Now I don’t do anything that could break Windows because I know my scripts are out of date.

                                                                          It’s better since I’ve removed all the garbage, but holy cats that experience is awful.

                                                                        1. 2

                                                                          I ran a separate /usr partition for a while. It’s an utter pain in the arse and I stopped doing it rather quickly. Tbh I only did it because I thought it would enable me to manage my diskspace better. It was also more complicated rather than easier.

                                                                          /home, /pictures and / are the only partitions you need.

                                                                          1. 1

                                                                            /, /home, /etc, /var, /tmp for me (on zfs).

                                                                            /tmp, /var as memory disks.

                                                                            /etc as ro, with possibility for rw separate from /.

                                                                            / as ro.

                                                                            1. 3

                                                                              Why keep /var in memory instead of on-disk? Seems like you want things like webroots and log files to persist between boots.

                                                                              1. 1

                                                                                I usually install webroots under datarootdir. Most of the time I don’t mind purging logfiles at reboot, though memory disks can be backed by a file if wanted/needed, see mdconfig(8). If I want my logs to be persistent, I usually log to a remote machine (with rw /var/log).

                                                                                I like my systems to be as immutable as possible. I still remount when updating &c but the default is ro.

                                                                              2. 2

                                                                                I need /var to persist in case I need any logfiles from previous boots, plus it can hang systemd during shutdown because it wants a place to log to. /etc is secured via some etc-git manager that pushes to my git server. Memory disks don’t count as diskspace or partitions to me.

                                                                                At least, that’s my opinion.

                                                                            1. 5

                                                                              For the ‘transfer repo’ use case, you can just tgz the repo (which will include the .git dir and thus the whole history).

                                                                              I didn’t know you could use a bundle to package changesets though - that’s really useful.

                                                                              1. 1

                                                                                There used to be a bunch of explosions if you did that to a repository with submodules, iirc; have those been fixed?

                                                                                1. 2

                                                                                  Sorry, I don’t know. Can you recall anything more about the problem case?

                                                                                  Tarring up a dir in one place and untarring it somewhere else is functionally just a copy (or move if you don’t use the old one) i.e. it’s the same as ‘mv repo new_repo’.

                                                                                  I can’t see how that can break unless it is related to the per-user settings? (e.g. someone using ssh based remotes and the new user not having ssh creds).

                                                                                  So yes - I can see how copying the repo as I described doesn’t remove any remotes (including submodules), which could cause some confusion.

                                                                                  1. 2

                                                                                    Should have done some research on my own before I asked, sorry! Looks like the last time this bit me I was really unlucky; it’s a problem that only affected two minor versions of git.

                                                                              1. 1
                                                                                function f() { return 42; }
                                                                                function g() { return 42 + (Math.random() < 0.001); }
                                                                                

                                                                                any testing-based approach will very likely report that f and g are equivalent

                                                                                1. 3

                                                                                  With a code coverage tool that measures branch coverage you could know that your test didn’t exercise all branches of this code.

                                                                                  So to answer the OP: if you can find a code coverage tool with which you can measure a sufficient coverage metric (e.g. multiple condition decision coverage), you can create a test set that covers the code you intend to replace, in the sure knowledge all cases are covered. Then assert your replacement passes the same test set. It’s not a proof, but perhaps it’s enough for your needs.

                                                                                  1. 2

                                                                                    And if that doesn’t fool it, this almost certainly will:

                                                                                    function f() { return 42; }
                                                                                    function g() { return Math.random() < 0.001 ? 1000 : 42; }
                                                                                    
                                                                                  1. 2

                                                                                    One question I had about Zig that I can’t seem to find an answer to in an admittedly cursory look is what does it have for multithreading / parallel processing? I won’t look at a new language that doesn’t have thread support builtin.

                                                                                    1. 5

                                                                                      This is in progress; see https://github.com/ziglang/zig/issues/174 for an overview and links to relevant issues.

                                                                                      Coroutines have already landed on master: https://github.com/ziglang/zig/issues/727

                                                                                      1. 2

                                                                                        I won’t look at a new language that doesn’t have thread support builtin.

                                                                                        Funnily enough, I’d say that I wouldn’t get excited at a new language that does have thread support builtin.

                                                                                        My reasoning is that the operating system should be enough for scheduling.

                                                                                        Now, I know I’m probably biased by my work on Jehanne’s kernel and the Plan 9 style, so I’m sincerely curious about your opinion.

                                                                                        Why do you want more schedulers to integrate instead of using just the OS one?

                                                                                        1. 1

                                                                                          Threads are OS-scheduled too; they’re a kernel-provided parallelism API on both POSIXey systems and Windows. Maybe you’re thinking of green threads or threadlets or whatever?

                                                                                          1. 1

                                                                                            Maybe you’re thinking of green threads…

                                                                                            I thought @jdarnold was talking about green threads, coroutines, and other similar techniques that are usually provided by language specific virtual machines.

                                                                                            Pthreads are not language specific: they are a C api that any language could wrap, but not something that requires particular support from the language.

                                                                                          2. 1

                                                                                            Because threads are necessary for modern programming. If you want to take advantage of processor and OS level threading, how can you do it if the language doesn’t have some way of taking advantage of it? I’ve spent far too much time trying to figure out all the various ugly threading problems in other languages and I think the language should “just do it” for me.

                                                                                        1. 3

                                                                                          The things I use that haven’t been mentioned elsewhere:

                                                                                          • klaus for git http frontend (it’s the simplest one I could find, and I think it looks nice and tidy)
                                                                                          • umurmur for voice chat with friends (it’s a lighter-weight reimplementation of murmur, which is the server for the mumble client)
                                                                                          1. 4

                                                                                            It’s only a single SELECT statement if you don’t count the SELECT statements used in the CTEs (common table expressions) and the SELECT statements used for subqueries in the WHERE clause. It’s a cool implementation despite the misleading title.

                                                                                            1. 11

                                                                                              Eh; it is, to the letter of the law, a single SQL statement. I think it’s impressive, and have no qualms calling it a single statement.

                                                                                            1. 2

                                                                                              There was some super interesting related work to this at SIGGRAPH a couple years ago as well:

                                                                                              http://web.engr.oregonstate.edu/~mjb/cs550/Projects/Papers/CSemanticShapeEditing.pdf

                                                                                              It seems like the general state of semantic editing is that it works in some very specific conditions but when those conditions are met it works unbelievably well.

                                                                                              1. 2

                                                                                                I don’t understand, isn’t this just modifying your own hardware? Why is this treated like some great tragedy?

                                                                                                1. 1

                                                                                                  Because this lets you exploit anything using NVIDIA’s Tegra and that includes things like Tesla vehicles.

                                                                                                    It’s super cool that you can mod your Switch/Tesla now… but also super not cool that you can’t prevent someone else from modding it for you.

                                                                                                  1. 2

                                                                                                    Wait, so it’s a remote sploit? Or you mean if you give your Tesla to your mechanic they can mod it? Or something else?

                                                                                                  2. 1

                                                                                                    Presumably because it makes for better headlines? Local code execution on a game console seems interesting only if you figure out something better to do with your game console than playing games.