1. 2

    Onboarding!

    1. 10

      Are they finally going to fix the abomination that is C11 atomics? As far as I can tell, WG14 copied atomics from WG21 without understanding them and ended up with a mess that causes problems for both C and C++.

      In C++11 atomics, std::atomic<T> is a new, distinct type. An implementation is required to provide a hardware-enforced (or, in the worst case, OS-enforced) atomic boolean. If the hardware supports a richer set of atomics, then it can be used directly, but a std::atomic<T> implementation can always fall back to using std::atomic_flag to implement a spinlock that guards access to larger types. This means that std::atomic<T> can be defined for all types and be reasonably efficient (if you have a futex-like primitive then, in the uncontended case it’s almost as fast as T and in the contended state it doesn’t consume much CPU time or power spinning).
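
      A minimal sketch of that fallback, in C11 terms since stdatomic.h mirrors the C++ design (illustrative only, not how any particular implementation does it): atomic_flag, the one primitive guaranteed lock-free, used as a spinlock guarding a type too large for hardware atomics.

      ```c
      #include <assert.h>
      #include <stdatomic.h>

      typedef struct { long a, b, c, d; } big;  /* too large for a hardware CAS */

      /* The one guaranteed-lock-free primitive, used as a spinlock. */
      static atomic_flag guard = ATOMIC_FLAG_INIT;
      static big shared;

      static void spin_store(const big *v) {
          /* test-and-set returns the old value: spin until we set it first */
          while (atomic_flag_test_and_set_explicit(&guard, memory_order_acquire))
              ;
          shared = *v;
          atomic_flag_clear_explicit(&guard, memory_order_release);
      }

      static big spin_load(void) {
          big out;
          while (atomic_flag_test_and_set_explicit(&guard, memory_order_acquire))
              ;
          out = shared;
          atomic_flag_clear_explicit(&guard, memory_order_release);
          return out;
      }

      int main(void) {
          big v = {1, 2, 3, 4};
          spin_store(&v);
          big r = spin_load();
          assert(r.a == 1 && r.d == 4);
          return 0;
      }
      ```

      The futex-like refinement mentioned above would replace the bare spin with a sleep in the contended path; the uncontended path is a single test-and-set either way.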

      Then WG14 came along and wanted to define _Atomic(T) to be compatible with std::atomic<T>. That would require the C compiler and C++ standard library to agree on data layout and locking policy for things larger than the hardware-supported atomic size, but it’s still feasible. Then they completely screwed up by making all of the arguments to the functions declared in stdatomic.h take a volatile T* instead of an _Atomic(T)*. For historical reasons, the representation of volatile T and T have to be the same, which means that _Atomic(T) and T must have the same representation and there is nowhere that you can stash a lock. The desire to make _Atomic(T) and std::atomic<T> interchangeable means that C++ implementers are stuck with this.

      Large atomics are now implemented by calls to a library, but there is no way to implement this in a way that is both fast and correct, so everyone picks fast. The atomics library provides a pool of locks and acquires one keyed on the address. That’s fine, except that most modern operating systems allow virtual addresses to be aliased, and so there are situations (particularly multi-process ones, but also when you have a GC or similar doing exciting virtual memory tricks) where simple operations on _Atomic(T) are not atomic. Fixing that would require asking the OS whether a particular page is aliased before performing an operation (and preventing it from becoming aliased during the operation), at which point you may as well just move atomic operations into the kernel anyway, because you’re paying for a system call on each one.
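
      A hypothetical sketch of that lock-pool scheme (names and pool size are illustrative, not from any real libatomic): the runtime hashes the object’s virtual address into a fixed array of locks, which is exactly why two aliased mappings of the same physical memory end up guarded by different locks.

      ```c
      #include <assert.h>
      #include <stdint.h>
      #include <string.h>
      #include <threads.h>

      #define NLOCKS 64
      static mtx_t pool[NLOCKS];

      static mtx_t *lock_for(const void *addr) {
          /* Keyed on the *virtual* address only: the runtime never asks the
           * OS whether this page is mapped elsewhere, so aliased mappings
           * hash to different locks. */
          return &pool[((uintptr_t)addr >> 4) % NLOCKS];
      }

      typedef struct { long a, b, c, d; } big;  /* too large for a hardware CAS */

      static void big_store(big *obj, const big *val) {
          mtx_t *l = lock_for(obj);
          mtx_lock(l);
          memcpy(obj, val, sizeof *obj);
          mtx_unlock(l);
      }

      static big big_load(const big *obj) {
          big out;
          mtx_t *l = lock_for(obj);
          mtx_lock(l);
          memcpy(&out, obj, sizeof out);
          mtx_unlock(l);
          return out;
      }

      int main(void) {
          for (int i = 0; i < NLOCKS; i++)
              mtx_init(&pool[i], mtx_plain);
          big x = {0, 0, 0, 0};
          big v = {1, 2, 3, 4};
          big_store(&x, &v);
          big r = big_load(&x);
          assert(r.a == 1 && r.d == 4);
          return 0;
      }
      ```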

      C++20 has worked around this by defining std::atomic_ref, which provides the option of storing the lock out-of-line with the object, at the expense of punting the determination of the sharing set for an object to the programmer.

      Oh, and let’s not forget the mtx_timedlock fiasco. Ignoring decades of experience in API design, WG14 decided to make the timeout for a mutex the wall-clock time, not the monotonic clock. As a result, it is impossible to write correct code using C11’s mutexes, because the wall-clock time may move arbitrarily. You can wait on a mutex with a 1ms timeout and discover that, because the clock was reset in the middle of your ‘get time, add 1ms, timedwait’ sequence, you’re now waiting a year (more likely, you’re waiting multiple seconds, and now the tail latency of your distributed system has weird spikes). The C++ version of this API gets it right and allows you to specify the clock to use; pthread_mutex_timedlock got it wrong and ended up with platform-specific work-arounds. Even pthreads got it right for condition variables; C11 predictably got it wrong.
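
      The racy ‘get time, add 1ms, timedwait’ sequence looks like this; note that C11 only lets you express the deadline against TIME_UTC, with no way to request a monotonic clock:

      ```c
      #include <assert.h>
      #include <threads.h>
      #include <time.h>

      int main(void) {
          mtx_t m;
          assert(mtx_init(&m, mtx_timed) == thrd_success);

          struct timespec deadline;
          timespec_get(&deadline, TIME_UTC);      /* (1) read the wall clock */
          deadline.tv_nsec += 1000000;            /* (2) add 1ms */
          if (deadline.tv_nsec >= 1000000000L) {
              deadline.tv_nsec -= 1000000000L;
              deadline.tv_sec += 1;
          }
          /* If the wall clock is stepped between (1) and (3), the real wait
           * is arbitrarily longer or shorter than the 1ms you asked for. */
          int rc = mtx_timedlock(&m, &deadline);  /* (3) */
          assert(rc == thrd_success);             /* uncontended, so it succeeds */
          mtx_unlock(&m);
          mtx_destroy(&m);
          return 0;
      }
      ```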

      C is completely inappropriate as a systems programming language for modern hardware. All of these tweaks are nice cleanups but they’re missing the fundamental issues.

      1. 3

        Then they completely screwed up by making all of the arguments to the functions declared in stdatomic.h take a volatile T* instead of an _Atomic(T)*. For historical reasons, the representation of volatile T and T have to be the same, which means that _Atomic(T) and T must have the same representation and there is nowhere that you can stash a lock.

        I’m not too familiar with atomics and their implementation details, but my reading of the standard is that the functions in stdatomic.h take a volatile _Atomic(T) * (i.e. a pointer to volatile-qualified atomic type).

        They are described with the syntax volatile A *object, and earlier on in the stdatomic.h introduction it says “In the following synopses: An A refers to one of the atomic types”.

        Maybe I’m missing something?

        1. 2

          Huh, it looks as if you’re right. That’s how I read the standard in 2011 when I added the atomics builtins to clang, but I reread it later and thought that I’d initially misunderstood. It looks as if I get to blame GCC for the current mess then (their atomic builtins don’t require _Atomic-qualified types and their stdatomic.h doesn’t check it).

          Sorry WG14, you didn’t get atomics wrong, you just got mutexes and condition variables wrong.

          That said, I’ve no idea why they felt the need to make the arguments to these functions volatile and _Atomic. I am not sure what a volatile _Atomic(T)* actually means. Presumably the compiler is not allowed to elide the load or store even if it can prove that no other thread can see it?

          1. 1

            I’ve no idea why they felt the need to make the arguments to these functions volatile and _Atomic

            I’ve no idea; but a guess: they want to preserve the volatility of arguments to atomic_*. That is, it should be possible to perform operations on variables of volatile type without losing the ‘volatile’. I will note that the C++ atomics contain one overload with volatile and one without. But if that’s the case, why the committee felt they could get away with being polymorphic wrt type, but not wrt volatility, is beyond me.

            There is this stackoverflow answer from a committee member, but I did not find it at all illuminating.

            not allowed to elide the load or store even if it can prove that no other thread can see it?

            That would be silly; a big part of the impetus for atomics was to allow the compiler to optimize in ways that it couldn’t using just volatile + intrinsics. Dead loads should definitely be discarded, even if atomic!


            One thing that is clear from this exchange: there is a massive rift between specifiers, implementors, and users. Thankfully the current spec editor (JeanHeyd Meneide, also the author of the linked post) seems to be aware of this and to be acting to improve the situation; so we will see what (if anything) changes.

            1. 3

              One thing that is clear from this exchange: there is a massive rift between specifiers, implementors, and users. Thankfully the current spec editor (JeanHeyd Meneide, also the author of the linked post) seems to be aware of this and to be acting to improve the situation; so we will see what (if anything) changes.

              It’s not really clear to me how many implementers are left that care:

              • MSVC is a C++ compiler that has a C mode. The authors write in C++ and care a lot about C++.
              • Clang is a C++ compiler that has C and Objective-C[++] modes. The authors write in C++ and care a lot about C++.
              • GCC includes C and C++ compilers with separate front ends, it’s primarily C so historically the authors have cared a lot about C, but for new code it’s moving to C++ and so the authors increasingly care about C++.

              That leaves things like PCC, TCC, and so on, plus a few surviving 16-bit microcontroller toolchains, as the only C implementations that are not C++ with C as an afterthought.

              I honestly have no idea why someone would choose to write C rather than C++ these days. You end up writing more code, you have a higher cognitive load just to get things like ownership right (even if you use nothing from C++ other than smart pointers, your life is significantly better than that of a C programmer), you don’t get generic data structures, and you don’t even get more efficient code, because the compilers are all written in C++ and so care about C++ optimisation: it directly affects the compiler writers.

              C++ is not seeing its market eroded by C but by things like Rust and Zig (and, increasingly, Python and JavaScript, since computers are fast now). C fits in a niche that doesn’t really exist anymore.

              1. 2

                I honestly have no idea why someone would choose to write C rather than C++ these days.

                For applications, perhaps, but for libraries and support code, ABI stability and ease of integration with the outside world are big ones. It’s also a much less volatile language in ways that start to really matter if you are deploying code across a wide range of systems, especially if old and/or embedded ones are included.

                Avoiding C++ (and especially bleeding-edge revisions of it) avoids a lot of real-life problems, risks, and hassles. You lose out on a lot of power, of course, but for some projects the kind of power that C++ offers isn’t terribly important, while the ability to easily run on systems 20 years old or 20 years into the future might be. There’s definitely a sort of irony in C being the real “write once, run anywhere” victor, but… in many ways it is.

                C fits in a niche that doesn’t really exist anymore.

                It might not exist in the realm of trendy programming language debates on the Internet, but we’re having this conversation on systems largely implemented in it (UNIX won after all), so I think it’s safe to say that it very much exists, and will continue to for a long time. That niche is just mostly occupied by people who don’t tend to participate in programming language debates. One of the niche’s best features is being largely insulated from all of that noise, after all.

                It’s a very conservative niche in a way, but sometimes that’s appropriate. Hell, in the absolute worst case scenario, you could write your own compiler if you really needed to. That’s of course nuts, but it is possible, which is reassuring compared to languages like C++ and Rust where it isn’t. More realistically, diversity of implementation is just a good indicator of the “security” of a language “investment”. Those implementations you mention might be nichey, but they exist, and you could pretty easily use them (or adapt them) if you wanted to. This is a good thing. Frankly I don’t imagine any new language will ever manage to actually replace C unless it pulls the same thing off. Simplicity matters in the end, just in very indirect ways…

                1. 4

                  For applications, perhaps, but for libraries and support code, ABI stability and ease of integration with the outside world are big ones. It’s also a much less volatile language in ways that start to really matter if you are deploying code across a wide range of systems, especially if old and/or embedded ones are included.

                  I’d definitely have agreed with you 10 years ago, but the C++ ABI has been stable and backwards compatible on all *NIX systems, and fairly stable on Windows, for over 15 years. C++ provides you with some tools that allow you to make unstable ABIs for your libraries, but it also provides tools for avoiding these problems. The same problems exist in C: you can’t add a field to a C structure without breaking the ABI, just as you can’t add a field to a C++ class without breaking the ABI.
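
                  One of the tools both languages offer for this is keeping the layout out of the public ABI entirely. A single-file C sketch of the opaque-handle idiom (the `widget` names are illustrative, not from any real library):

                  ```c
                  #include <assert.h>
                  #include <stdlib.h>

                  /* Public header part: callers see only an opaque handle, so the
                   * struct's layout never becomes part of the ABI. */
                  typedef struct widget widget;
                  widget *widget_create(int value);
                  int widget_value(const widget *w);
                  void widget_destroy(widget *w);

                  /* Implementation part: fields can be added or reordered in a later
                   * version without breaking compiled callers, because callers never
                   * see sizeof(struct widget). */
                  struct widget {
                      int value;
                      /* a v2 of the library could append fields here freely */
                  };

                  widget *widget_create(int value) {
                      widget *w = malloc(sizeof *w);
                      if (w) w->value = value;
                      return w;
                  }

                  int widget_value(const widget *w) { return w->value; }

                  void widget_destroy(widget *w) { free(w); }

                  int main(void) {
                      widget *w = widget_create(42);
                      assert(w && widget_value(w) == 42);
                      widget_destroy(w);
                      return 0;
                  }
                  ```

                  The C++ equivalent is the pimpl idiom; in both languages the cost is an extra allocation and indirection, which is why exposing the layout (and breaking the ABI on change) is still sometimes the right trade.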

                  I should point out that most of the things that I work on these days are low-level libraries and C++17 is the default tool for all of these.

                  You lose out on a lot of power, of course, but for some projects the kind of power that C++ offers isn’t terribly important, but the ability to easily run on systems 20 years old or 20 years into the future might be.

                  Neither C nor C++ guarantees this. In my experience, old C code needs just as much updating as C++ code, and it’s often harder to do because C code does not encourage clean abstractions. This is particularly true when talking about running on new platforms. From my personal experience: we and another group have recently written memory allocators, ours in C++, theirs in C. This is what our platform and architecture abstractions look like: clean, small, and self-contained. Theirs? Not so much. We’ve ported ours to CHERI, where the hardware enforces strict bounds and provenance on pointers, with quite a small set of changes. That was possible (and maintainable, when most of our targets don’t have CHERI support) because C++ lets us define pointer wrapper types that describe the high-level semantics of the associated pointer and a state machine for which transitions are permitted. Porting theirs would require invasive changes.

                  It might not exist in the realm of trendy programming language debates on the Internet, but we’re having this conversation on systems largely implemented in it (UNIX won after all), so I think it’s safe to say that it very much exists, and will continue to for a long time.

                  I’m writing this on a Windows system, where much of the kernel and most of the userland is C++. I also post from my Mac, where the kernel is a mix of C and C++, with more C++ being added over time, and the userland is C for the old bits, C++ for the low-level new bits, and Objective-C / Swift for the high-level new bits. The only places either of these systems chose C were parts that were written before C++11 was standardised.

                  Hell, in the absolute worst case scenario, you could write your own compiler if you really needed to.

                  This is true for ISO C. In my experience (based in part on building a new architecture designed to run C code in a memory-safe environment and on working to define a formal model of the de-facto C standard), there is almost no C code that is actually ISO C. The language is so limited that anything nontrivial ends up using vendor extensions. ‘Portable’ C code uses a load of #ifdefs so that it can use two or more different vendor extensions. There’s a lot of GNU C in the world, for example.

                  Reimplementing GNU C is definitely possible (clang, ICC, and XLC all did it, with varying levels of success) but it’s hard, to the extent that of these three none actually achieve 100% compatibility to the degree that they can compile, for example, all of the C code in the FreeBSD ports tree out of the box. They actually have better compatibility with C++ codebases, especially post-C++11 codebases (most of the C++ codebases that don’t work are ones that are doing things so far outside the standard that they have things like ‘works with G++ 4.3 but not 4.2 or 4.4’ in their build instructions).

                  More realistically, diversity of implementation is just a good indicator of the “security” of a language “investment”. Those implementations you mention might be nichey, but they exist, and you could pretty easily use them (or adapt them) if you wanted to.

                  There are a few niche C compilers (e.g. PCC / TCC), but almost all of the mainstream C compilers (MSVC, GCC, Clang, XLC, ICC) are C++ compilers that also have a C mode. Most of them are either written in C++ or are being gradually rewritten in C++. Most of the effort in ‘C’ compilers is focused on improving C++ support and performance.

                  By 2018, C++17 was pretty much universally supported by C++ compilers. We waited until 2019 to move to C++17 for a few stragglers, we’re now pretty confident being able to move to C++20. The days when a new standard took 5+ years to support are long gone for C++. Even a decade ago, C++11 got full support across the board before C11.

                  If you want to guarantee good long-term support, look at what the people who maintain your compiler are investing in. For C compilers, the folks that maintain them are investing heavily in C++ and in C as an afterthought.

                  1. 3

                    I’d definitely have agreed with you 10 years ago, but the C++ ABI has been stable and backwards compatible on all *NIX systems, and fairly stable on Windows, for over 15 years. C++ provides you with some tools that allow you to make unstable ABIs for your libraries, but it also provides tools for avoiding these problems. The same problems exist in C: you can’t add a field to a C structure without breaking the ABI, just as you can’t add a field to a C++ class without breaking the ABI.

                    The C++ ABI is stable now, but the problem is binding it from other languages (i.e. try binding a mangled symbol), because C is the lowest common denominator on Unix. Of course, with C++, you can just define a C-level ABI and just use C++ for everything.

                    edit

                    Reimplementing GNU C is definitely possible (clang, ICC, and XLC all did it, with varying levels of success) but it’s hard, to the extent that of these three none actually achieve 100% compatibility to the degree that they can compile, for example, all of the C code in the FreeBSD ports tree out of the box. They actually have better compatibility with C++ codebases, especially post-C++11 codebases (most of the C++ codebases that don’t work are ones that are doing things so far outside the standard that they have things like ‘works with G++ 4.3 but not 4.2 or 4.4’ in their build instructions).

                    It’s funny that no one ever complains about GNU’s extensions to C being so prevalent that they make implementing other C compilers hard, yet people lose their minds over, say, a Microsoft extension.

                    1. 2

                      The C++ ABI is stable now, but the problem is binding it from other languages (i.e. try binding a mangled symbol), because C is the lowest common denominator on Unix. Of course, with C++, you can just define a C-level ABI and just use C++ for everything.

                      That depends a lot on what you’re binding. If you’re using SWIG or similar, then having a C++ API can be better because it can wrap C++ types and get things like memory management for free if you’ve used smart pointers at the boundaries. The binding generator doesn’t care about name mangling because it’s just producing a C++ file.

                      If you’re binding to Lua, then you can use Sol2 and directly surface C++ types into Lua without any external support. With something like Sol2 in C++, you write C++ classes and then just expose them directly from within C++ code, using compile-time reflection. There are similar things for other languages.

                      If you’re trying to import C code into a vaguely object-oriented scripting language then you need to implement an object model in C and then write code that translates from your ad-hoc language into the scripting language’s one. You have to explicitly write all memory-management things in the bindings, because they’re API contracts in C but part of the type system in C++.

                      From my personal experience, binding modern C++ to a high-level language is fairly easy (though not quite free) if you have a well-designed API, binding Objective-C (which has rich run-time reflection) is trivial to the extent that you can write completely generic bridges, and binding C is possible but requires writing bridge code that is specific to the API for anything non-trivial.

                      1. 1

                        Right; I suspect it’s actually better with a binding generator or environments where you have to write native binding code (i.e. JNI/PHP). It’s just annoying for the ad-hoc cases (i.e. .NET P/Invoke).

                        1. 2

                          On the other hand, if you’re targeting .NET on Windows then you can expose COM objects directly to .NET code without any bridging code and you can generate COM objects directly from C++ classes with a little bit of template goo.

        2. 2

          Looks like Hans Boehm is working on it, as mentioned in the bottom of the article. They are apparently “bringing it back up to parity with C++” which should fix the problems you mentioned.

          1. 4

            That link is just Hans adding a <cstdatomic> header to C++ containing #define _Atomic(T) std::atomic<T>. This ‘fixes’ the problem by letting you build C code as C++; it doesn’t fix the fact that C is fundamentally broken and can’t be fixed without breaking backwards source and binary compatibility.

        1. 1

          Probably cleaning up my factory in Satisfactory, and getting to see my girlfriend for the first time in 2 weeks.

          1. 2

            Working and looking for a new job. Already have a couple of leads headed in the right direction, with one I’m quite interested in. It’s about time for a change of pace and to see if I can get a culture that better matches my style of work.

            (Note: it’s been a while since we’ve seen a “Who’s hiring?” post. I’d be very interested to work for some fellow crustaceans!)

            1. 1

              Last one: https://lobste.rs/s/wigiwx/who_s_hiring_q2y2020

              Ours is still open :)

            1. 3

              Apart from $work, I am going to continue reading Crafting Interpreters. I’m already halfway through and am really enjoying it.

              1. 1

                It’s an excellent book. I’ve already cribbed his idea of using stack-allocated values vs. having all values allocated on the heap. Are you following it exactly and making a version of Lox?

                1. 2

                  I am not implementing the AST-based interpreter, since I’ve built quite a few of those already. I’m learning the bytecode VM instead.

              1. 1

                At $WORK, I’ve been having a tough time acclimating to working from home. I’m pretty likely to get distracted if there’s not a person right there all the time. Otherwise, I’m working on the least novel of all projects: a scheme interpreter. I’ve just now got GC working well enough, and my next steps are to write some more aggressive test cases to really exercise everything.

                1. 2

                  Thanks for looking out for this community! I don’t post or comment very often. Do you see that becoming suspicious behavior?

                  1. 5

                    Yeah, I recognize a bit more of myself in that than I would like. I’m in my 7th-ish year of university for computer science because I dropped out one semester to get a job: the classes seemed pretty pointless, and I wanted to move off campus, and to do that one needed money. Fortunately, I’m working for the university (I ended up with a job as a sysadmin, which is a pretty good gig) and I get to keep taking classes (slowly), so I’m actually getting closer to graduating.

                    1. 4

                      It sounds like you’re in a good place. I was in a similar position, but I recently graduated. You can do it, and it’s not a race, so taking a few classes at a time is a totally valid strategy (and the one I used to finish my degree).

                      The kind of behavior he describes is a textbook match for an executive dysfunction such as ADHD. I was diagnosed with it when I was very young. The disorder is unfortunately named: it’s a problem with the brain generating intent to complete actions or seeing the rewards of doing them. A form of time-nearsightedness, if you will. Dr. Russell Barkley has an excellent lecture that explains it in a bit more depth.

                      If this does sound like you, I would recommend speaking to your physician about it. Medication for ADHD is one of the most effective treatments in all of psychology, and for me, it’s a lifesaver.

                    1. 6

                      It took me a while to notice that the GNU project has two Scheme implementations, MIT/GNU Scheme and Guile. Is there any non-historical reason for it? Is one faster while the other (presumably Guile) has a better C-API or is internally more extensible?

                      1. 6

                        I don’t know why MIT Scheme was made official GNU (probably because Stallman studied at MIT, and he may even have worked on the implementation), but as far as I know the idea behind Guile was that it was intended as the official scripting and extension language for the GNU system (GUILE stands for “GNU Ubiquitous Intelligent Language for Extensions”).

                        MIT Scheme used to be a much more heavyweight, full-fledged Scheme implementation. Nowadays Guile has grown to be quite large (and there are lots of packages for it), so it can be used for complete programs too, and the distinction is blurred.

                        1. 3

                          Allegedly, MIT Scheme was around before the FSF or the GPL existed and was also licensed under a copyleft license. If true, it would be a good bet that MIT Scheme inspired the GPL and was subsequently adopted by the FSF.

                          1. 3

                            The changelog contains git commits (probably converted from some other VCS) that date back to 1986. Kind of funny to see :)

                        2. 2

                          I believe you’re right. MIT Scheme is a real-life native code compiler, and correspondingly has better performance; Guile is designed for embedding and extension, and has a better C API.

                          1. 1

                            GNU has a third Scheme implementation that targets the JVM. I was reminded to post this because I saw that GNU also has two Common Lisp implementations: GNU Common Lisp and CLISP.

                          1. 3

                            Polishing up a simple parser for SCL, an Algol contemporary, for my Languages course. After that, maybe starting on a Scheme in SML.

                            1. 12

                              I think that the author misses the point of having those command line tools available. It’s all about gradual development. Unix was designed for interchangeable parts that work together, making it as easy as possible to leverage work that others have done and replacing parts if they fail to work correctly, all the way up to the process level.

                              This kind of interoperability has proven very difficult to accomplish pervasively with GUI applications. You have to standardize on some interface to exchange data, and not only that, create an intuitive method to compose programs together. Unifying the “small, composable tools” and “graphical interface” paradigms would require a drastic change in the way that current GUI applications work, so much so that it would likely break the dominant WIMP paradigm.

                              The author was right: it’s easier to work with text, since your CLI applications already handle it, and it doesn’t take extreme effort to use an application in a way its author never anticipated. I think the best example of this has been Apple’s Automator: sure, you can graphically script your Mac, but you only get the features that the application authors thought to give you.

                              1. 4

                                I might just add that Microsoft (with OLE), Apple (with MacOS classic, as well as AppleScript and Automator), Google (Android fragments, and now again with “stripes”) and many, many others have attempted to make “interchangeable” UI components. I think the closest to that today is something like React, where you really can just import a UI component.

                                That being said, text isn’t exactly “simple.” For one, just text encoding can be tricky. On top of that, applications generally care about the structure of whatever text input they get, whether it be JSON or something else. That means parsers everywhere, which are themselves very complex. I think the PowerShell approach of communicating through objects is worthwhile, since at least in principle behavior can be bundled with data directly.

                                1. 4

                                  This kind of interoperability has proven very difficult to accomplish pervasively with GUI applications. You have to standardize on some interface to exchange data, and not only that, create an intuitive method to compose programs together.

                                  If I understand you correctly, this is something Smalltalk “got right” nearly 40 years ago. All the GUI stuff can be seamlessly reused and composed. No text files needed. It’s actually very simple.

                                  Unifying the “small, composable tools” and “graphical interface” paradigms would require a drastic change in the way that current GUI applications work, so much so that it would likely break the dominant WIMP paradigm.

                                  Maybe, maybe not. The paradigm it clearly breaks is the one where standalone applications are the basic units of commercial value. Some of us may remember OpenDoc.

                                  1. 1

                                    Wow. I wonder if there are still interesting structures in this version like a glider. Hmmm.

                                    1. 2

                                      Yes, you can see the equivalent of a glider in the gif. It’s a lopsided donut.

                                    1. 42

                                      I don’t understand the author’s objection to Outreachy. As far as I can tell, they want to fund some interns from marginalized groups so that they can work on open-source. They are not preventing the author from working on open-source. They are not preventing the author from funding interns he approves of from working on open-source. What is the problem?

                                      1. 24

                                        Outreachy funds members of specific minority groups and would not fund a cisgender white guy’s internship. He decries this as discrimination.

                                        On this topic, the term “discrimination” has differing interpretations, and it’s very easy for folks to talk past each other when it comes up. It sounds like he’s using it to mean disfavoring people based on the sex or race they belong to. Another popular definition is that it only applies to actions taken against groups that have been historically discriminated against. This use gets really strong pushback, from people who disagree with the aims or means of projects like Outreachy, as begging the question: making an assumption that precludes meaningful discussion of related issues.

                                        1. 4

                                          It’s not only that Outreachy would not fund a cisgender white guy’s internship. Outreachy also would not fund an Asian applicant’s internship, and Asians are a minority group that has been historically discriminated against. Outreachy is discriminating against a specific minority. In summary, Outreachy is simply discriminating; it is not using an alternative definition of discrimination.

                                          (Might be relevant: I am Asian.)

                                          1. 7

                                            I asked Karen Sandler. This is the reason for the selection of groups:

                                            <karenesq> JordiGH: I saw the lobsters thread. the expansion within the US to the non-gender related criteria was based on the publication by multiple tech companies of their own diversity statistics. We just expanded our criteria to the groups who were by far the least represented.

                                            1. 3

                                              Thanks a lot for clarifying this with Karen Sandler!

I think this proves beyond any shadow of a doubt that Outreachy is concerned not with historical injustice, but with present disparity.

                                            2. 3

He had a pretty fair description of where the disputes were coming from. As far as what you’re saying on Outreachy, the Asian part still fits into it: even the cultural-diversity classes I’ve seen say the stereotypes around Asians are positive for things like being smart or educated. Overly positive, to the point that suicide due to pressure to achieve was a bit higher according to those sources. There are lots of Asians brought into the tech sector due to a mix of stereotypes and H1-B visas. The commonness of white males and Asians in software development might be why they were excluded along with the white males. That makes sense to me if I look at it through the view they likely have of who is privileged in tech.

                                              1. 4

Yes, it makes sense that way, but it does not make sense in the “historical discrimination” sense pushcx argued. I believe this is evidence that these organizations are concerned with present disparity, not with history. Therefore, I believe they should cease to (dishonestly, I think) make the historical argument.

                                              2. 2

Well, if you were a woman or identified as one they would accept you, regardless of whether you were Asian. I do wonder why they chose to reach out to the particular groups they picked.

                                                And you have to pick some groups. If you pick none/all, then you’re not doing anything different than GSoC, and there already is a GSoC, so there would be no point for Outreachy.

                                                1. 1

                                                  You can pick groups that have been historically discriminated against, as pushcx suggested. Outreachy chose otherwise.

                                                  1. 3

                                                    To nitpick, I was talking about the term “discrimination” because I’ve seen it as a source of people talking past each other, not advocating for an action or even a particular definition of the term. Advocating my politics would’ve compromised my ability to effectively moderate, though incorrect assumptions were still made about the politics of the post I removed and that I did so out of disagreement, so… shrug

                                            3. 51

                                              For those who are used to privilege, equality feels like discrimination.

                                              1. 19

                                                I think the author’s point is that offering an internship for only specific groups is discrimination. From a certain point of view, I understand how people see it that way. I also understand how it’s seen as fair. Whether that’s really discrimination or not is up for debate.

                                                What’s not up for debate is that companies or people should be able to give their money however they feel like it. It’s their money. If a company wants to only give their money to Black Africans from Phuthaditjhaba, that’s their choice! Fine by me!

                                                Edit: trying to make it clear I don’t want to debate, but make the money point.

                                                1. 20

                                                  It is discrimination, that’s what discrimination means. But that doesn’t automatically make it unfair or net wrong.

                                                  1. 13

The alternative is inclusive supply plus random selection. You identify the various groups that exist and go out of your way to bring in a certain number of potential candidates from each one. The selection process is blind: whoever is selected gets the help. Maybe add an auditable process on top of that. This is a fair process that boosts minorities on average to whatever ratio you set for the invitations. It helps whites and males, too.

That’s the kind of thing I push, plus different ways to improve the blindness of the evaluation processes. That is worth a lot of research given how much politics factors into performance evaluations in workplaces. It affects everyone, but minority members even more, per the data. Those methods, an equal pull among various categories, and blind selection are about as fair as it gets. Although I don’t know their exact methods, I did see GapJumpers describing something that sounds close to this, with positive results. So, the less-discriminating way of correcting imbalances still achieves that goal; the others aren’t strictly necessary.
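To make the idea concrete, here’s a rough sketch of that inclusive-supply-plus-blind-selection process in Python. Everything in it is made up for illustration (the `blind_select` helper, the group names, the `score` field, and the qualification rule are all assumptions, not anyone’s actual process):

```python
import random

def blind_select(candidates_by_group, invites_per_group, n_awards, qualifies, seed=None):
    """Sketch of 'inclusive supply + blind random selection'.

    candidates_by_group: dict of group name -> list of {'id', 'score'} records.
    """
    rng = random.Random(seed)  # fixed seed makes the run reproducible/auditable
    pool = []
    for candidates in candidates_by_group.values():
        # Inclusive supply: invite the same number of candidates from every group.
        pool.extend(rng.sample(candidates, min(invites_per_group, len(candidates))))
    # Blind evaluation: keep only an opaque id and the merit criteria,
    # so group membership can't influence the next steps.
    blinded = [{"id": c["id"], "score": c["score"]} for c in pool]
    qualified = [c for c in blinded if qualifies(c)]
    # Random selection among the qualified candidates.
    rng.shuffle(qualified)
    return [c["id"] for c in qualified[:n_awards]]
```

With the same seed the run is repeatable, so an auditor can verify that equal numbers were invited from each group and that group labels never entered the final selection.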

                                                    The next scenario is specific categories getting pulled in more than everyone with organizations helping people in the other ones exclusively to boost them. That’s what’s going on here. Given the circumstances, I’m not going to knock them even if not as fair as other method. They’re still helping. It looks less discriminatory if one views it at a high level where each group addresses those they’re biased for. I did want to show the alternative since it rarely gets mentioned, though.

                                                    1. 14

                                                      I really agree with this. I was with a company who did a teenage code academy. I have a masters, and did a lot of work tutoring undergrads and really want to get back into teaching/academia.

I wanted to teach, but was actually pushed down the list because they wanted to give teaching positions to female staff first. I was told I could take a support role. The company also did a lot of promotion specifically to all-girls schools to try to pull women in. They had males in the classes too, but the promotion was pretty biased.

                                                      Also I want to point out that I had a stronger teaching background/qualifications than some of the other people put in those positions.

I’m for fairness and giving people opportunity, but I feel as if efforts to stop discrimination just lead to more discrimination. The thing is, we’re scientists and engineers. We know the maths. We can come up with better ways to pull in good random distributions of minorities/non-minorities and don’t have to resort to workshops that promote just another equal-but-opposite mono-culture. If anything, you do potential developers a disservice by having workshops that are women-only instead of half-and-half. You get a really one-sided narrative.

                                                      1. 10

I appreciate you sharing that example. It mirrors some that have happened to me. Your case is a good example of sexism against a man who might be more qualified than a woman hired based on gender. I’ll also note that so-called “token hires” are often treated poorly once they get in. I’ve seen small organizations where that’s not true, since the leadership just really believed in being good to people and bringing in different folks. They’re rare. Most seem to be environments people won’t want to be in, since conflict or resentment increases.

In your case and most of those, random + blind selection might have solved the problem over time without further discrimination or resentment. If the process is auditable, everyone knows the race or gender part gave everyone a fair shot; from there, it was performance. That’s a meaningful improvement to me in reducing the negative effects that can kick in when correcting imbalances. What I will say, though, is I don’t think we can always do this, since performance in some jobs is highly face-to-face, based on how groups perceive the performer, etc. I’m still uncertain whether anything other than quotas can help with those.

                                                        Most jobs I see people apply for can be measured, though. If it can be measured, it can sometimes already be blinded or may be measured blindly if we develop techniques for that.

                                                        1. 4

                                                          I agree with these comments, plus, thanks for sharing a real life example. We are definitely fighting discrimination with more discrimination doing things the current way. For a bit I’ve thought that a blind evaluation process would be best. It may not be perfect, but it seems like a step in a better direction. It’s encouraging to see other people talking about it.

One other thought: I think we as a society are handling race, gender, age, etc. problems wrong. Often, it’s framed as how a certain group ‘A’ has persecuted another group ‘B’. However, this isn’t really fair to the people in group ‘A’ who have nothing to do with what the others are doing. Because they share the same gender/race/whatever, they are lumped in. Part of this seems to be human nature, and it’s not always wrong. But maybe fighting these battles in more specific cases would help.

                                                        2. 6

I think the problem here is that whites and males don’t need extra help. They already get enough help from their position in society. Sure, equal distribution sounds great, but adding an equal amount to everyone doesn’t make them equal; it doesn’t nullify the discrepancy that was there before. Is it good to do so? Yes, of course, but it would be better, and better for society, to focus on helping those without built-in privilege to counteract the advantage that white males have.

                                                          1. 10

There are lots of people in bad situations who are white and male. Saying someone’s race and gender determines how much help they have had in life seems both racist and sexist.

                                                            1. 3

                                                              I’m not saying that it applies in all circumstances. But I am saying that they have a much larger support structure available to them, even if they didn’t get started on the same footing as other examples.

                                                              It’s not directly because of their race and sex, it’s because of their privilege. That’s the fundamental difference.

                                                              1. 7

                                                                I don’t even know how much it matters if it was true. Especially in rural or poor areas of white people. Their support structure is usually some close friends, family, people they live with, and so on. Often food stamps, too. Their transportation or Internet might be unreliable. Few jobs close to them. They have to pack up and leave putting themselves or their family into the unknown with about no money to save for both the move and higher cost of living many areas with more jobs will entail. Lots of drug abuse and suicide among these groups relative to whites in general. Most just hope they get a decent job where management isn’t too abusive and the lowish wages cover the bills. Then, you talk about how they have “a much larger support structure available to them” “because of their privilege.” They’d just stare at you blinking wondering what you’re talking about.

                                                                Put Your Solutions Where Your Ideology Is

Since you talk about advantages of privilege and support structures, I’m curious what you’d recommend to a few laypeople in my white family who will work, have basic to good people skills, and are non-technical. They each have a job in an area where there aren’t lots of good jobs. They make enough money to make rent. I often have trouble contacting them because they “have no minutes” on their phones. The areas they’re in have no wired Internet directly to renters (i.e. pay extra for crap), satellite, spotty connections, or they can’t afford it. Some have transportation; others lost theirs as it died, with four-digit repairs eclipsing 1-2 digits of surplus money. All their bosses exploit them to whatever extent possible. All the bosses understaff the schedules so the work can’t get done, then try to work them to death to do it. The schedules they demand are horrible, with at least two of us having schedules that shift anywhere from morning to evening to graveyard shift in mid-week. It kills people slowly over time. Meanwhile, it mentally drains them in a way that prevents them from learning deep stuff that could get them into good jobs. Most of them and their friends feel like zombies due to scheduling, with them just watching TV, chilling with friends/family, or something otherwise comfortable on off days. This is more prevalent as companies like Kronos push their scheduling optimizations into big businesses, with smaller ones following suit. Although not among current family now, many of them in the past worked 2-3 jobs with about no time to sleep or have fun, just to survive. Gets worse when they have an infant or kids.

                                                                This is the kind of stuff common among poor and working classes throughout America, including white people. Is this the average situation of you, your friends, and/or most white males or females you know of? These people “don’t need help?” I’m stretching my brain to try to figure out how what you’re saying fits their situation. In my view, they don’t have help so much as an endless supply of obstacles ranging from not affording bills to their evil bosses whose references they may depend on to police or government punishing them with utility bill-sized tickets for being poor. What is your specific recommendation for white people without any surplus of money, spotty Internet, unreliable transportation, and heavily-disrupted sleep?

                                                                Think quickly, too, because white people in these situations aren’t allowed much time to think between their stressful jobs (often multiple) and families to attend to. Gotta come up with solutions about on instinct. Just take the few minutes of clarity a poor, white person might have to solve a problem while in the bathroom or waiting in line at a store. It’s gotta work with almost no thought, energy, savings, or credit score. What you got? I’ll pass it on to see if they think it’s hopeful or contributes to the entertainment for the day. Hope and entertainment is about the most I can give to the person I’m visiting Saturday since their “privilege” hasn’t brought them much of anything else.

                                                                1. 3

Your comment is a great illustration of the danger of generalizing on the basis of race or gender, mistakenly classifying a lot of people as “privileged”. Ideally, the goal of a charity should be to help unprivileged people in general, for whatever reason they are unprivileged, not because of their race or gender.

                                                                  1. 2

I’m not saying that it’s applicable in every situation; I am specifically talking about the tech industry. I don’t think it’s about prejudice in this case. I think it’s about fixing the tech culture, which white males have an advantage in, regardless of their economic background. I’m not claiming white males always have privilege; that would be a preposterous claim. But it’s pretty lopsided in their favor.

                                                                    1. 3

                                                                      I am specifically talking about the tech industry.

                                                                      It’s probably true if narrowed to tech industry. It seems to favor white and Asian males at least in bottom roles. Gets whiter as it goes up. Unfortunately, they also discriminate more heavily on age, background, etc. They want us in there for the lower-paying stuff but block us from there in a lot of areas. It’s why I recommend young people considering tech avoid it if they’re worried about age discrimination or try to move into management at some point. Seems to reduce the risk a bit.

                                                                  2. 5

                                                                    “It’s not directly because of their race and sex, it’s because of their privilege. That’s the fundamental difference.”

                                                                    But that’s not a difference to other racist/sexist/discriminatory thinking at all. Racists generally don’t dislike black people because they’re black. They think they’re on average less intelligent, undisciplined, whatever, and that this justifies discriminating against the entirety of black people, treating individuals primarily as a product of their group membership.

                                                                    You’re doing the exact same thing, only you think “white people are privileged, they don’t need extra help” instead of “black people are dumb, they shouldn’t get good jobs”. In both cases the vast individual differences are ignored in favor of the superficial criteria of group membership. That is exactly what discrimination is.

                                                                    1. 2

You’re right that I did assume most white males are well off, and it is a good point that they need help too. However, I still think that diversifying the tech industry is a worthy goal, and I think that having a dedicated organization focused only on underrepresented groups is valuable. I just don’t think that white males face the same kind of cultural bias against participating in this industry that the demographics Outreachy targets do, and counteracting that is Outreachy’s goal. Yes, they are excluding groups, but trying to help a demographic or collection of demographics necessarily excludes the others. How could it work otherwise?

                                                                2. 1

                                                                  Why exclude Asians then? Do Asians also already get enough help from their position in society?

                                                                  1. 5

                                                                    Asians are heavily overrepresented in tech. To be fair, the reason we are overrepresented in tech (as in medicine) is likely because software development (like medicine) is an endeavour that requires expertise in challenging technical knowledge to be successful, which means that (unlike Hollywood) you can’t just stick with white people because there simply aren’t enough of them available to do all the work. So Asians who were shut out of other industries (like theatre) flocked to Tech. Black men are similarly overrepresented in the NBA but unfortunately the market for pro basketball players is a bit smaller than the market for software developers.

                                                                    1. 2

                                                                      Do they exclude Asians? I must have missed that one. I don’t think excluding that demographic is justified.

                                                                      1. 3

                                                                        Do they exclude Asians?

                                                                        Yes they do. Quoting Outreachy Eligibility Rules:

You live in the United States or you are a U.S. national or permanent resident living abroad, AND you are a person of any gender who is Black/African American, Hispanic/Latin@, Native American/American Indian, Alaska Native, Native Hawaiian, or Pacific Islander

                                                                        In my opinion, this is carefully worded to exclude Asians without mentioning Asians, even going so far as mentioning Pacific Islander.

                                                                3. 5

It’s a simple calculus of opportunity. Allowing those who already have ample opportunity (i.e. white, cis males) into Outreachy’s funding defeats the point of specifically targeting those who don’t have as much opportunity. It wouldn’t do anything to help balance the amount of opportunity in the world, which is Outreachy’s end goal here.

It’s the author’s idea that they deserve opportunity which is the problem. It’s very entitled, and it betrays that the author can’t understand that they are in a privileged position that prevents them from receiving aid. It’s the same reason the wealthy don’t need tax cuts.

                                                                  1. 1

Outreachy’s end goal seems to be balancing the amount of opportunity in the world for all, except for the Asian minority.

                                                                    1. 5

Each of us gets to choose between doing good and doing best. The best is the enemy of the good. If Outreachy settles for acting against the worst imbalance (in its view) and leaving the rest, that’s just them choosing good over best.

You’re also confusing their present action with their end goals. Those who choose “best” work directly towards their end goal, but Outreachy is in the “good” camp. By picking the worst part of the problem and working on that, they implicitly acknowledge that their current work might be done and there’ll still be work to do before reaching the end goal.

                                                                  2. 4

                                                                    What’s not up for debate is that companies or people should be able to give their money however they feel like it.

                                                                    That is debatable. But, I too think Outreachy is well within their rights.

                                                                  3. 7

                                                                    I’m not going to complain about discrimination in that organization since they’re a focused group helping people. It’s debatable whether it should be done differently. I’m glad they’re helping people. I will note that what you just said applies to minority members, too. Quick example.

While doing mass-market customer service (First World slavery), I ran an experiment: treating everyone in a slightly-positive way with no differences in speech or action based on common events, instead of treating them way better than they deserved like we normally did. I operated off a script rotating lines so it wasn’t obvious what I was doing. I did this with different customers in a new environment for months. Rather than appreciation, I got more claims of racism, sexism, and ageism during that period than I ever did at that company. It was clear they didn’t know what equal treatment or meritocracy felt like. So many individuals or companies must have spoiled them that experiencing equality once made them “know” the people they interacted with were racist, sexist, etc. There were irritated people among white males, but they just demanded better service based on brand. This happened with coworkers in some environments, too, when I came in not being overly selfless. The whites and males just considered me slightly selfish, trading favors, while a number of non-whites or women suspected it was because they were (insert category here). They stopped thinking that after I started treating them better than other people did and doing more of the work myself. So, it was only “equal” when the white male was doing more of the work, giving more service in one-way relationships, etc.

                                                                    I’d love to see a larger study done on that kind of thing to remove any personal or local biases that might have been going on. My current guess is that their beliefs about what racism or sexism are shifted their perceptions to mis-label the events. Unlike me, they clearly don’t go out of their way to look for more possibilities for such things. I can tell you they often did in the general case for other topics. They were smart or open-minded people. Enter politics or religion, the mind becomes more narrow showing people what they want to see. I spent most of my life in that same mental trap. It’s a constant fight to re-examine those beliefs looking at life experiences in different ways.

                                                                    So, I’m skeptical when minority members tell me something was about their status because I’ve personally witnessed them miscategorizing so many situations. They did it by default actually any time they encountered provable equality or meritocracy. Truth told, though, most things do mix forms of politics and merit leaning toward politics. I saw them react to a lot of that, too. I’m still skeptical since those situations usually have more political biases going on than just race or gender. I can’t tell without being there or seeing some data eliminating variables what caused whatever they tell me.

                                                                    1. 18

                                                                      So, in your anecdotal experience, other people’s anecdotal experience is unreliable? 😘

                                                                      1. 5

You got jokes lol. :) More like I’m collecting data on many views from each group to test my hypotheses, whereas many of my opponents are suppressing alternative views in data collection, in interpretation, and in enforcement. Actually, it seems to be the default on all sides to do something like that. Any moderate listening closely to those who disagree, looking for evidence of their points, is an outlier. Something is wrong with that at a fundamental level.

So, I then brought in my anecdotes to illustrate it, given I never see them in my opponents’ data or models. I might be wrong and their anecdotes right. I just think their model should include the dissent in their arguments, along with reasons it does or doesn’t matter. The existence of dissent by non-haters in minority categories should be a real thing that’s considered.

                                                                      2. 3

I think that the information asymmetry you had with your anecdotes affected some of the reactions you got. For one, if someone considers your actions negative in some way, they are conditioned by society to assume that you were being prejudiced. If your workplace was one with more of a negative connotation (perhaps a debt collection service or what have you), that goes double. That’s a reason for the perceived negativity that your white male colleagues didn’t even have to consider, so they concluded that you were just being moderately nice. Notice that you didn’t have to be specifically discriminatory, nor was it necessarily fair. It’s just one more negative thing that happens because prejudice does exist. I would imagine that you would not have gotten so many negative reactions if you had explained exactly what you were doing vis-a-vis the randomization of greetings and such. I think I would discount perceived discrimination if someone did that to me.

                                                                    2. 15

                                                                      Yes, it’s a ludicrous hissy fit. Especially considering that LLVM began at UIUC which, like many (most? all?) universities, has scholarships which are only awarded to members of underrepresented groups–so he’d have never joined the project in the first place if this were truly a principled stand and not just an excuse to whine about “the social injustice movement.” (I bet this guy thinks it’s really clever to spell Microsoft with a $, too.)

                                                                      1. 7

That jab, “Microsoft with a $,” was really uncalled for. You have no evidence of this. Please stop.

                                                                        1. 10

                                                                          The point is a bit bluntly made, but it’s for a reason. There’s a certain kind of internet posting style which uses techniques like changing “social justice movement” to “social injustice movement” to frame the author’s point of view. Once upon a time “Micro$oft” was common in this posting style.

                                                                          For extreme cases of this, see RMS’ writing (Kindle=Swindle, etc).

                                                                          (The problem with these techniques, IMO, is that they’re never as clever and convincing as the person writing them thinks that they are. Maybe they appeal to some people who already agree with that point of view, but they can turn off anyone else…)

                                                                          1. 2

                                                                            I think there is a difference here. “Microsoft” is not framing any point of view. “social justice movement”, on the other hand, is already framing certain point of view. I think “social injustice movement” is an acceptable alternative to “so-called social justice movement”, because prefixing “so-called” every time is inconvenient.

                                                                      2. 0

Without more info, it seems like a persecution complex.

                                                                      1. 1

I’m glad this is here. Tcl is an avenue of syntax that seems relatively unexplored, one that could combine Lisp’s amenity to metaprogramming with the ease of use of Ruby and friends. I’d like to see Tcl or a derivative make a comeback.

                                                                        1. 22

I think it comes down to this: if someone’s reading your code, they’re trying to fix a bug, or otherwise trying to understand what it’s doing. Oddly, a single, large file of spaghetti code, the antithesis of everything we as developers strive for, can often be easier to understand than finely crafted object-oriented systems. I find I would much rather trace through a single source file than sift through files and directories of the interfaces, abstract classes, and factories of the sort many architects favor nowadays. Maybe I have been in Java land for too long?

                                                                          1. 10

                                                                            This is exactly the sentiment behind schlub. :)

Anyways, I think you hit the nail on the head: if I’m reading somebody’s code, I’m probably trying to fix something.

                                                                            Leaving all of the guts out semi-neatly arranged and with obvious toolmarks (say, copy and pasted blocks, little comments saying what is up if nonobvious, straightforward language constructs instead of clever library usage) makes life a lot easier.

                                                                            It’s kind of like working on old cars or industrial equipment: things are larger and messier, but they’re also built with humans in mind. A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch–this is similar to how new cars are all built with heavy expectation that either robots assemble them or that parts will be thrown out as a unit instead of being repaired in situ.

                                                                            1. 6

                                                                              You two must be incredibly skilled if you can wade through spaghetti code (at least the kind I have encountered in my admittedly meager experience) and prefer it to helper function calls. I very much prefer being able to consider a single small issue in isolation, which is what I tend to use helper functions for.

                                                                              However, a middle ground does exist, namely using scoping blocks to separate out code that does a single step in a longer algorithm. It has some great advantages: it doesn’t pollute the available names in the surrounding function as badly, and if turned into an inline function can be invoked at different stages in the larger function if need be.

                                                                              The best example of this I can think of is Jonathan Blow’s Jai language. It allows many incremental differences between “scope delimited block” and “full function”, including a block with arguments that can’t implicitly access variables outside of the block. It sounds like a great solution to both the difficulty of finding where a function is declared and the difficulty in thinking about an isolated task at a time.

                                                                              1. 2

                                                                                It’s a skill that becomes easier as you do it, admittedly. When dealing with spaghetti, you only have to be as smart as the person who wrote it, which is usually not very smart :D.

                                                                                As others have noted, where many fail is too much abstraction, too many layers of indirection. My all time worst experience was 20 method calls deep to find where the code actually did something. And this was not including many meaningless branches that did nothing. I actually wrote them all down on that occasion for proof of the absurdity.

The other thing that kills you when working with others’ code is functions/methods that don’t do what their names say. I’ve personally wasted many hours debugging because I skipped over the function that mutated data it shouldn’t have, judging from its name. Pro tip: check everything.

                                                                                1. 2

                                                                                  Or you can record what lines of code are actually executed. I’ve done that for Lua to see what the code was doing (and using the results to guide some optimizations).

                                                                                  1. 1

                                                                                    Well, I wouldn’t say “incredibly skilled” so much as “stubborn and simple-minded”–at least in my case.

                                                                                    When doing debugging, it’s easiest to step through iterative changes in program state, right? Like, at the end of the day, there is no substitute for single-stepping through program logic and watching the state of memory. That will always get you the ground truth, regardless of assumptions (barring certain weird caching bugs, other weird stuff…).

                                                                                    Helper functions tend to obscure overall code flow since their point is abstraction. For organizing code, for extending things, abstraction is great. But the computer is just advancing a program counter, fiddling with memory or stack, and comparing and branching. When debugging (instead of developing), you need to mimic the computer and step through exactly what it’s doing, and so abstraction is actually a hindrance.

Additionally, people tend to do things like reuse abstractions across unrelated modules (say, for formatting a price or something), and while that is very handy it does mean that a “fix” in one place can suddenly start breaking things elsewhere, or that instrumentation (ye olde printf debugging) can end up with a bunch of extra noise. One of the first things you see people do for fixes in the wild is duplicate the shared utility function, append a `2`, `Fixed`, or `Ex` to the name, and patch and use the new version only in the code they’re fixing!

                                                                                    I do agree with you generally, and I don’t mean to imply we should compile everything into one gigantic source file (screw you, JS concatenators!).

                                                                                    1. 3

                                                                                      I find debugging much easier with short functions than stepping through imperative code. If each function is just 3 lines that make sense in the domain, I can step through those and see which is returning the wrong value, and then I can drop frame and step into that function and repeat, and find the problem really quickly - the function decomposition I already have in my program is effectively doing my bisection for me. Longer functions make that workflow slower, and programming styles that break “drop frame” by modifying some hidden state mean I have to fall back to something much slower.

                                                                                      1. 2

                                                                                        I absolutely agree with you that when debugging, it boils down to looking and seeing, step by step, what the problem is. I also wasn’t under the impression that you think that helper functions are unnecessary in every case, don’t worry.

                                                                                        However, when debugging, I still prefer helper functions. I think it’s that the name of the function will help me figure out what that code block is supposed to be doing, and then a fix should be more obvious because of that. It also allows narrowing down of an error into a smaller space; if your call to this helper doesn’t give you the right return, then the problem is in the helper, and you just reduced the possible amount of code that could be interacting to create the error; rinse and repeat until you get to the level that the actual problematic code is at.

                                                                                        Sure, a layer of indirection may kick you out of the current context of that function call and perhaps out of the relevant interacting section of the code, but being able to narrow down a problem into “this section of code that is pretty much isolated and is supposed to be performing something, but it’s not” helps me enormously to figure out issues. Of course, this only works if the helper functions are extremely granular, focused, and well named, all of which is infamously difficult to get right. C’est la vie.

                                                                                        Anyways, you can do that with a comment and a block to limit scope, which is why I think that Blow’s idea about adding more scoping features is a brilliant one.

On an unrelated note, the bug fixes where a particular entity is just copied and then a version number or what have you is appended hit way too close to home. I have to deal with that constantly. However, I am struggling to think of a situation where just patching the helper isn’t the correct thing to do. If a function is supposed to do something, and it’s not, why make a copy and fix it there? That makes no sense to me.

                                                                                        1. 1

                                                                                          It’s a balance. At work, there’s a codebase where the main loop is already five function calls deep, and the actual guts, the code that does the actual work, is another ten function calls deep (and this isn’t Java! It’s C!). I’m serious. The developer loves to hide the implementation of the program from itself (“I’m not distracted by extraneous detail! My code is crystal clear!”). It makes it so much fun to figure out what happens exactly where.

                                                                                    2. 2

                                                                                      A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch

                                                                                      I do quick patches in Haskell all the time.

                                                                                      1. 1

I’ll add that one of the motivations for improved structure (e.g. functional programming) is to make it easier to do those patches. Especially anything that brings extra modularity or isolation of side effects.

                                                                                    3. 6

                                                                                      I think it’s a case of OO in theory and OO as dogma. I’ve worked in fairly object oriented codebases where the class structure really was useful in understanding the code, classes had the responsibilities their names implied and those responsibilities pertained to the problem the total system was trying to solve (i.e. no abstract bean factories, no business or OSS effort has ever had a fundamental need for bean factories).

                                                                                      But of course the opposite scenario has been far more common in my experience, endless hierarchies of helpers, factories, delegates, and strategies, pretty much anything and everything to sweep the actual business logic of the program into some remote corner of the code base, wholly detached from its actual application in the system.

                                                                                      1. 7

                                                                                        I’ve seen bad code with too many small functions and bad code with god functions. I agree that conventional wisdom (especially in the Java community) pushes people towards too many small functions at this point. By the way, John Carmack discusses this in an old email about functional programming stuff.

Another thought: tooling can affect style preferences. When I was doing a lot of Python, I noticed that I could sometimes tell whether someone used IntelliJ (an IDE) or a bare-bones text editor based on how they structured their code. IDE people tended (not an iron law by any means) towards more, smaller files, which I hypothesized was a result of being able to go-to-definition more easily. Vim / Emacs people tended instead to lump things into a single file, probably because both editors make scrolling to lines so easy. Relating this back to Java: it’s possible that because nearly everyone in Java land uses heavyweight IDEs (and because Java requires one class per file), there’s a bias towards smaller files.

                                                                                        1. 1

                                                                                          Yes, vim also makes it easy to look at different parts of the same buffer at the same time, which makes big files comfortable to use. And vice versa, many small files are manageable, but more cumbersome in vim.

                                                                                          I miss the functionality of looking at different parts of the same file in many IDEs.

                                                                                      2. 3

                                                                                        Sometimes we break things apart to make them interchangeable, which can make the parts easier to reason about, but can make their role in the whole harder to grok, depending on what methods are used to wire them back together. The more magic in the re-assembly, the harder it will be to understand by looking at application source alone. Tooling can help make up for disconnects foisted on us in the name of flexibility or unit testing.

                                                                                        Sometimes we break things apart simply to name / document individual chunks of code, either because of their position in a longer ordered sequence of steps, or because they deal with a specific sub-set of domain or platform concerns. These breaks are really in response to the limitations of storing source in 1-dimensional strings with (at best) a single hierarchy of files as the organising principle. Ideally we would be able to view units of code in a collection either by their area-of-interest in the business domain (say, customer orders) or platform domain (database serialisation). But with a single hierarchy, and no first-class implementation of tagging or the like, we’re forced to choose one.

                                                                                        1. 4

                                                                                          Storing our code in files is a vestige of the 20th century. There’s no good reason that code needs to be organized into text files in directories. What we need is a uniform API for exploring the code. Files in a directory hierarchy is merely one possible way to do this. It happens to be a very familiar and widespread one but by no means the only viable one. Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway. We could just store that on disk as a single structured binary file with a library for reading and modifying it.

                                                                                          1. 3

                                                                                            Yes! There are so many more ways of analysis and presentation possible without the shackles of text files. To give a very simple example, I’d love to be able to substitute function calls with their bodies when looking at a given function - then repeat for the next level if it wasn’t enough etc. Or see the bodies of all the functions which call a given function in a single view, on demand, without jumping between files. Or even just reorder the set of functions I’m looking at. I haven’t encountered any tools that would let me do it.

                                                                                            Some things are possible to implement on top of text files, but I’m pretty sure it’s only a subset, and the implementation is needlessly complicated.

                                                                                            1. 1

                                                                                              Anyone who truly thinks this would be better ought to go learn some lisp.

                                                                                              1. 1

                                                                                                I’ve used Lisp but I’m still not sure what your point is here. Care to elaborate?

                                                                                                1. 2

IIRC, the s-expr style that Lisp is written in was originally meant to be the AST-like form used internally. The original plan was to build a more sugared syntax over it. But people got used to writing the s-exprs directly.

                                                                                                  1. 1

Exactly this: any binary representation would presumably be the AST in some form, which Lisp s-expressions are, serialized/deserialized to text. Specifically:

                                                                                                    It happens to be a very familiar and widespread one but by no means the only viable one.

XML editors come to mind that provide a tree view of the data, as one possible alternative editor. I personally would not call this viable, certainly not desirable. Perhaps you have in mind other graphical programming environments; I haven’t found any (that I’ve tried) to be usable for real work. Maybe you have something specific in mind? Excel?

                                                                                                    Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway

                                                                                                    The resulting parse can depend on the environment in many languages. For example the C preprocessor can generate vastly different code depending on how system variables are defined. This is desirable behavior for os/system level programs. The point here is that in at least this case the source actually encodes several different programs or versions of programs, not just one.

My experience with this notion that text is somehow not desirable for programs is colored by using visual environments like Alice, or trying to coerce GUI builders to produce the layout I want. Text really is easier than fighting arbitrary tools. Plus, any non-text representation would have to solve diffing and merging for version control. Tree diffing is a much harder problem than diffing text.

                                                                                                    People who decry text would have much more credibility with me, if they addressed these types of issues.

                                                                                              2. 1

                                                                                                Yes, I’m 100% in agreement.

                                                                                            2. 2

That’s literally true! I work with some old code and things are really easy. There are lots of files, but they’re all divided up in such an easy way.

On the other hand, in a new project that’s divided into lots of tiers with strict guidelines, it becomes hard for me to just find the line where a bug occurs.