My boss at my previous company was dogmatic to the point of perversity. During code reviews, strict adherence to coding style guidelines (PEP-8+100 for Python, his own for C that included *shudder* Hungarian notation) was more important than the actual meaning of the code.
I get that consistent style is important, I do, but we spent more time in code reviews discussing how to format our commit messages than we did talking about the algorithms being developed.
Sometimes in C I really need try/finally, and the easiest, clearest, most maintainable way to do that in C is with goto statements to a cleanup label at the end of the function. I cringe when I want to do that because I know the battles that will come, due to another “considered harmful” article.
“If you want to go somewhere, goto is the best way to get there.” – Ken Thompson
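The goto-to-cleanup pattern described above can be sketched minimally like this (the file and buffer resources here are purely illustrative, not from any particular codebase):

```c
#include <stdio.h>
#include <stdlib.h>

/* try/finally in C: every failure path jumps to a label that releases
 * exactly the resources acquired so far, in reverse order. */
int process(const char *path)
{
    int ret = -1;
    FILE *f = NULL;
    char *buf = NULL;

    f = fopen(path, "r");
    if (!f)
        goto out;

    buf = malloc(4096);
    if (!buf)
        goto close_file;

    /* ... the actual work with f and buf would go here ... */
    ret = 0;

    free(buf);
close_file:
    fclose(f);
out:
    return ret;
}
```

Every exit path flows through the same cleanup code, which is the “finally” part; adding a new resource means adding one label rather than duplicating cleanup in every early return.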
This is why I like tools like gofmt and rustfmt. They make a lot of these decisions for you, automatically, so that there’s nothing to argue about.
I don’t know about anyone else, but I found this substantially more complicated than it needed to be.
First, read the intro to “How to Fold a Julia Fractal.” The whole thing is long, but you only need the part that explains imaginary numbers. They’re “numbers that turn.”
Then, look at how a quaternion is represented: (1, i, j, k). The 1 is a scalar value indicating that no scaling is happening. Then i, j, and k are just “numbers that turn,” being used to describe rotation in three distinct axes. So altogether, a quaternion is a compact way to describe three-dimensional rotation.
There’s nothing special about the fact that three imaginary components are being used, except that we live in three-dimensional space, and so real-world things rotate in three dimensions. If you wanted to model a higher-dimensional world, you could imagine rotation (say, in 8 dimensions) being represented like this: (1, i, j, k, l, m, n, o, p), with each of the imaginary numbers representing one dimension of rotation.
Note of course that none of this gets much into the math. It’s just a way of understanding quaternions intuitively.
I’m going to be nitpicky:
Clojure is a dynamic language. No matter where you stand on the static vs. dynamic typing debate, knowing languages in both camps is important. Clojure has a kind of optional typing, but in essence it’s dynamic.
There is no such thing as a dynamic language. I’ll repeat: there is no such thing as a dynamic language. There are many axes along which a language can be dynamic, including:
Dynamicity is a property of these concepts. You can have a language with dynamic scope and types. That doesn’t make it a “dynamic language.” The first has a technical meaning, the other is marketing speak.
The same should be said of “strong typing” which is an utter nonsense phrase.
can you think of a single common use of “dynamic language” that does not refer to dynamic types? it’s a perfectly fine shorthand for “dynamically typed language”; the other properties you mention may be there in addition but they are not the motivation for the term
Yes, for example I’ve heard the term used to refer both to Perl and Common Lisp’s use of dynamic scope.
Even if it’s correct by usage (which I do not think it is), it’s imprecise. In fact, even the thing it stands for is imprecise. “Dynamic types” aren’t types. They follow completely different rules and work completely differently.
There’s a lot of imprecision in the terms used by laypeople to discuss programming languages, and there’s no reason for it when perfectly good and precise terms exist.
And dynamic scope isn’t scope either. So-called “dynamic scope” is just syntactic sugar for inserting and removing everything into and from a single table that exists in the global scope.
Well, all scope is just syntactic sugar. A single global table that maintains a stack of past bindings and unwinds them automatically as you leave a binding scope seems pretty scope-like to me, in the sense of being a programming abstraction.
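As a toy sketch of that idea, here is one global variable backed by a stack of bindings, with manual bind/unbind standing in for what a language with dynamic scope would do automatically on scope entry and exit (all names here are made up for illustration):

```c
/* "Dynamic scope" for one variable x: a global stack of bindings.
 * x_stack[x_top] is the current binding; entering a binding scope
 * pushes, leaving it pops, unwinding to the previous binding. */
static int x_stack[16] = { 1 };  /* slot 0 holds the outermost binding */
static int x_top = 0;

static void bind_x(int v)  { x_stack[++x_top] = v; }  /* enter scope */
static void unbind_x(void) { --x_top; }               /* leave scope */
static int  x(void)        { return x_stack[x_top]; }
```

Any function called between bind_x(2) and unbind_x() sees x() == 2 regardless of where that function was defined, which is exactly the dynamic-scope behavior being described.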
No, scope is not syntactic sugar. You can’t reproduce multiple (lexical, there is no other kind) scopes in a uniscoped language. Just like you can’t reproduce multiple (static, there is no other kind) types in a unityped language.
(I’m going to ignore the Church of Bob Harper newspeak and bite anyway.)
Do you have some specific sense in which you mean “reproduce”, beyond simply being able to program in a style where you have lexical variable name resolution, and can’t refer to variable names not in the lexical scope they’re in? Unless I’m missing what you mean, you can implement that notion of scope in a Lisp that doesn’t have it by using macros to statically resolve names. There are several possible strategies for that, just as there are a number of strategies for implementing lexical scoping in a compiler. For example, you can name-mangle every scoped variable to a distinct global variable name, and then have a pile of macros statically resolve the bindings. Some older Algol-style languages’ compilers did something almost exactly like that, although not implemented as macros within the language.
Granted, this isn’t “safe” in the sense that someone with some effort can get around the scoping, but that’s also true of other languages that are conventionally said to have scopes, like C.
beyond simply being able to program in a style where you have lexical variable name resolution, and can’t refer to variable names not in the lexical scope they’re in?
Well, that’s the whole point: to deem non-closed expressions meaningless outside of a context where their free variables are resolved.
For example, you can name-mangle every scoped variable to a distinct global variable name, and then have a pile of macros statically resolve the bindings.
Have fun implementing recursive procedures that way.
Have fun implementing recursive procedures that way.
There is no problem with recursion if said global mangled variable is bound to stack of values :)
So then you need a hardcoded stack abstraction, which is somehow immune to being wrongly manipulated by users. For instance, if the operation “push into a stack” uses a local variable, then the user can’t mess with that variable.
Lisp provides you with facilities for generating names that do not collide with user code. This is what hygiene is in the context of macros.
But you are technically correct - every abstraction can be broken by sufficiently motivated user :)
And if you don’t want to bother, there’s always the copout C uses: restrict recursive procedures to the top level. I think C is still normally considered to have scoped variables despite this limitation, though maybe some disagree, and it makes the implementation of scoping particularly simple.
C doesn’t allow nested procedures so in that sense, yes, C restricts recursive procedures to the top level. And yes, C does have scoped variables. In this example:
int x; /* global x */
void foo()
{
int x; /* local x */
/* ... */
}
Each x variable will have a different address. Furthermore, the address of x inside of foo() can change from call to call. They are distinct variables—C does not “save” the previous version of the global variable x when foo() is called. It can’t—because foo() could call a function bar() that does reference the global x and things would get mighty confused otherwise.
Yes, I agree with that. My point was that if you accept that C’s scoping arrangement really counts as (lexical) scoping (I’m unclear whether @pyon would), then it becomes easier to say that you can implement (lexical) scoping in a language lacking it, which was the bone of contention above— because the C approach to scoping is pretty straightforward to implement in terms of name-mangling the scoped variables into global variables and using some macros to statically resolve the nested bindings.
IME, the FP vs OO debate usually has a lot to do with the state of things right now. Maybe the author is correct in that you could do a lot of these things in either. But who knows because it’s not done. Of the three languages I know of that “successfully” combine FP and OO:
In Ocaml, objects are almost never used. I do see objects show up now and then but very infrequently.
In Scala, I do not have enough experience to comment however people I trust struggle quite a bit with it.
In F#, it seems like it tries to be as much like Ocaml as it can get and still survive in .Net. But I don’t have enough experience in it to draw a strong conclusion.
I think the author also misses the mark, as he seems to be comparing languages rather than paradigms. In my experience, if one looks at the successful examples of OO languages doing functional things and people liking it, it’s usually because they are explicitly getting away from the OOP tar pit. The problem with objects is they are too powerful. Each object is its own universe, and one rarely wants that. Looking at Java incorporating FP concepts and calling it a win for OOP is missing the point, I think. It’s rather an admission that usually one probably doesn’t want objects, and if you hold your nose and squint really hard you can sort of get Java to do that by convention.
Clearly I fall on the FP-side of things.
Of the three languages I know of that “successfully” combine FP and OO:
Arguably the reason this is such a big debate is that “OO” refers to a big pile of things, some of which are problematic (inheritance) and others which are no-brainers (hiding implementation details) and some which are somewhere in the middle (polymorphism, message passing).
The other reason is that like speed, “FP” is a property of programs, not of languages. A language can only be “more FP” or “less FP” in that it contains the means for making FP programs convenient or awkward.
FP is also a big pile of things, or rather several big piles of things that vary depending on who you ask, which most people pretend are just one pile of things.
FP isn’t a technical definition. There are a couple good technical definitions for things like “pure” and “value semantics” and sometimes people use these in place of “FP”. There are also cultural movements and quacks-like-a-duck style arguments to be had. They all fall down the same way defining OO does.
The author has it dead on: interesting questions only exist beneath the surface here. Beyond that, saying FP or OO is nothing more than stating your nationality.
Rust seems to successfully combine them. If you discount the immutability requirement, so does Go. The boundaries for the two paradigms are definitely blurring. C# has lambdas and closures and has had them for a long time. Java recently got them. I don’t think the current division will last long.
Rust is not OO. Rust’s style can be thought of as closer to C-style structured programming with Haskell’s type system.
In terms of programming style, Rust code reminds me a lot more of C++ style programming than of C-style structured programming. It’s true that it lacks several of the classic OO features C++ supports, especially the runtime oriented ones like dynamic dispatch on subtypes. But inheritance is deemphasized in modern C++ anyway, and the other features that make C++ style programming considerably different than C-style programming largely are present in Rust: RAII, monomorphized generics, exceptions, destructors, etc.
I don’t think that necessarily makes it OO (arguably certain styles of modern C++ are barely OO either), but it really doesn’t look much like C-style structured programming.
C++ is explicitly a multi-paradigm language. One which can be best characterized as “stealing every popular feature from other popular languages since 1983.” You’re right that modern C++ does not match the OO paradigm, but one can still certainly write C++ in an OO style (heck, C++ started as “C with classes,” literally the addition of OO faculties to C).
C++ started with Simula: Stroustrup wanted to add its style to C, in the form of C with Classes. Then he added stuff from ALGOL68, Ada, CLU, and ML. So, yeah, a pile of features from other languages all thrown into one. I’ll add they’re also from languages that were sometimes really different from one another. It had to be done with close compatibility to C w/ low-cost abstractions. This situation leads to a language that’s ridiculously difficult to compile or even learn easily.
So, yeah, multi-paradigm for sure. Not cleanly like LISP either. ;)
Rust is absolutely OO. It has objects with methods. It also has a type system inspired by Haskell. The two are not mutually exclusive. Your statement just makes the OP’s point for him. The lines here are blurring and it’s probably a good thing.
Rust does not have objects, nor does it have methods in the way that term is understood in OO.
I can and will get into a technical explanation of why characterizing Rust as OO is wrong, but I think the philosophical perspective is more important:
Programming paradigms are mental models for thinking about programs. When a programming language is said to fit a particular paradigm, it means that the language provides facilities which are conducive to the writing of code which flows neatly from the model that paradigm encapsulates. When I say that a language is functional, I do not mean that it has the particular qualia of functional programming (as any attempt to list out a set of necessary and sufficient conditions for this categorization based on those qualia is doomed to fail); I mean that when I mentally model my program in the manner implied by the paradigm, I can easily convert that model into real code in the given language.
From this perspective, Rust is not OO, as an OO mental model is one where the program is understood as a collection of objects in a hierarchy communicating between each other to facilitate some behavior. This is not the mental model for which Rust is designed. You may be able to translate from such a mental model into a Rust program, but as anyone who has tried will tell you, it is real translation work. Strike one against the notion that Rust is OO.
Now, to some technical points. The difficulty here is that there is more than one characterization of the necessary and sufficient conditions for OO categorization. Rather than wade into that argument, I will take a look at a few of the features commonly agreed to be necessary, if not sufficient, and look at whether Rust meets them.
First, one distinction commonly found between the OO paradigm and the functional paradigm is how polymorphism is provided for. In the world of functional programming, parametric polymorphism is more often found. In OO, it is more often subtype polymorphism. Rust’s focus is clearly on parametric polymorphism, facilitated by the constraint mechanism of traits, same as Haskell with its typeclasses. Now, Rust does have a limited amount of subtype polymorphism, but it is found solely in polymorphism over lifetimes, to allow for the use of something which lives longer than a required constraint. A more general mechanism for subtype polymorphism of the kind one would expect in an OO language (usually provided via inheritance) does not exist in Rust. Strike two against the notion that Rust is OO.
Second, let us return to the notion of methods. Rust does have functions which can be called using the familiar “method notation” of data.function_name(), with the data being implicitly passed as a first parameter. But using this syntax does not mean that Rust’s facility here is akin to methods in OO languages. In fact, Rust lacks a key feature here, which distinguishes it. Other OO languages have something called “open recursion.” Bob Nystrom has a nice explanation of this term, which I encourage you to read. What it means is that all the methods for a given object are visible to each other, and that methods defined on a base object have access to the real receiver of the method call (which may be some subtype of the base object). Rust does not have this, as Rust does not have inheritance, and the extension of this to “open recursion over subtype lifetimes” is essentially meaningless, as the lifetime system doesn’t parameterize functions over concrete lifetimes anyway, but on abstract lifetime constraints (but I digress). Anyway, strike three against the notion that Rust is OO.
I could go on, but I will simply add to this that questions of how to do OO programming in Rust pop up all the time, and the consistent answer is that it can’t be done and you need to figure out another mental model for working with Rust. Either this is true, and Rust is not OO, or this is wrong, and somehow a bunch of people have all missed how Rust is totally OO and workable with that paradigm. Occam’s Razor certainly suggests the former.
Both of your technical reasons are just “Rust doesn’t have subclassing/inheritance”. This is often associated with OO, but I disagree that it’s a necessary part of OO. Based on a cursory peek, it seems like many definitions online agree with me on this.
Okay, fine. Let’s try some other ones.
I could go on. I gave those two examples because they were the first that came to me, not because they are the only ones.
I think that perhaps the core of this disagreement is that you have a definition of OO that does not fit with the wider more common definition of OO. Which again is sort of the point of the OP’s article. Being precise about what particular feature/property someone means when they say something is OO or Functional is a far more useful conversation than the general terms OO and Functional can communicate.
For instance, I find Rust’s properties of Immutability by default, Objects with Methods defined on them, Dynamic Dispatch, ADT or Enum datatypes, and Pattern Matching to be quite useful. That is both a more detailed and more useful discussion than whether or not Rust is a Functional or Object Oriented language.
The items I listed are literally the heading items from the Wikipedia page for object-oriented programming.
Instead of Wikipedia, I’d say describe which items on your list were present in Simula, Smalltalk, and C++ since they’re the languages that brought us OOP style either inventing it or popularizing practical use of it.
I’m not going to play the game of which definition of OO is the perfectly right one, trying the proper incantation of features to analyze to show that Rust isn’t OO. I have made my case in terms of the most common features which available references list as representative of the OO paradigm. If you are unpersuaded, fine. I will not follow moving goalposts.
I didn’t see a game, never mentioned Rust, and wasn’t trying to be persuaded. I just noted that Simula invented OOP with Smalltalk and C++ popularizing it from different angles. It seems what defines OOP would be in those, esp Simula or Smalltalk. Anyone redefining it from there would be likely moving goal posts since those two distilled its essence in powerful form a long time ago.
And then much conversation followed by all kinds of people who didn’t invent anything like that but copied parts of their concepts. Much debate focuses on those successors when the pioneers might clear some things up.
You’re commenting in a comment chain about whether Rust is an OO language or not, where I have taken the stance that it isn’t. Each post (yours is the third) asked for an argument based on a different standard. I was willing to do that twice. Not a third time.
Also, I hardly think that judging OO today with anything other than Simula, Smalltalk, or C++ is moving goalposts. If it is, the goalposts moved so slowly that we’ve all had time to adjust. I have never in my life met a Simula or Smalltalk programmer. The ability to even find people who understand OOP as it was originally done in these languages is limited. It’s simply not a reasonable standard when discussing what OOP is commonly understood to be today.
It seems what defines OOP would be in those, esp Simula or Smalltalk
Maybe in some discussion, but the OOP 99% of developers are using today is really disjoint, from Smalltalk at least. Java, C#, and C++ share very little in common with it. I can’t speak for Simula.
What you’ve said about “open recursion” is really interesting. Do you have any links to really useful applications of open recursion in terms of object hierarchies?
Thanks! I’m not quite sure what you mean by that. Are you asking for examples of where open recursion is required / used in OOP?
Well, first let’s clarify what’s meant by “recursion” here. In this context, “recursion” means that the methods of a class are visible to each other; they can see each other’s names. So, this works:
#!/usr/bin/env ruby
class Counter
def initialize
@value = 0
end
def increment
set(get + 1)
end
def set(value)
@value = value
end
def get
@value
end
end
In this example, increment can see the set and get functions without them having to be in a particular order, and without any special care or effort on the part of the programmer. This is “recursion” as it is meant in the term “open recursion.”
So what about the “open” part? Here “open” means “late-bound,” or put another way “including both functions already defined, and functions which will be defined in subclasses introduced later.” Here’s what that looks like:
#!/usr/bin/env ruby
class Counter
def initialize(value: 0)
@value = value
end
def increment
set(get + 1)
end
def set(value)
@value = value
end
def get
@value
end
end
class LoggingCounter < Counter
def get
puts "getting #{@value}!"
@value
end
def set(value)
puts "setting #{value}!"
@value = value
end
end
In this example, LoggingCounter inherits from Counter, and redefines get and set to print what they’re doing. If you create an instance of LoggingCounter and call increment on it, you’ll see that it does indeed call the get and set defined in LoggingCounter and not the get and set defined in Counter. This is “open recursion” because the implicit “this” (or “self”) being used to determine the object on which to find the get and set methods is late-bound. Rather than being determined statically, it is determined dynamically. At runtime, the language sees that the object on which the increment method is being called has defined get and set, and it calls those get and set functions, rather than the get and set functions from the original class in which increment was defined.
That’s open recursion!
Does Rust support open recursion via traits? IIRC, in Haskell, open recursion is the one situation where the dispatch table of a typeclass cannot be compiled away, if a compiler were so inclined. Although, I believe open recursion shows up differently in Haskell than in the standard OOP example due to the lack of a type hierarchy, but I don’t have an example handy.
Are you asking for examples of where open recursion is required / used in OOP?
Yes, exactly. Your example in Ruby is a very clear example of how subclass methods can be called by superclass methods. It’s hard for me to imagine doing this in a language without implementation inheritance. In fact I think I now understand what the point of implementation inheritance is! For the last approximately 10 years I’ve believed it was useless. On the other hand, I basically never use implementation inheritance, which suggests to me that open recursion really isn’t that necessary as a program design tool.
So are there designs where open recursion (and therefore implementation inheritance) is critically needed? I can’t think of any off the top of my head. Perhaps something to do with creating new widgets in an GUI system?
I’m not sure it’s critically needed, but I’ve seen implementation inheritance commonly used in statistical machine learning libraries, at least those of a certain peak-OOP era. There are often variations of a basic method that only differ in a few respects (e.g. a different sampling method), so one implementation technique is to have these variants inherit a base implementation and then override the one or two methods that differ for that variant.
Obviously there are non-OO approaches to solving that problem as well. The classic C approach would be to just have one big implementation with a bunch of configuration flags (R packages also often use this approach). An FP approach might be to write a generic version of the method parameterized by functions passed as parameters, so e.g. passing in a customized sampling function as a parameter, overriding the default, plays the same role as overriding the sampling method would in the OO version.
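The FP approach in that last sentence can be sketched in C, with a function pointer standing in for the overridable sampling method (all names here are made up for illustration; a real ML library would be far more involved):

```c
#include <stddef.h>

/* The customizable step: how to draw the i-th sample from the data. */
typedef double (*sampler_fn)(const double *data, size_t n, size_t i);

/* Default "sampling": always take the first element. */
static double take_first(const double *data, size_t n, size_t i)
{
    (void)n; (void)i;
    return data[0];
}

/* A variant: cycle through the data by index. */
static double take_ith(const double *data, size_t n, size_t i)
{
    return data[i % n];
}

/* The shared algorithm, parameterized by its sampling step: average
 * k samples drawn by whatever sampler the caller passes in. */
static double run(const double *data, size_t n, size_t k, sampler_fn sample)
{
    double sum = 0.0;
    for (size_t i = 0; i < k; i++)
        sum += sample(data, n, i);
    return sum / (double)k;
}
```

Passing take_ith instead of take_first plays the same role here as a subclass overriding the sampling method would in the OO version.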
When I wrote CL for a living (circa 2006-2014), it seemed like the community leaned more toward an OO style, in the sense of using lots of objects and mutable state (of course lambdas were also heavily used). There was a sense that this was more pragmatic and modern, and purely-functional approaches were dated or overly precious. I’ve heard that this has changed a bit now as FP has grown more popular in general. (It’s also possible that my perceptions were colored by the codebase I worked on, which was already old in 2006 and was heavily OO.)
It is true that OO style is more common in CL, but I’ve never seen functional approaches being referred to as dated. Personally I think the reason is that, although CL is multi-paradigm, it is really easy to write in an imperative style with small functions.
But there are a lot of examples of FP in CL. For example a parser combinator library, smug, is written using monadic parsers. Scott McKay, of Symbolics fame, is vocal about writing CL in a functional style. He has written Fset, a functional data structure library, and a generalization of map designed to replace the use of loop. He even has a small extension that moves the local functions after the body, which makes them add less noise to the function definition.
You can look at SICL’s Loop implementation to see CLOS being combined with parser combinators. The parsers build the AST and generic functions emit the code.
The antagonistic perspective of FP vs OO is detrimental: why not learn from both? Recently I was reading up on parser combinators and found the monadic variant. Wadler’s “How to replace failure by a list of successes” did a good job explaining the advantages of using a parser monad, and how monads (which I now think of as design patterns derived from math) can help the everyday programmer. I wouldn’t jump through hoops when a setf does the job, and I wouldn’t write tons of mix-ins when I can just compose small functions.
I think that’s been true for most of CL’s existence… the people who wanted more of an FP style (mainly immutable data, mainly recursion for control flow, etc.) went to Scheme, while CL was a standardization of the “industrial” dialects of Lisp that didn’t have those goals. Although I believe in Lispland only code that is heavily CLOS based would really be considered “OO”, and there’s plenty of CL code that is neither in an FP style nor CLOS based.
I like the tutorial very much, but cannot lose the feeling that at the point of integrating Rust, it’s a huge matter of sneaking around its complexities.
The author seems to be very invested and integrated into Rust development, and I personally lost it when he casually added this “panic-strategy” line to the JSON file, or where he added the few magic lines to his root source file (which even make #pragmas look good) in the article “Set Up Rust”. How are you supposed to come up with this unless you already know that, e.g., Rust unwinds on panic, and can deduce it from the chunk of compiler barf slammed on your screen? And this is just an excerpt of quite a few esoteric appearances in this set of articles.
I wouldn’t be able to follow this tutorial, build my own OS and integrate Rust, without fearing that I somehow forgot to disable this or that feature or implication, which could bring everything down once I’m using a certain feature.
Surely the Rust Evangelism Strike Force will teach me right and tell me that knowing about these inner workings is common knowledge, so please forgive me my sins!
I suppose my question is what the alternative is. Low-level programming like this is the realm of arcane magic and strange incantations. Rust, providing the safety mechanisms it does, has to have some facility for turning those mechanisms off, and there has to be careful consideration of the interaction of those mechanisms and the low-level code that circumvents them.
If the question is documentation, there’s little to say other than that things are getting better, and Rust has only been post-1.0 for a little over two years now. Give it time, and there will be more and better guides and documentation to help navigate the world of low-level programming in Rust. Personally, my hope is that someone publishes a hard-copy book on the topic, with a particular focus on the language of Rust and its use and tricks in low-level programming, and less on the development of a particular system as is seen here.
Tbf, knowing that the stack unwinds on a panic is well-known in Rust. If you think there are any fewer corners in C++…
The tutorial being a bit old, this isn’t even the case anymore necessarily. Rust can be compiled to abort on panic instead.
While I don’t fully agree with the parent’s angle, they are right in that Rust is a complex language that has ramp-up time to learn.
The /r/rust discussion has a pretty thorough analysis of the points made here.
Are they using, or planning to use, the Rust Language Server and/or Racer completion?
edit: I’m a fool; they have an entire section on the FAQ dedicated to this. They plan to not use either RLS or Racer.
Yeah, IntelliJ does its own thing. The tooling they use for every language is created in-house.
As soon as something other than a print statement is imported, you can be pretty sure that monads won’t be explained simply.
A monad is a type constructor m for which the following functions may be implemented:
(>>=) :: forall a b. m a -> (a -> m b) -> m b
return :: a -> m a
Subject to the following rules:
-- Rule 1
return a >>= k = k a
-- Rule 2
m >>= return = m
-- Rule 3
m >>= (\x -> k x >>= h) = (m >>= k) >>= h
That’s it. That is what a monad is (there are equivalent definitions that some people prefer over this one, but all of them give the same result). Monads get some special treatment in Haskell (do notation requires a monad), but other than that there’s nothing magical about them. Monads aren’t magical things. “Monad” is a name for a common pattern: a type whose definition permits the functions defined above.
What I’d like is an institution that would take a traditional baccalaureate in computer science and all its prerequisites, and then offer hand-tailored programs that adapt to schedules, learning disabilities, and any other classical roadblock to higher education.
Basically just a guided way for self-learners to attain higher learning creds their own way would do.
I think that’s basically community college. They usually offer night classes, flex schedules, etc. I’ve heard some horror stories about “C++ 101” at some places, but it could be an option for people to consider. At least they don’t pretend to jam you through in 12 weeks.
I attended a community college part-time for years. In some ways it was great, in others it was awful. I never finished my degree – I was never able to finish the online pre-calculus course that was a requirement to take calculus, which was a requirement to graduate with an A.A.S.
I took two semesters of Java, but barely got to the concepts of stacks and queues, and didn’t even touch on trees or recursion. “You don’t need that stuff,” my teacher said. And it was probably true. Most of my classmates never even used Java again – they went on to write Visual Basic at local businesses if they did any programming at all.
I also took classes in Visual Basic, Microsoft Access, and basic networking. Those were all at least enjoyable, and the networking one actually taught me stuff I still use! I also took a terrible C# class – the person who was supposed to teach it quit the week before class started, and our new teacher had never written a line of C# in his life.
The greatest thing about community college was that I could get student loans and grants and spend time learning from books and the internet and meetups while doing the bare minimum schoolwork. If I wasn’t someone with the privilege and ability to learn that way, I wouldn’t have been any more prepared than your average 3-month bootcamper. My programming career only got started because I got into Recurse Center. :/ So, while I’m grateful for my community college experience, it’s hard for me to recommend it as a solution to this mess.
Thanks, that matches my understanding although I don’t have the personal experience. I think CC tends to focus on the wrong thing (c++ “for games” seems to be a particular topic) without teaching principles, but there’s still some benefit. You can at least learn that programs are just text, words in a file, and you can make the computer do what you want. Maybe this takes place in a Java setting, but hopefully at least some of the “computers are tools” concept rubs off and you can learn to write VB or ruby or whatever.
There’s the start of maybe a trend towards making community college free as well, plus many more places where it’s not free, but still quite cheap.
Another plus of community college is that it’s more integrated, even if loosely, with the rest of the university system, which leaves options open for deciding later what you really want to do. You can usually transfer CC credits to a 4-year university if you decide later you want a traditional CS degree (or even continue for a masters or PhD), without having to commit up-front to that decision.
I’m from Quebec, so we don’t really have the concept of community college, but I went through most of a cegep (our sort of pre-uni college) degree in computer programming, and it was way too easy and most of what I learned I did on my own. If I want to go do the trad comp sci uni program I need to do at least two semesters of math and science classes as prerequisites. Now I’ve tried doing this, but I don’t know if it’s a mix of wanting to work and raving ADHD, but I could never stay in those classes for more than two months.
Lots of schools offer a non-traditional CS track, at least for graduate programs.
There are problems, though. One is that the SV elites have decided that college is a waste of time unless you’re at Stanford or MIT (and maybe even then). Another is that college is expensive. Boot camps are around $10,000, which is about 1/5 of what even the cheapest state schools cost. When you factor in the opportunity cost of four years compared with 12 weeks, the difference is much larger. Finally, many boot camp students already have degrees, and some schools are hesitant to enroll students seeking second bachelor’s degrees. I’m not entirely sure why, but I know this from personal experience.
As an aside, IMHO the article hits the nail on the head when it points out that unions (and I would add tenure) are really what SV hates about our education system.
Just to quibble with your numbers: CSU San Bernardino, the school I attended, costs roughly $2,000 per quarter (the specific amount per person may vary due to class-specific fees). With three regular quarters of attendance each year (most people do not do summer classes), that’s roughly $6,000/year. At an average graduation period of five years, that’s $30,000, which is not nothing, but would put the ratio at 1/3 against a $10,000 bootcamp, not 1/5.
Wow, that’s incredibly reasonable! Are you sure that includes fees and not just tuition? Are you sure it hasn’t gone up since you graduated? If so, it’s nice to see that California has held the line on costs, at least at some schools.
It has gone up slightly, to I believe about $2,200 (my fiancée is about to graduate from the same school; I can ask her for the exact amount if you’d like to know). I should note that even these amounts are hotly contested and opposed for a number of reasons, including:
I could go on. I just wanted to make clear that although the tuition for the CSU campuses remains low relative to many other four year degree programs, the current state of things is not perfect, and there are many avenues for improvement.
That’s very interesting. I knew that CSU’s mission was accessibility, but I wasn’t aware of the details. It’s really sad to see the directions things are heading.
It is. While some of the things I laid out above are specific to the CSU, they are part of a larger trend across higher education of rising costs, bloated administrations, more adjunct positions, fewer tenure-track positions, and less public investment.
If you look around in the literature, there are a lot of competing explanations for why all of this is happening. My sense is that it’s a confluence of causes, including:
I could go on. Suffice to say that this is a complicated issue, and that these trends are happening for a lot of reasons. I do hope that they correct over time, and that some of today’s worrying trends are stymied sooner rather than later.
I know I’ve already commented, but I have a separate point which deserves its own post:
California’s three-tier higher education system is really interesting, and the way it works today is quite a bit different from how it was initially envisioned.
First, you have the community colleges. These are intended to be 2-year degree institutions providing low-cost, flexible education opportunities. They are very community focused, and often target people whose life circumstances would otherwise keep them from an education. For example, a person who works a 40-hour/week job and can only attend classes at night would not be able to get the education they want at a 4-year institution, but often can at a community college.
Second, you have the California State Universities. These were originally intended to be 4-year undergraduate-only institutions with a focus on providing low-cost (though higher than the community colleges) education. A number of CSU campuses actually began as trade schools (CSUSB began as a school for teachers), and the focus on providing concrete professional-oriented education remains. The CSU is where someone who wants a good degree-requiring career but can’t afford a larger school can go to get a solid education at a reasonable price.
Third, you have the University of California system. These are the universities which grant doctorates, Master’s degrees, and undergraduate degrees. They are more expensive, and the focus is on research and progression of students (generally) into either academia or professional academic positions.
Today, each of these systems has their problems:
The community colleges are extremely overburdened. It turns out a lot of people like the idea of a cheap, flexible, accessible education, and the system simply can’t keep up. Many students go to community college with the intention of covering classes which would be more expensive to take at a CSU and transferring when they have their Associates degree. Many of them end up staying at the community colleges for 4 or more years, unable to get the classes they need to graduate, and often unable to transfer and retain their credits when they do. The general consensus I have seen for these students is that the tradeoff isn’t worth it. Just among my friend group many end up choosing alternative options, including film school or joining the military.
I outlined issues with the CSU in the previous post, and won’t spell them out again here.
The UCs have the same problems as the CSU, really. They’re getting more expensive, and the financial tradeoffs for students don’t necessarily make sense anymore. I am less familiar with the UC system, and so I will leave it here. If you’d like to know more, I can reach out to friends who attended different UC campuses and get their take on the problems currently facing the system.
I want to reiterate a point which I think is important: the fact that these systems are having problems does not mean that they are broken and irredeemable, that they don’t still improve the lives of a great many students, or that they are not worthwhile public investments. It just means that there are things which the public, the students, the employees, the administrators, and the lawmakers have to grapple with in determining their own future in relation to these systems, and the future of the systems themselves.
I spent a few years as a researcher at UC Santa Cruz, and my impression is as you say, the problems are very similar to those at the CSUs. The UCs have traditionally cost more than the CSUs (though still historically quite cheap) because of some mixture of: 1) more expensive facilities, 2) internationally known professors with higher salaries, and 3) lower teaching loads for the professors, since they were expected to also maintain significant research programs. Although at the top-tier UCs (e.g. Berkeley) this was partly offset by the profs bringing in big research grants to cover some of the facilities and salaries.
Today, professors’ salaries are a smaller and smaller percentage of the total though, and the biggest cost increases are capital expenditures (so #1 is still true) and large increases in the number of highly paid administrators. In addition I believe state funding has fallen even more sharply than at the CSUs, because as “flagship” schools the UCs are seen as potential cash cows, able to attract wealthy students from abroad and out-of-state, who pay the higher out-of-state tuition rates.
As far as state funding cuts go, I compiled some numbers 5 years ago on total and per-student funding, inflation-adjusted, 1965–2012, for the UC system as a whole. The headline figure is that in 1965 the state kicked in $24,000/student (2012 dollars), and in 2012 they kicked in $12,800/student. The peak was in the 1980s, when it kicked in around $30k/student.
Thanks for all the info! I’m glad that my understanding is in line with what other people have found.
One interesting point I didn’t make in my other posts is that the CSU campuses are beginning to offer Master’s and Doctorate degrees (CSUSB now offers a Doctorate of Education program, and a number of Master’s degree programs, for example). This sort of shift helps to blur the line between the CSU and UC systems, although of course that blurring will take quite some time to shift public perception.
Different sort of graph: rather than a graph of a term (Bohm tree) I’m referring to the graph of the function that a lambda term represents. Bohm trees are another way to give a nice semantics to the lambda calculus though.
For anyone unfamiliar, Eric Meyer is a giant in the history of web standards and web development. He’s written a ton of books on web standards (mostly CSS) and his blog is full of important writing and advocacy for web standards.
I’m getting a bit of a mixed message on the merit-based judgment bits. It’s mentioned twice that GitHub is a meritocracy, but if all of these accomplishments were truly widely praised as stated, wouldn’t they be rewarded?
I realize we’re only getting one side of the story here. Maybe GitHub really is a cartoon villain and hates having non-white non-male employees. I don’t know. I’d be interested in what their side is, though.
Meritocracies were created as a joke concept. The belief that merit-based judgement is possible is a mistake; it doesn’t actually work (in the same way that humans can’t be unbiased).
I see people say this, but I have no idea exactly what they mean. So, stripping away the loaded terms, can you explain?
Do you believe that everyone in a project contributes to its success equally, or do you think that there are no ways to tell who is contributing to a project?
The term itself originated from a satirical work, if that helps. :)
I understand that, but the statement that “merit based judgement is impossible” fascinates me. It seems to me that implies all promotions are mistakes, for example, because it’s impossible to judge that someone deserves a promotion.
It’s one thing to say “I don’t like what meritocracy is associated with”, but it’s something very different to claim “merit based judgement is impossible”. I’m curious to hear exactly what that implies, in the view of krainboltgreene, because it can lead down some interesting rabbit holes.
Basically: people are really, really bad at being unbiased. We are filled with bias. We make all sorts of decisions under the influence of bias. In an ideal world, we recognize the bias, and try to set out procedures and practices that account for it, and limits its influence. In a “meritocracy” you pretend bias doesn’t exist, because you’re just “looking at merit.” Never considering that your ability to make an accurate assessment of “merit” is completely undone by bias.
so you use procedure and practices in order to better identify merit? it’s only a meritocracy if you’re incorrect about who has merit?
i also don’t buy the claim that it’s completely undone at all
Generally, self-describing as a “meritocracy” bespeaks a belief that you’ve weeded out all causes of bias, which suggests you’ve done very little to weed out bias, as in trying you’d become a lot more humble about how hard it is and how imperfect any approximation is.
More generally, any assurance that one meets a reasonable baseline standard is often a negative indicator. “You can trust me, I’m not a conman!” Not frequently said by actually honest people.
So, in terms of specific implications: Does that mean that it’s impossible for a process to exist that will allow an organization to decide on fair promotions?
Ok, if there is a way to approximate it well, doesn’t that imply that it is possible to reduce bias, and asymptotically approach accurate decisions based on measurable attributes?
To be honest, I was expecting pushing the idea of “They’ve been sitting in that chair for 3 years, therefore it’s time for a seniority raise, because we can’t know better” approach, similar to how some unions handle it. Basically, why play when you’ve already lost?
Yeah, it is probably possible to reduce bias effectively. Meritocracies make the mistake of assuming this is easy, or that it can be done without trying at all. This boils down to making decisions based on whatever an individual person thinks “merit” is, and never evaluating whether that definition is correct.
That’s a claim I agree with – being accurate and reducing bias is a continuous, difficult, and iterative process, and in my experience many people never bother to set up the feedback loops and put the effort into recording and reconciling predictions in the way needed to manage their own bias. I haven’t really interacted with any organizations that describe themselves as a ‘meritocracy’, so I can’t comment on what it means in practice.
The initial statement, though, was that merit-based judgment is not possible. That’s a far more philosophically interesting claim, at least to me. It’s not often I get to (edit:) explore a nihilist mindset like that.
What about that belief is nihilist?
Also, I really dislike having a conversation described as “probing a mindset.”
Considering that merit is, in theory, a measure of a person’s impact, it implies that a person’s impact is unknowable. That strikes me as being fairly far down the road to nihilism.
Also, would you prefer the term ‘explore’?
Edit: Now that I think about it, you’re right. It’s less of a nihilist belief system and more of a solipsistic belief system.
so what does work? meritocracy still sounds pretty nice to me even though i know it’s a dirty word now
Meritocracy is impossible. See my comment here.
Meritocracy, if it was even possible, is also bad for producing great things. See https://hbr.org/product/rethinking-rewards/an/93610-PDF-ENG
My point is that there is a claim of meritocracy, but then the author states that they were not rewarded based upon merit. It’s confusing at best.
Are you sure you worded that comment correctly?
“claim of meritocracy” but author refutes and says “they were not rewarded based upon merit”.
I don’t see how that’s confusing.
In a return to its meritocratic roots, the company has decided to move forward with a merit-based stock option program despite criticism from employees who tried to point out its inherent unfairness.
The author states it was a meritocracy, then claims their work was widely praised yet not rewarded, and finishes by saying it is merit-based. Was it only not merit-based in the year this person was there? It’s confusing.
Even if meritocracies were possible, they would be bad for another reason. https://hbr.org/product/rethinking-rewards/an/93610-PDF-ENG
I suppose it’s possible to believe girls are capable of coding, and then, to prove that’s true, subject all their code to hazing-like code reviews. “See? I knew you could code!” So you’d simultaneously recruit people and then grind them up. And not grinding them up would be favoritism, of course. It’s only fair to give them the chance to prove how tough they are.
Well, not directly. But if you make a culture that pushes people out who don’t fit the mold, you reduce your ability to hire and retain people you would otherwise be able to hire and keep. Big successful businesses by and large try to maintain a welcoming and professional work environment because doing so makes them more competitive and capable.
But if you make a culture that pushes people out who don’t fit the mold, you reduce your ability to hire and retain people you would otherwise be able to hire and keep.
Then maybe don’t make a culture at all.
I mean, it certainly sounds like they had a culture there. Just a different culture than one often sees in tech today. Company culture happens automatically. The question is whether the leadership will take an active and conscientious role in shaping it, or simply let things develop as they may.
A culture of political censorship and constant injection of politics into non-political environments also reduces your ability to hire and keep. After reading this article (and after a friend sent me http://hintjens.com/blog:111 ), I would not consider working for Github. In fact, I’m now extremely skeptical of GitHub’s long-term prospects as a useful service. GitHub is clearly very far along in the Silicon Valley ultra-political middle management lytic cycle, and it might be terminal.
Well, to take something positive out of it, I didn’t know that @coraline ’s team was responsible for first-time contributor badges and the new repo invite email process! Thanks!
Yeah, it sounds like she got a number of good changes implemented while she was there! I’m sorry her experience was so terrible. It’d be great to still have her there, pushing for this sort of change.
Reading between the lines a bit, it sounds like GitHub leadership may have made the move to hire her without really getting buy in from the company about the need for a greater degree of empathy and consideration for marginalized groups. So she comes in having gotten a great song and dance about how they want to change, but the rank and file are hostile.
I see it both ways:
https://lobste.rs/s/js3pbv/antisocial_coding_my_year_at_github#c_h8znxo
They were probably hostile to any changes due to whatever lack of empathy or consideration they had. Maybe some deliberate discrimination, too. Who knows, but I assume some to a lot given SV’s demographics. ;) Then, they see a possibly-recognizable political extremist who attacks or censors anyone disagreeing with her, on a campaign to benefit everyone but them. She also loads the team up with people as unlike them as possible who all agree people like them are the problem. This combo is a powder keg for political infighting.
This story has enough villains who think they’re heroes to be a lot worse than it was. I’m glad it stayed as civil as it did with some benefits coming out of it. That said, your comment totally neglects the aggressive politics and censorship she pushes in your description of why there would be resistance. I remember one story submitted to Lobsters that thankfully didn’t get upvotes where the author talks about having “two, token, white men.”
https://fusion.kinja.com/what-github-did-to-kill-its-trolls-1793864044
I looked at that thinking, “Really? They say they’re about inclusiveness and equality but just said the word token followed by a race with no serious consequences?” Maybe it was a joke. I doubt it given the article similarly acted like Coraline’s opponents appeared out of thin air and only wanted to stop good deeds. So, again, her political aggression w/ censorship goals and apparently anti-white-male attitude should be considered when assessing response in an organization with lots of people who might not cooperate with such things.
Maybe it was a joke.
Token white men is an obvious joke. Yes, it is not a joke when it refers to minorities but it is a joke when it refers to white men. Yes, in an imaginary world where all races are equal this wouldn’t be a joke. Perhaps the joke is about the fact that we do not live in that imaginary world. Perhaps getting worked up about the diction by an entirely different writer in a one-year old article is completely irrelevant to this thread and has no bearing on the author’s supposed hidden political agenda and anti-white-male attitude.
I honestly do not understand why anybody would give Github the benefit of the doubt. How many awful things are we going to hear about Github, written under the bylines of people who are risking their careers and their reputations – authors who know that anonymous commenters on threads like these will drag their names through the mud a million times over, dredging up old posts and using shitty three-letter acronyms invented by angry white men – before we start believing that they might contain a kernel of truth? A comment on Lobsters asking Coraline to be more like Martin Luther King, Jr. has 54 upvotes right now; at least 54 people have bought into the idea that if you are less morally upstanding than Martin Luther King, Jr. then you do not deserve to publish an article on your own blog detailing your own experiences. This is a two-billion dollar company that does not need your help defending it. To the people who are writing screeds against the author here, this single blog post is not some sort of silver-bullet shot fired at your culture: You have already won. You have the money, the executives, the jobs, the social networks, the access. You are winning. Congratulations. Jesus fucking Christ.
“Yes, it is not a joke when it refers to minorities but it is a joke when it refers to white men.”
That’s exactly the kind of structural, reverse racism I’ve had to deal with. Also a double standard. They’re in a position of power, they’re minimizing any white/male people as much as possible, they write up an article about what they’re doing, and mention token, white males as a joke. Discrimination ain’t funny. Calling them out on hypocrisy is certainly worth my time.
The bigger part of that article wasn’t the joke so much as that it’s a bunch of people with SJW politics misleading readers into thinking they’re taking hate just because they’re minorities or trying to get people to play nice. They leave out political domination, launching mobs against Github projects, etc. It supports their false narrative where they’re the victims of attackers, rather than the attackers themselves meeting both political resistance and simple self-defense by those they’re targeting or trying to control.
I think there’s a very thin line that needs to be respected when discussing these topics. This same argument has frequently been used to diminish the arguments of minorities who rightfully speak out against prejudice. I don’t know enough about this particular instance to comment about it, but we should be careful about the vocabulary that we choose in these discussions, because some phrases have unfortunate implications.
In particular, “SJW” has frequently been used as a catch-all phrase to disparage people who speak out against racism/sexism. I think if we’re going to discuss this topic on this site, we should have a better understanding of the connotations our words carry.
A lot of this same argument, with the same sort of vocabulary, was invoked by more extreme members of the GamerGate community, who did a massive amount of damage to minorities in the gaming industry. Seeing you use it here damages the credibility of your position for me.
“This same argument has frequently been used to diminish the arguments of minorities who rightfully speak out against prejudice. “
Which doesn’t really matter if it was coming from obvious racists or sexists ignoring data or selectively using it to push their agenda. They can be called out on those grounds. The SJW’s actually like that, though, since it lets them just associate such people with any use of the term and then ask people stop using it. You’re doing that as well but maybe for more honest reasons.
“I think if we’re going discuss this topic on this site, we should have a better understanding of the connotations our words carry.”
I’ve been very specific in at least two comments about what kinds of people the term SJW refers to. I’ve also linked to examples of their behavior involving forcing a minority-within-a-minority view on people, using sophist tactics, censoring opponents, and going after their jobs or projects. These are not people just fighting racism or sexism that provably exists. I’m one of those people. I obviously wouldn’t dismiss just that with some BS label.
“A lot of this same argument, with the same sort of vocabulary, was invoked by more extreme members of the GamerGate community,”
It’s funny you mention that because it was my first realization these people existed in some big trend. I’ve studied and countered disinformation tactics for quite some time but not known about assaults on media, forums, and so on. The GamerGate reporting I read about in gamer-oriented media was extremely one-sided only showing what the feminists/activists said. I thought it was about minorities expressing some opinions, a relationship breakup w/ revenge porn, and the examples from gamers were all pure hate mail that apparently came out of nowhere. Some smart folks I know sent me a video that blew my mind:
https://www.youtube.com/watch?v=GXZY6D2hFdo&app=desktop
In this video, new information is introduced that I didn’t see in half a dozen articles I read. The author mentions at least two women involved were claiming gamers were unnecessarily violence-loving, sexist, and racist. Whether true or false, that was an attack which has predictable consequences for anyone who knows gamers. One had an academic paper saying how games should be done in a totally different way or developers + players were just evil. On top of it, the females developing games were doing some of the same behavior on the list of No No’s to make money. Interestingly, they also ignore that the supply side responds to market demand, the games that are like their list don’t sell, the games doing the complaints do, and that demand side includes a huge chunk of women. So, they were claiming bad things about all gamers, ignoring women gamers’ views on the matter, and hypocritically doing what they said shouldn’t happen for money. And then the hate rolled in.
Quite justified, although obviously not supporting the extremist stuff. The regular gripes, mockery and so on make sense with that backdrop. The thing that shocked me was that it wasn’t reported in the articles I read from publications for gamers. Somehow, the gamers’ side of things, with very legitimate counterpoints, was censored. Why was that? Why were these people not mentioning their negative claims about gamers or how they did the same things for money? Why were they only mentioning how they tried to do some nice things about (social justice stuff here) with the gamers just doing horrible things because they’re evil males and stuff who were unprovoked? Then someone told me they were SJW’s, with this being their default tactic of looking like a victim, making news afraid to report the whole thing, and causing big shitstorms. More research found similar attacks on many social issues where one side made a decree then declared holy war on enemies, always claiming harassment, asking that foes be censored, and so on.
If you thought GamerGate was evidence feminists were treated unfairly, then it damages your credibility for me because you may have never known what those select few did to gamers, you may not know why the information was censored at media level, and you would’ve been griping about their victims while supporting the original perpetrators. I can’t blame you as I did it early on not knowing anything about how these deceptive, manipulating “activists” operate. Thanks for reminding me about my wake up call on the subject, though. :)
The last two paragraphs are really important.
Be knowledgeable about what’s actually in the C and C++ standards since these are what compiler writers are going by. Avoid repeating tired maxims like “C is a portable assembly language” and “trust the programmer.”
Unfortunately, C and C++ are mostly taught the old way, as if programming in them isn’t like walking in a minefield. Nor have the books about C and C++ caught up with the current reality. These things must change.
There’s no excuse for invoking UB anymore. Either you have a use case that absolutely requires the ridiculous features of C and C++ (in which case you better be damn sure you’re not stepping on a mine), or you’re performing malpractice by using an inappropriate language.
I think a lot of people don’t understand just how easy it is to write code with undefined behavior in it. Because C and C++ programmers often aren’t taught about it when learning the language, when they do hear about it, they think “oh, that’s something that other people’s code has to deal with. I write good code, and I’m sure this doesn’t affect me. If it did, I would have been taught about it when I learned the language.”
This is why I value John Regehr’s work so much. He’s been beating the drum of UB as a danger to take seriously for a while, and I’m really happy to see his stuff getting more traction in the programming community.
I’m going to be teaching a class on C programming soon, and I’ll absolutely put UB front and center in it; maybe with people like John Regehr pushing for better education and better tooling, more people will do the same.
I think anyone trying to learn the language properly needs, at a very minimum, the (draft) standards. There are far too many books, tutorials, etc. that just gloss over important details or make grossly misleading simplifications.
Then read (good) man pages for library functions. Read code. Write code. Lots of code. Keep an eye out on some projects and watch it as bugs get fixed. Try to understand the bug and the fix.
A book might or might not help. I’ve only read one book on C – Expert C Programming. It was an ok read but I didn’t learn too much from it. A newcomer might find it more useful however.
EDIT: taossa ch6 is also a great read for anyone doing C. http://ptgmedia.pearsoncmg.com/images/0321444426/samplechapter/Dowd_ch06.pdf
Alternative link: https://trailofbits.github.io/ctf/vulnerabilities/references/Dowd_ch06.pdf
I could agree with that, but there’s a lot of code written 20 years ago or more. And it’s not so easy to simply use an old compiler. Want to run arm64? Get a new compiler. And the arm64 platform would be rather less interesting if you simply banned all old software.
That’s a great point about old software being used on new hardware. In such cases, would it be better to use a compiler with defined (and safe) behavior in all UB? (Presumably the performance of the new hardware is better than the old hardware it was originally written for, so you could afford things like symbolic addresses, bounds checking, “gcc-x86-like” overflow, etc.)
It still seems like it would be better to fix the old software so it doesn’t invoke UB, but we all agree how hard that is.
There’s quite a lot of UB in C, much of it designed so that different platforms can each use their “native” support for various things like addressing modes, alignment, etc. If you really wanted to define all behavior to be essentially equivalent to what gcc on x86 traditionally does, this would amount to a pretty significant runtime layer on non-x86 platforms emulating x86-like behavior. At which point, why even use C rather than something that actually has a portable runtime and/or bytecode defined from the start? I guess purely for legacy software it’d make sense.
That’s a great point about old software being used on new hardware. In such cases, would it be better to use a compiler with defined (and safe) behavior in all UB?
There are so many people proposing this that eventually one might start to believe it is actually a thing.
But I have not seen such a compiler and I doubt I ever will.
[Comment removed by author]
Apparently, if the value begins with a digit, it attempts to parse it as a UID; if that fails, it logs and ignores the directive (thus defaulting to running the unit as root). If the value doesn’t begin with a digit, it’s treated as a username, and if the lookup fails, the whole unit fails to start.
The scope of vulnerability here is fairly limited (you’d need the ability to provide arbitrary units to the PID-1 systemd instance, which you could use to do much worse things than non-silently and obscurely run the unit as root), but the design is so bad it makes my eyes cross. Syntactically-correct-but-invalid is more fatal than syntactically-incorrect? Soldiering on despite detected parse errors? It’s this sort of thing more than any particular bug or misfeature (not that there aren’t plenty of the latter) that triggers my ire for systemd. Why are these people designing software at all, let alone essential system software?
An example of somewhere it would matter a lot: you, as the sysadmin, have a service you want to run as a specific user. That service has a remote code execution vulnerability. This bug can cause that service to silently be run as root instead of as a restricted user.
Yeah, it may not be much on its own, but anything which allows the possibility of something silently running at a higher-than-expected privilege level should be considered a security concern.
I never programmed in OCaml, but it has this match/option stuff like Rust’s match/Option/Some/None. So this is where it came from (or rather from ML?). Looks really neat!
Rust actually pulls from the ML world in a number of ways. Also, the original Rust compiler was written in OCaml!
There’s a lot of complaining in that post, but ultimately, the compilers aren’t doing anything “wrong” or even technically unexpected. “Undefined behavior” is just that, undefined.
Early versions of GCC took that in classic hacker humor fashion, even. The C89 standard defines the “#pragma” directive as having “undefined behavior”. Early GCC versions would, upon encountering an unknown “#pragma”, launch NetHack or Rogue.
I’m sure we all agree that this compiler behaviour conforms to the letter of the C standard. That doesn’t make it productive or even, I would say, “right”.
In fact, they are doing something wrong. As Jones noted, the standard does not prohibit error flagging or expected machine-dependent behavior on UB. The compiler developers made a design choice to assume 0=1 on UB whenever it lets them perform “optimizations”. But that’s a stupid design decision. I think the attitude that someone like DJB should just suck it up and waste his time trying to decipher the obscure “nuances” of the standard is ridiculous.
You’re not prohibited from compiling with -fwrapv either. Telling people about this option might actually help them.
Yeah, it’s the sort of thing where the compiler developers have put in the substantial amount of time necessary to really grok all the details of the C standard, and have little care for the idea that other people haven’t and won’t put in that same amount of time. More to the point, the compiler developers should have some degree of sympathy for those other people, and try to avoid making the sorts of assumptions and optimizations that will surprise anyone without the same advanced level of expertise.
To make a small analogy to role playing games, compiler developers are the rules-lawyer min-maxers, murderhoboing all over your code. And you’re looking for a good story and a fun experience, and the whole thing is off-putting when they make these sorts of aggressive optimizations, and their only defense for being like that is that you should read the standard. The only option becomes to be just like the murderhobo, and that’s not fun for a lot of people.
It’s really worth trying to read the C11 standard explanation of, for example, what type casts are permitted. Look at page 77 of (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf) for example.
I don’t think the compiler developers ever made an actual choice to “assume 0=1” or anything similarly pugnacious. Rather they make the choice to assume that constraints are not violated by the program and make use of this (together with the “undefined behaviour” allowance) to make optimisations that wouldn’t otherwise be possible. The problem is people expecting a certain behaviour from, for instance, signed integer overflow which they may have seen previously - because the compilers didn’t have such sophisticated (or “aggressive” if you insist) optimisation as they do today, but never because that behaviour was actually mandated.
Actually, if you don’t rely on signed overflow and you don’t do type punning, C is still a perfectly usable language for encryption or anything else (though there are often much better choices).
I understand what you are saying - and I feel like sometimes the compiler vendors do go too far - but I feel like complaints such as these are often either misdirected (is it the compiler or the language standard to blame?) or come from some kind of self-righteous indignation (dammit! why won’t this compiler do what I want it to? Who do these compiler developers think they are, telling me the bug is in my code?!!).
Yes, there are obscure nuances which e.g. allow type punning in some cases. But if you avoid type punning altogether - which many languages force you to do - you don’t need to understand those nuances. And if you don’t think of C as a kind of high-level assembly language, then you shouldn’t expect signed integer overflow to work - and if you do think of it as a high-level assembly, then you’re wrong.
I don’t think many C programmers were even aware of the process of adding additional UB. And obviously the standard committee itself even didn’t understand the implications of what they were doing - or else the char* hack would not have been needed.
Who do these compiler developers think they are, telling me the bug is in my code?!!
The problem is that they do not tell you. Note that using signed integers for for-loop counters is about as idiomatic as you can get in C.
I don’t think many C programmers were even aware of the process of adding additional UB
I don’t think there is such a process. What is UB now was always UB. The difference is that in the past, the behaviour tended to more closely mirror behaviour expected due to an understanding of the underlying hardware and an incorrect assumption about how the compiler operated.
Note that using signed integers for for-loop counters is about as idiomatic as you can get in C
Sure, but there’s nothing wrong with using signed integers for loop counters. The problem only comes when you write code that expects that if it keeps incrementing such a counter it will eventually jump to a negative value. That’s not idiomatic.
Also, fun fact, loop counters being idiomatically plain int is actually what motivates the compiler to rely on signed overflow not occurring. Otherwise it generates uglier code. For an example: https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759de5a7
My experiments with gcc and clang show that using an unsigned int counter in a loop improves performance a bit.
They are totally simple:
void floop(int n, float P[], float v) {
    for (int i = 0; i < n; i++) P[i] = v;
}

void ufloop(int n, float P[], float v) {
    for (unsigned int i = 0; i < (unsigned)n; i++) P[i] = v;
}
run a bunch of times and count
On gcc 7.1 with -O2 these two only differ by one opcode. Only on -O3 they start differing significantly.
On gcc 7.1 with -O2 these two only differ by one opcode.
And it isn’t even in the loop.
Only on -O3 they start differing significantly.
Yet these differences are rather insubstantial: differently named labels, swapped operands for cmp, and correspondingly swapped jump opcodes. The most substantial difference is that the code using ints does a few sign extensions. It’s worth noting that all those differences take place at the “tail” of the loop, where the few remaining floats that weren’t done by the xmm-wide copy are handled. The hot part of the loop is identical, just as in the -O2 version.
If you’re talking about the proposals of David Keaton, you need to be aware that:
Unfortunately, there is no consensus in the committee or broader community concerning uninitialized reads
In fact, there is a formal proposal here (which comes out of the Cerberus survey) which specifically suggests clarifying that the read of an uninitialized variable does not give UB:
http://www.cl.cam.ac.uk/~pes20/cerberus/n2089.html
If you’re talking about something else, other than minor clarifications of behaviour which was obviously dubious in the first place, I’m curious to hear what it is.
Note the original ANSI C standard included the following:
Undefined behavior — behavior, upon use of a nonportable or erroneous program construct, of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements. Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
This would forbid what we are now told is intrinsic to C compilation. Just as a matter of common sense, “optimizing” a = b + c; assert(a > b && a > c) to remove the assert, because it is convenient to assume undefined behavior never happens, is absurd. Or consider:
The value of a pointer that referred to an object with automatic storage duration that is no longer guaranteed to be reserved is indeterminate.
I believe that is now UB. Or consider
A pointer to an object or incomplete type may be converted to a pointer to a different object type or a different incomplete type. The resulting pointer might not be valid if it is improperly aligned for the type pointed to. It is guaranteed, however, that a pointer to an object of a given alignment may be converted to a pointer to an object of the same alignment or a less strict alignment and back again; the result shall compare equal to the original pointer.
Compare that to the total hash in C11.
As for the proposal you link to, it’s indicative of a disregard for C practice. For example, under the proposal, if you compute a checksum on a partly initialized data structure, that checksum could be unstable and unreliable. I don’t see the advantage of subjecting the BSD and Linux network stacks to this sort of nonsense.
Just as a matter of common sense, “optimizing” a = b + c; assert(a > b && a > c) to remove the assert because it is convenient to assume undefined behavior never happens is absurd
I disagree. The reason signed overflow is specified as undefined (and not, say, as some specific set of possible behaviours including e.g. wraparound or termination) is precisely to allow such an optimisation. For idiomatic use of loop counters, for instance, this can allow generation of significantly better code. Even allowing only those specific behaviours would at least make the behaviour platform dependent.
In any case the assert could only be optimised away if the compiler was certain that b > 0 and c > 0. For the programmer to have written the above and not merely be trying to assert that (b > 0 && c > 0) implies that they were familiar with bitwise representation but ignorant of C’s operator semantics.
However:
For example, the proposal is that if you compute a checksum on a partly initialized data structure, that checksum could be made unstable and unreliable
I agree that this needs a solution, and I agree that the best solution is to mandate that reading an uninitialised variable (including a member of a partly initialised structure) and then reading it again should yield the same value both times. I don’t see much value in that proposal; I linked to it merely to point out that it’s not the case that reading an uninitialised value is necessarily going to become undefined behaviour.
This would forbid what we are now told is intrinsic to C compilation.
I don’t read it that way. e.g. I think it’s fair to characterize the compiler’s behaviour in your example as ignoring the situation where b + c overflows completely, which is something the standard explicitly permits.
It’s not ignoring it - it is making the assumption that it cannot happen. Ignoring it would involve compiling the code as is, not rewriting the loop. Consider the case DJB notes, where a conforming null check was removed because the compiler assumed that if the pointer were null, the prior dereference would be UB!
Removing a redundant assert is entirely normal and reasonable behaviour for an optimizing compiler, and that particular assert is flagrantly redundant in all situations except the one in which an explicitly permissible behaviour is “ ignoring the situation completely with unpredictable results”.
Idris looks really well designed, and I think these improvements are actually quite significant. Strictness by default is a game-changer for me; apparently the records and monads are more convenient to use (and there are effects, too? Not sure how experimental they are). If Idris was self-hosted, produced good static binaries with performance comparable to OCaml, and had a package manager I would definitely give it a serious try.
Idris also has a quite buggy implementation at the moment, but like everything else you mentioned, it is a solvable problem. I think it’s a contender for a widely used industrial language in the future. Though at the moment it’s mainly used by people with pretty sophisticated FP knowledge, I think its dependent types and effect system may ultimately become something that’s easier for newcomers to understand than a lot of Haskell is.
They are pretty unapologetic about 1.0 not being industry-grade, and it is not quite the goal of the language:
Will version 1.0 be “Production Ready”?
Idris remains primarily a research tool, with the goals of exploring the possibilities of software development with dependent types, and particularly aiming to make theorem proving and verification techniques accessible to software developers in general. We’re not an Apple or a Google or [insert large software company here] so we can’t commit to supporting the language in the long term, or make any guarantees about the quality of the implementation. There’s certainly still plenty of things we’d like to do to make it better.
All that said, if you’re willing to get your hands dirty, or have any resources which you think can help us, please do get in touch!
They do give guarantees for 1.0:
Mostly, what we mean by calling a release “1.0” is that there are large parts of the language and libraries that we now consider stable, and we promise not to change them without also changing the version number appropriately. In particular, if you have a program which compiles with Idris 1.0 and its Prelude and Base libraries, you should also expect it to compile with any version 1.x. (Assuming you aren’t relying on the behaviour of a bug, at least :))
Don’t get me wrong, I believe Idris is a great language precisely because of that: they want to be primarily a research language, but provide a solid base for research happening on top of their core. They have a small team and use those resources well for one aspect of the language usage. I would highly recommend having a look at it and working with it, this is just something to be aware of.
Haskell is great because it’s where a lot of these ideas were tested and figured out. But it also has the cruft of legacy mistakes. Haskell can’t get rid of them now, but other languages can certainly learn from them.
I’m not comfortable adding tags to accommodate unjust policies. Should we add an “anti-Kim Jong-un” tag to accommodate North Koreans who don’t want to be prosecuted for viewing articles that speak against the supreme leader?
The fact that this is only relevant to people with security clearance also puts severe limits on its usefulness. If you have some level of clearance, you would be more inclined to read articles about documents that you are cleared for. So “classified” wouldn’t be enough for people with security clearance to decide whether they’re allowed to read an article. You’d really need a tag for each classification level and handling caveat (SECRET, NOFORN, etc.), which is already taken care of upstream in the headers that agencies put on classified documents.
Just because you’re cleared to handle SECRET information doesn’t mean you have the need to know any particular datum and, anyway, downloading or viewing any level of classified information on unclassified machines is not kosher for anyone – individual classification tags are probably not necessary.
But shouldn’t you be more interested in such an article, as you have potentially more context than the general public, and it’s potentially more relevant to your life?
Interest isn’t the issue. If you have a clearance and don’t have need to know, even if the information is within your level of access to classified material, you cannot legally access the material.
Ah I see. So is it illegal for journalists to look at documents they receive from leakers? Or does this somehow only apply to people with some level of clearance?
It’s not that it “somehow” only applies to people with clearances. People with clearances agree, as part of getting a clearance, to only access classified information within their clearance level and for which they have need to know. This agreement is binding, and violation of it is potentially a crime. The average person has not entered into such an agreement, and more broadly it would be unreasonable and impractical for the government to try and punish all people who become privy to the contents of publicly disclosed but still classified materials.
To be quite blunt: it’s their job to ensure this. Independent of whether we should add a tag or not, making it easier for people entering certain agreements to uphold this agreement is not the job of this website. Browse at your own risk.
We navigate an unlabeled world all the time, and while I’d prefer everything to have a clear label (for other reasons), suddenly making these people the special case where it’s absolutely necessary is odd to me.
making it easier for people entering certain agreements to uphold this agreement is not the job of this website. Browse at your own risk.
It’s just a tag. You are making it out to be something that is lots of work for little reward, but it’s exactly the opposite: it wouldn’t take much work to add a tag, and it would help lots of people protect their jobs and avoid accidentally becoming criminals.
“helps lots of people to protect their jobs and prevent accidentally becoming a criminal.”
To help lots of people with clearances? Clicking links on an obscure forum known to contain illegal releases of classified information and frequented by self-reported hackers sounds bad enough for a warrant already. Then they are possibly in the NSA collection system the moment they open the front page in a scenario like that, due to the 3-degrees policy, depending on whether a monitored person replies to the thread. With that backdrop, I’m surprised they’d even connect to the site at all without anonymity tools or a shared access point (library or wifi) for deniability.
I can appreciate the point you’re making here. The number of Lobste.rs readers who would care about such a tag as a means of protecting their jobs is likely small. OTOH, it would serve as an interesting data point for those of us who DO NOT have such jobs and might want to read such articles, and I suspect that audience might be larger.
I’m not saying otherwise. I was simply explaining the legal issues underpinning the ability of people with clearances to read publicly available but still classified material.
I don’t understand your first sentence, which seems to contradict the rest of your comment, but thanks for the info.
I meant that saying “somehow” makes it sound strange and nefarious, and I wanted to disagree with that connotation.
This is correct. It was explained to me that the clearance is a vetting process saying you potentially could access something at that level. The specific things that you can access are what you need access to.
Then there’s extra complexity once we go into ownership (did they authorize officially?), SCI, and SAP’s. Basic concepts of clearance, compartments, and need to know cover vast majority of situations, though.
With respect, are you qualified to judge what is just and what is not in absolute terms? What if the people who work in such jobs consider themselves to in fact be just in their actions?
Nobody is qualified to judge what is just in absolute terms, but that’s no reason to give up all conception of justice. And to be clear I am saying the policies are unjust, not actions of the individuals who work under those policies.
Are they? You’re a government. Your goal is to protect your people and further your goals, economic and social.
You come to realize that there are certain pieces of information which, if they got into the wrong hands, could hurt you (again, ‘you’ here being the nation in question).
So, you define certain sets of people who can see certain things. Now, I realize, for hard core “all information wants to be free” types, this is a ring zero violation right here. However, for the purpose of this discussion let’s say that not everyone agrees with this as an absolute.
You need to define rules to keep the wrong people from seeing critical information, including penalties to keep these rules from being patently ignored.
What is inherently unjust about the above scenario? The right of a nation to protect its secrets? Or the idea that said nation can legislate what information its employees can or can’t consume? Note that getting a job with clearance is a choice. It’s a voluntary obligation people are putting themselves under.
Or the idea that said nation can legislate what information its employees can or can’t consume?
This would be what I feel is unjust. In particular, when this information is public, it prevents said employees from being informed and engaged citizens. The fact that their employment is voluntary doesn’t make a difference to me - indentured servitude is unjust even if it’s the result of a voluntary agreement.
That is an interesting conundrum, and maybe there’s some room there for reform in the intelligence community.
That scenario you gave is not how the classification systems actually work. They’re a combo of that with political moves and crimes covered up by the classification. In the US, classification of criminal acts isn’t even legal but they do it & punish leakers anyway. Much of defense activity is also driven by corruption where military and politicians get money from contractors plus politicians get votes or jobs in their districts. The possibly-classified justification for or performance of those programs would be lies to justify profiteering on wasted tax dollars. Trailblazer was a recent example.
So, these things are what we need to consider if assessing how just or unjust a classification system is. The U.S.’s is a mixed bag of just classifications, unnecessary classifications (“overclassification”), underperforming in declassification (FOIA), and hiding criminal activity. Definitely needs a ton of reform.
Although, the Jason Society did a proposal for a replacement system that sounded good, too. So, reform or replace.
I don’t understand at all why so much arguing happens over code of conduct pages on projects. Don’t be a fuckin dick to each other, nerds. How is that hard, and how is that hard to enforce?
It happens because some folks know they are dicks and they stick up for other dicks. If you’re working on something alone you can be as much of a dick as you want, but if you’re working on a team it’s pretty fucking reasonable to have some ground rules that everyone agrees on.
If you read any deployed CoC, they’re vastly more overbearing than “don’t be a dick”. If a CoC was literally those four words, I would support it wholeheartedly, but it never stops there.
I also disagree that every social interaction needs explicit rules. I don’t really feel the impulse to codify social interaction. If someone is being a dick, I will respond according to the situation rather than preemptively trying to bring playground-style rules into the mix.
People come from different backgrounds and cultures where one set of behaviours might be socially acceptable, so yes - sometimes, it needs to be spelt out.
How does this work when the power dynamic is working against the person who is harassed? What if the harasser is a star contributor or friend?
Hasn’t “don’t be a dick” been historically insufficient?
Not sure how the code of conduct changes that. If the high council of conduct adjudication are the ones doing the harassing, what happens then?
That is part of the reason why this situation is so contentious; that’s what’s happened here.
Yes, if there’s good management or moderation that actually cares about the work above the politics. If they value politics more, then it’s not sufficient, since they’ll protect or reward the dicks who politic correctly. The leadership’s priorities and character are almost always the most important factor. The rest of the benefits spread as a network effect from that: good leadership and good community members form a bond where bad things or members get ejected as a side effect of just doing positive, good work or having such interactions. I’ve seen so many such teams.
Interestingly enough for purposes of CoC’s and governance structures, I usually see that break down when they’re introduced. I’m talking governance structures mainly as I have more experience studying and dealing with them. The extra layers of people doing administrative tasks setting policies can create more trouble. Can, not necessarily do since they reduce trouble when done well. Just the indirection or extra interactions are a risk factor that causes problems in most big projects or companies. A good leader or cohesive team at top keeping things on track can often avoid the serious ones.
If it wasn’t broadly worded, it’d be harder to aim at the people we don’t like.
If it wasn’t broadly worded, it would be easier to abuse loopholes in order to keep being a dick within the letter of the CoC.
The things are broadly worded for a reason, and it’s not “to enforce it arbitrarily”.
Is that more of a real or hypothetical concern? Any examples of a project that adopted a code of don’t be a dick, then a pernicious dick couldn’t be stopped, and the project leadership threw up their hands “there’s nothing to be done; we’re powerless to stop his loopholing.”?
Boom, you said it. I’ve usually seen the opposite effect: people make broad rules specifically to attack or censor opponents by stretching the rules across grey areas. Usually, the people surviving in projects of multiple people due to “loopholes” are actually there for another reason. As in, they could be ejected if they were so unwanted but whoever is in power wants them there. Those unstated politics are the actual problem. In other cases, the rules were created for political reasons, often through internal or external pressure, rather than majority of active insiders wanting them there with enforcement pretty toothless maybe in spite. The OP and comments look like that might be the case if they voted 60% against getting rid of this person.
Also, I’ve noticed that the number of people, and their passion, on these “community enforcement” actions goes way up, with most of them not being serious contributors to whatever they’re talking about. Many vocal parties are big on actions to control or censor communities but don’t regularly submit content or important contributions to them. I’m noting a general trend I’ve seen on top of my other claim, rather than saying it’s specific to Node, which I obviously don’t follow closely. Saying it just in case anyone more knowledgeable wants to check whether it’s similar here: people doing tons of important work in this project, cross-referenced against people wanting one or more key contributors to change behavior or disappear. If my hypothesis applies, there would be little overlap. The 60% number might indicate unexpected results, though.
EDIT: For broad vs narrow, I just remembered that patent trolls do the same thing. They make the patents as broad as possible, talking up how someone might loophole around their patent to steal their I.P. Then they use the patent to attack others who are actual contributors to innovation, asking them to pay up or leave the market. Interesting similarity with how I’ve seen some CoC’s used.
Yeah that’s what I don’t get. If someone was being a jerk on a project I was on I wouldn’t think twice about banning them once they’ve proven they’re a repeat offender.
[Comment removed by author]
Do codes help or hinder such agreement? Those I’ve seen applied have largely been counterproductive, as their definition of dickery has not aligned adequately with the wider project community’s.
node.js could serve as an example.
Of the opposite? A code of don’t be a dick doesn’t work in theory because there’s no agreement. So node has this nice long list of banned behaviors and remedial procedures, but what good has that done them? Meanwhile it seems everyone agrees Rod was being a dick, so if the code were that simple it’d be a fine deal.
I mean, I don’t really know what’s going on since it’s all wrapped in privacy, but the more complicated the rules the more likely it is someone will play them against you. Better to keep it simple.
Part of having a CoC is enforcing a CoC. Yeah, the CoC doesn’t mean much if it isn’t enforced, but that’s not an argument against codes of conduct. By analogy: the fact that people break laws isn’t an argument against the rule of law.
Right, but if a law didn’t bring any clarity to the community - if it wasn’t clear who was and wasn’t breaking it, or it wasn’t able to be enforced consistently, or it was applied consistently but still seemed to be capricious in who it punished and who it didn’t - then it would be a bad law. The criticism isn’t that this “Rod” broke the CoC, it’s that the CoC didn’t seem to help the community deal with his behaviour any better than it would have without the CoC, indeed possibly worse.
(my general view, particularly based on seeing them in the Scala world, is that CoCs as commonly applied are the worst of both worlds: they tend to be broad enough to have decent people second-guessing everything they say, but specific enough that less decent people can behave very unpleasantly without obviously violating the code)
Sorry bro^Wsibling, it’s not diverse enough. It would have to say “Don’t be an asshole” to be gender-inclusive.
As for the CoCs working, I think it’s unreasonable to expect bad people to turn good because a file was committed into the git repository saying they should.
Maybe something like a Code of Misconduct is even more important than the CoC. The link is for IRL events, and quite obvious, but online the escape hatch is to gtfo.
Interesting. Didn’t know he wrote on that topic. He made some interesting points but oversimplified things. I think Stephanie Zvan has some good counterpoints, too, that identify some of those oversimplifications with a lot of extra details to consider. Her focus on boundaries over democratic behavior or tolerance reminded me of a video someone showed me recently, where Kevin Spacey’s character argues the same thing with appeal to a more mainstream audience:
https://www.youtube.com/watch?v=sFu5qXMuaJU
She’s certainly right that a focus on boundaries with strong enforcement can create followers of such efforts and stability (conformance) within areas they control. Hard to say if that’s ideal versus the alternative, where other folks than those setting the boundaries also matter.
Edit: Read the comments. Lost the initial respect for Stephanie as it’s the same political dominance crap I argue against in these kinds of threads. The contrast between her style/claims and Pieters’ is strong and clear.
Upvoted for this. Without actual decency, a CoC can only make the semblance of decency last for so long.
People disagree vehemently about what it means to be a dick so that guideline is useless.