Additional context: In 2016, I had freshly graduated into big tech in an incredibly frothy job market, and the zeitgeist asked: can we skip the whole four-year university thing, and instead produce new software engineers via coding bootcamp?
This article tries to disclaim answering that question in any deep pedagogical sense. But the job market seems to have answered it for us in the economic sense, with the failure of most (?) coding bootcamps.
On the other hand, it might nonetheless be true that we never really needed to produce computer scientists in large quantities, but rather software engineers.
I recall remarks about how Excel is probably the most successful programming environment, if only you care to classify it as such — but I also recall some paper on how doctors haphazardly armed with Excel produce severe dosage errors for their patients.
So today we might ask: can we produce new software engineers — in some broadly-expanded sense, over a long time horizon — by equipping people with AI-powered Excel?
(Just like in 2016, I will also decline to answer that question.)
I like the spirit of this article, but I’m going to push back on the conclusion it draws.
You do not need to study computer science at a university to learn about computer science. I’m not saying it doesn’t help - certainly attending a set of prepared courses which guides you through all kinds of algorithms, data structures, areas of study etc is very useful for gaining a wide exposure. But nothing - especially in the year 2025 - stops any motivated person with an internet connection from learning exactly the same material, entirely for free.
A really important thing to understand is that not everyone is going to have that motivation, and that’s actually fine. There is a very wide span of software jobs out there, ranging from simple automation scripts, all the way to cutting-edge, highly optimised low-level code running on exotic hardware. Saying that the same person (“a software engineer”) would or should do well at every point in that spectrum is simply a flawed mental model IMO.
None of this is to say that the attitude of “Will I use this in the real world” is good or should be encouraged.
Also, perhaps as a side note: There is a lot of controversy about the term Software Engineer. I don’t care about having a ring from a special organisation, but what I do care about is doing actual engineering. Engineering involves building something that meets a specific set of requirements - some of which may be in tension - and making reasoned tradeoffs. It also involves a bunch of careful measurement, error quantification, and testing. It absolutely does not involve building the “best” version of a solution that achieves 99.9% efficiency when the requirements state 90%. Taken with the example in the article: the planning software felt slow. The questions here should be:
How am I quantifying slow? (this should connect to requirements)
Is this really slow? And under what conditions? (measurement)
How could this be addressed? (designing a solution - this is the thing people normally do, skipping everything else)
What resources would be needed to implement the solution? (people, time, money, collaboration etc)
Does the cost (in all senses) of the solution outweigh the other requirements of the project? (balancing requirements in tension)
Only when all of those are answered fully should you take any kind of action, and you should be ready, as a professional engineer, to accept that you do not build the better solution.
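To make the first two questions concrete before any "designing a solution" happens, here is a minimal sketch; the requirement, the workload, and the measurements are all hypothetical stand-ins:

```python
import random
import statistics

# Hypothetical requirement: p95 latency must be under 200 ms.
REQUIREMENT_MS = 200

random.seed(0)  # deterministic stand-in for real measurements
samples_ms = [random.gauss(120, 30) for _ in range(1000)]

# 95th percentile: the last of 19 cut points splitting the data into 20 groups.
p95 = statistics.quantiles(samples_ms, n=20)[-1]

slow = p95 > REQUIREMENT_MS
print(f"p95 = {p95:.0f} ms; requirement = {REQUIREMENT_MS} ms; slow = {slow}")
```

Only if `slow` comes out true do the remaining questions (solution, resources, tradeoffs) even arise.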
But nothing - especially in the year 2025 - stops any motivated person with an internet connection from learning exactly the same material, entirely for free.
I’ve said this before, but I’m going to keep repeating it:
The point of a university education is not to teach you, it’s to give you a guided tour of your ignorance.
A degree in most fields will not teach to the entire field (a lifetime of study won’t do that either). It will give you enough of an overview of the field that you can learn the bits that are relevant or interesting to you later.
Every individual thing I learned as an undergraduate could now be found online. More than half of it is stuff I wouldn’t have thought to look for.
The point of a university education is not to teach you, it’s to give you a guided tour of your ignorance.
I love that!
Sadly, universities vary a lot, even in the UK. But the best ones (IMO) totally have that ethos.
As a lecturer, I rarely cared whether my students learned any specific thing. My main goals were, in this order:
to motivate them to be very very interested in the subject
to show them how academics think about the subject
(a distant third) for them to learn some useful facts but, even then, I didn’t care whether they were from any particular syllabus (well, except that I had a breadth requirement - they should learn a few things in separate areas)
When I was Director of Studies at a Cambridge college, one of the first things I told my students was that they should have two objectives at university:
Acquire an education.
Pass some exams.
Many of the things that they did would work towards both goals, but it is very important not to confuse the two.
Agreed, and I love the guided tour metaphor. There are many times in my own life where I’ve taken the long way around to get to a well-known(ish) solution to a problem, but that in itself is a blessing and a curse.
I won’t say that [a formal CS education] is necessary [to be an effective software engineer]
and closes with (emphasis added):
For those who hold [the opinion that implementing a CRUD app does not need any understanding of theoretical computer science whatsoever] — I wish you luck implementing your next meeting tool.
So it was never intended to conclude that a software engineer needs formal or university-based CS education.
That being said, the article was my first published blog post, and the rhetoric is muddy. (Perhaps my rhetoric today is not much better 🙃.)
It was mostly intended as a fun anecdote, something a teacher could give their students when they complain about the applicability of NP-completeness to “real” software engineering.
FYI: the line you quote here comes off as positively dripping with sneering contempt for people who implement software you don’t think is particularly complex or important.
So I’ll provide a bit of free education you apparently failed to get from your university: there are difficult and complex things out there in the world whose difficulty and complexity you are completely ignorant of. However much time you’ve spent studying graph algorithms at university, for example, spending that same time studying time zones and calendaring systems would probably still be enough only to let you implement a bad, mostly wrong meeting tool. The fact that it’s been made easy enough for you to sneer at speaks to mountains of knowledge and sweat and tears and toil of which you know nothing, and you should learn to show respect for it.
The matter of tone in the article is a silly affair on the whole. It’s certainly overly dramatic. One could reasonably argue that it’s contemptuous.
However, you seem to be attributing viewpoints to me which I have been explicitly attributing to others. Here, I made up a straw-man counterparty who says that theoretical computer science is literally useless (that is, not required “whatsoever”) because they think that software engineering is easy.
I don’t think this. In fact, this person likely doesn’t exist at all. That’s a severe problem in the argumentation, and it means that the article is mostly useless from a persuasive standpoint (except for the single anecdote itself, as a weak form of empirical evidence). But it’s not intended as a reflection of my own judgment.
By itself, I would have assumed that it’s because the article is written in an overly-dramatic style, and it was foolish to expect that wording to be clear. I would not have written the article in the same way today, if at all.
But then you did the same thing in your reply to my comment here. I remarked on the “job market” as an actor making a judgment in an exclusively “economic” sense, but your comment implies that I myself am making those judgments.
It’s frustrating for me to be understood in the exact opposite way of what I intended, particularly if it hasn’t gotten any clearer in the eight years since this article was written. From broader experience, I’m inclined to think it’s indeed a deficit in my own communication style or ability.
What do you think? If you or other readers have feedback on my present-day communication, I’d be interested to hear at me@waleedkhan.name.
Some ideas:
My attempts at explicit disclaimers are insufficient. Or perhaps it’s rhetorically impractical to raise others’ viewpoints without implicitly endorsing them.
There are concrete syntactic or semantic ambiguities or issues. (“seems to have answered it for us” could be an issue.)
I have different semantic valuations of the same phrases or word choices than the typical English speaker.
Some other complete blind spot — I would not be surprised to learn of its existence.
You’re right, your opening should have been a disclaimer against such a reply - so apologies if it came across as too um-actually. I enjoyed the article, and I love it when folks find links and make connections across different areas of computer science. Not to hammer too hard on it, but my main point was really that some people will be curious and motivated to seek out those types of solutions, and some folks will push against them as a defense against perceived complexity, almost as a knee-jerk response. I think that having a mix of both types (maybe not in equal proportions) can be a good thing when building something in a team.
But the job market seems to have answered it for us in the economic sense, with the failure of most (?) coding bootcamps.
Counterpoint: the original motivation for FizzBuzz was that, allegedly, “the job market” had discovered people were coming out of university Computer Science degree programs unable to figure out how to write even very elementary computer programs.
Do you, then, also assert that those degree programs deserve to fail?
I’m sure I could’ve learned a lot of things in advance if I went to university, but another few years of mental pain and stress caused by our education system were absolutely not worth it.
Being able to learn these things on my own, at my pace, feels a lot more fun and rewarding than having to cram them for exams forced down my throat, and then be graded with a number that doesn’t reflect any actual skills or knowledge.
I’m sure it looks different in other countries. But for me, living in Poland, the faults of the education system that scarred me through years of going to school were enough to make me not want to touch school ever again. Even if people say universities are better than that.
I can’t speak for Poland, but universities in the UK are nothing like schools. There is no one in a university whose title is teacher. They are places you go to learn and there are people who will help you learn (and, especially, point you at things that are not part of the formal curriculum but might be interesting to you) but the responsibility for learning is entirely yours.
I’ve heard good things about universities in the UK, but as a student raised in a working-class family I couldn’t imagine moving to a different country to study abroad. So my options were purely domestic, where unfortunately universities are what they are.
For context, Poland’s education system is very exam-heavy and metric-oriented. This permeates higher education as well. I guess you could even call it an education crisis.
To my knowledge this happened because formal education titles were perverted into being something you must have in your CV/resume to be considered credible. A lot of folks from older generations still believe that to be the case.
I don’t know where this belief came from, but I think you can imagine how it negatively affected the quality of higher education here—therefore also negatively affecting the quality of schools, because you now have professors who completed university just for the title, raising teachers who are also completing university just for the title. (Who knew that bad education had such far-reaching consequences.)
So the system is full of people with no interest in teaching, doing it only for… I don’t know what, money? with no pedagogic competencies whatsoever. Exams are everywhere because people with no interest in education are just doing what their teachers did, without any will to change the status quo for the better.
I guess that just makes universities abroad look that much better, huh.
Makes me think that higher education might be an interesting avenue to explore in life, maybe, if I ever feel adventurous. Thank you for giving me pause about this topic!
Yeah, they are nothing like any Eastern European university :-). Many – especially second-tier universities – are still pretty firmly anchored in their 1950s Soviet mold from which they were either cast or into which they were shoved if they predated it.
I haven’t studied abroad (some of those roots meant that even Erasmus mobility was kind of tricky back when I went to school) but, ironically enough, I’ve worked with people who taught at UK and US universities, including during my undergrad studies. At least for technical higher education (so EE/CompEng – I can’t speak for “purer” CS and Math degrees) they couldn’t be more different.
There are exceptions everywhere, and you should adjust this for the fact that uni was 15+ years ago for me, but by and large some notable differences include:
Unless you’re an extremely gifted student there is no such thing as “pointing you at things that are not part of the formal curriculum”. The formal curriculum is where education ends. It’s less common nowadays but back when I went to school, if you used something that wasn’t in the formal curriculum to solve an exam question or a homework assignment, there was a pretty good chance that you’d get no points for it. I went to a pretty “liberal” technical university; in more uptight settings there was actually a pretty good chance you’d fail that exam for something like that.
Research grants have changed this to some degree for first-tier universities, but most of them are still run as if their primary activity is teaching, not research. You can go for years without publishing anything substantial, and there’s a whole system in place to enable (at least some) universities to keep staff who can continue to teach and not perish despite failing to publish anything, or not publishing anything that meets minimal academic standards.
Generally, there are very few TAs. When I went to school it wasn’t uncommon to have a single TA assigned to 30, 60, or even 120 students. Some CompEng departments try to fix this to some degree but their means are rather questionable (e.g. third year students teaching second year labs). Consequently, most individual study activity consists of solving homework assignments issued to everyone and pondering over the grades you get 1-3 weeks later.
At least back when I went to school, there were maybe two or three people in the three departments herding us EE students who had office hours. Technically, they were all supposed to have office hours; in practice, due to how teaching was structured, the office hours were practically useless so nobody bothered with them, teachers or students.
The responsibility for learning is obviously primarily the students’ (they’re adults, after all, at this point forcing them to study is not going to work) but they have very little autonomy in it. Lab and homework assignments are usually very strict. In some cases (e.g. some CompEng or more niche final-year disciplines) undergrad semester projects are a little less strict, but for most technical degrees they’re basically large fill-in-the-blanks leaflets.
Ironically enough, this is one reason why their graduates tend to do pretty well in leetcode-style challenges (and their fortunately far less popular equivalents for EE). The whole undergrad program, including curricula, is designed along very similar lines.
Edit: that being said, I want to emphasise that, if you’re a creative student :-) you can put any university education to very good use and, indeed, it’s primarily useful as a guided tour of your ignorance.
Not that I recommend it in general, but some of the things I did included:
The few overworked TAs were usually PhD students, which meant that there was always room for undergrads in research programs – usually in an unofficial capacity but I didn’t care as long as I could get some interesting work done.
I always studied around the formal curricula. If I didn’t care for, or if I just didn’t like a particular course, I’d trade it for studying something I was really interested in (including by going to lectures – presence was more or less compulsory – but not paying attention and studying something else altogether). That meant I’d get some bad grades, absolutely, but the art of not giving a fuck is something you need to master very early if you’re to survive even primary school.
Since a lot of courses or exams were either structured around or easy to pass by rote, I would use what I learned in class as a guide or support to learn from other sources (from textbooks to speaking to actual EEs with expertise in that field) and postpone learning anything exam-related until the night before, so that I could just regurgitate it on paper the day after.
Since no one really did office hours but there was room in the teachers’ schedule for it, if I found a course particularly interesting and tried to go for it, there was usually a non-zero chance I would be able to spend a few hours talking about various topics 1:1 with someone who’d studied them for decades. They’d usually be limited to the formal curricula but that was hardly a problem.
The whole university system is riddled with bullshit rules that are unfair and make absolutely no sense whatsoever, some of them enacted and enforced by bitter people who resent anyone they work with (including students) but have no choice about working with them. Navigating a system like that is a very useful real-life skill. After four years of that, working in the worst multinational corporate environment you can imagine is a breeze. It sucks just as much but you get paid for it.
Even 15 years after finishing uni I still find myself regularly reaching out for things I learned back then, and not as in “I learned how to do this specific thing in $class” but in terms of “this belongs to the same general area of things I studied in $class”. Most of the individual things I learned after my second year or so are probably obsolete by now, what stuck – and I couldn’t do without – is the structured, evidence-based, result-driven approach to engineering and the formal vocabulary to enable it.
The question “Will I ever use this in the real world?” is a testament to a staggering amount of intellectual laziness.
I use pretty much everything that I learned in CS either directly or indirectly. Pretty much all of it has been beneficial except for a couple of odd ball courses. The maths seems useless but it moulds your brain into a super powered problem solving machine. And don’t even get me started on the extra-curriculars, the professional skills courses and just the grit you learn when finishing the damn thing.
The question “Will I ever use this in the real world?” is a testament to a staggering amount of intellectual laziness.
Not really. When the thing I’m learning directly enables one of my projects, I am way more motivated and eager to learn. For example: Newtonian physics is pretty dry and boring, but if you want to write a game engine with objects that move and bounce off each other, suddenly it’s all engaging and fun.
Pedagogy is not just about squeezing as much knowledge as possible into the human brain, it has to be engaging and exciting. Motivation is a key part of learning. It is nearly impossible to remember information that is perceived as useless.
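The bouncing-objects example is a nice one because the Newtonian core really is tiny; here is a toy sketch (semi-implicit Euler, one dimension, made-up constants):

```python
# Toy 1-D "game engine" step: gravity plus a damped bounce off the floor,
# using semi-implicit Euler integration.
GRAVITY = -9.8

def step(y, vy, dt=0.01):
    """Advance one timestep; bounce when we hit y = 0."""
    vy += GRAVITY * dt
    y += vy * dt
    if y < 0:
        y, vy = -y, -vy * 0.9  # reflect, losing 10% of speed
    return y, vy

y, vy = 1.0, 0.0        # drop a ball from 1 metre
for _ in range(1000):   # simulate 10 seconds
    y, vy = step(y, vy)
print(f"height after 10 s: {y:.3f} m")
```

Ten minutes of playing with the damping factor and the timestep teaches more about why the physics matters than a dry lecture ever did for me.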
There were a few classes that were very proofs-oriented like the algorithms class I took, I didn’t like proofs then and I don’t think I’ve used any of that since.
I do really agree with @david_chisnall comment about “a tour of your ignorance”. I’m happy to come out of that proofs class knowing “whelp there’s a ton of very precisely bounded graphs algorithms that I didn’t really absorb” so I can go learn stuff in that category if the topic does come up in practice.
This happened to me in the mid-1990s. I was doing some contract work with a bank. I had yet to graduate with an undergrad Comp Sci degree (which I never did). I was working with someone finishing up their Masters of Comp Sci (who had finished their compilers course). We were tasked with converting a workbook of 100+ pages into a web site. As it was written in Microsoft Word, we had the files that Word had “converted” to HTML, but the manager wanted each word in the main text that is defined in a glossary to be a link to its definition (“Hey! It’s hypertext! Let’s take advantage of it!”). Around 50+ terms. In a 100+ page document. That was the task.
My partner immediately jumped to hand-editing each page. I had to argue for half an hour to stop that silliness and see if there was an easier way to accomplish the goal. There was—we had access to yacc. It was then an easy task to write some yacc to recognize the 50+ terms and do a substitution (generally, that’s like an introduction to using the tool), which took maybe half an hour, then take the resulting program and run it over each of the 100+ HTML pages. Total time—a little over an hour. Did I fail to mention that my partner had to use yacc for their compiler course? That I only had a brief exposure to yacc?
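For what it’s worth, the same substitution can be sketched today in a few lines of Python; the glossary terms, anchors, and filename below are hypothetical stand-ins, not the bank’s actual glossary:

```python
import re

# Hypothetical glossary: term -> anchor id on the glossary page.
glossary = {
    "escrow": "escrow",
    "collateral": "collateral",
    "amortization": "amortization",
}

# One regex matching any glossary term as a whole word,
# longest terms first so overlapping terms resolve correctly.
pattern = re.compile(
    r"\b(" + "|".join(sorted(map(re.escape, glossary), key=len, reverse=True)) + r")\b"
)

def link_terms(html: str) -> str:
    """Wrap each glossary term in a link to its definition."""
    return pattern.sub(
        lambda m: f'<a href="glossary.html#{glossary[m.group(1)]}">{m.group(1)}</a>',
        html,
    )

print(link_terms("The escrow account holds the collateral until closing."))
```

A real version would also need to skip text already inside tags, which is part of why a proper tokenizer, like the ones lex/yacc build, was a reasonable fit.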
The counter to this: we were contractors, getting paid by the hour. My partner wanted to spend hours doing mindless work. Who was the one being stupid here? Me, for finishing up in an hour? Or my partner, wanting to get paid for hours of mindless work? I’m reminded of the Upton Sinclair quote: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
Throughout my career it became very clear that titles from BSc to PhD (or the absence of such) are very bad indicators of whether you have a grasp on even basic theoretical concepts.
I was always in the “it’s good to at least have heard about it” camp, but I still feel baffled by how many people finish degrees either never having heard of these things or completely forgetting them.
So I think the article is right about the “good to know about it” part but I don’t think that knowing about it has a lot to do with it being in a curriculum. Because whether you’ll know about it in your job and are able to apply it appears to correlate more with other factors.
And I am talking purely of (what at least I consider to be) fundamentals, so things that would in my experience also be part of for example cross-domain studies.
But this isn’t too different from how people watch documentaries whose content sits below, say, junior high school level. People seem to forget most of their education, and in fact really understanding things in a way that lets you confidently apply them is rarely ever of any importance. If you train people to be good at tests, they will be good at tests. This is so often the big problem in hiring processes as well.
Please don’t understand this as something against school or education. I think it’s important to at least get quality content so to speak, but from everything I’ve seen so far, the assumption that the school curriculum has large effect on what people at large know or don’t know seems to be wrong.
And just to be clear: Bootcamps, etc. in my opinion are even worse for these and even more reasons.
I agree that the article employed poor rhetoric. I didn’t really understand what I wanted to argue, if anything at all. And I agree the main value is the fun anecdote (see my comment here).
I think you’re being a little hard on yourself. It doesn’t sound like it now but ten years from now you’ll look back on this blog post and find that your words failed you in that you were trying to get at something a little deeper than you could articulate in an anecdote.
Being able to reach out for the right theoretical tool is just the first layer, and the most superficial form, of instrumentalising engineering education. I’m not trying to minimise its importance, it’s just not the final form of this skill yet :).
The bigger deal, which your anecdote captures only indirectly, is the ability to formulate a viable, formal model for an informal problem whose vital features aren’t immediately obvious. Formal CS education is not the only way to hone this skill; but it helps, and it gives you very adequate tools for it. The real skill isn’t recognising that a real world problem sounds suspiciously like a theoretical CS problem. The real skill is formulating a theoretical CS problem that’s suspiciously like a real-world problem.
That’s why “will I ever use this in real life” is the wrong way to look at it. You spend, what, four years in university, maybe a few more with a masters. If six years are all it takes to learn everything you’ll need to apply for the next forty years of your career, modulo things like the version control tool du jour, that’s going to be the most boring career ever. Quit while you still can!
Those four years are valuable as a tour of what lies ahead – what you don’t know, and what you’ll spend the next forty years of your life trying to figure out.
There is an inherent risk that a good chunk of what you’ll learn in uni is going to be useless. That’s true for any educational format. I thought about half of what I learned was going to be completely useless – I was mostly right about the percentage but holy shit did I get the distribution completely wrong.
My recollection is that big-company scheduling was often unsatisfiable due to 1) limited numbers of specialized rooms/resources and 2) certain individuals with huge meeting loads. It was easy to get into a situation where you needed an 8-person conference room and there was nothing available that could meet the least-available person’s schedule, given the existing global constraints.
People also tend to have minimally-flexible constraints like “needing to eat mid-day” and “not working all day”, which cuts into the solution space significantly.
So: for local scheduling, you probably had to use an optimization approach rather than a strict constraint-solving approach; and I imagine global constraint satisfaction wouldn’t scale due to the sheer size.
Back then, I think this would have run in either Javascript or Hack, probably neither of which had any readily-available constraint-solving libraries anyways 🙂.
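That distinction between strict satisfaction and optimization can be shown with a toy example; the names and calendars below are made up and reflect nothing about the real system:

```python
# Toy model: each person has a set of busy hour-slots in a day.
busy = {
    "alice": {9, 10, 11, 13, 14},
    "bob":   {10, 11, 12, 15},
    "carol": {9, 12, 13, 14, 15, 16},
}
slots = range(9, 17)  # candidate meeting hours

def strict_solution(busy, slots):
    """Constraint view: only slots where *everyone* is free."""
    return [s for s in slots if all(s not in b for b in busy.values())]

def best_slot(busy, slots):
    """Optimization view: the slot with the most attendees free."""
    return max(slots, key=lambda s: sum(s not in b for b in busy.values()))

print(strict_solution(busy, slots))  # empty: no slot works for everyone
print(best_slot(busy, slots))        # but one slot loses only one attendee
```

With loaded calendars the strict answer is routinely empty, which is exactly why "best available slot" is what scheduling tools actually offer.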
To be honest, my experience with constraint solvers is casual Z3 usage:
So you might incidentally be as knowledgeable as I am 🙂.
In my last comment, I mainly wanted to point out the common unsatisfiable case for calendar scheduling in practice, separately from the theory of graph coloring.
I don’t know your background, and it wasn’t obvious to me until I joined a large organization and tried to schedule with people who have like 30 meetings/week.
It’s not apparent to me if other graph coloring problems have the same profiles in practice. For example, does register allocation usually deal with heterogeneously-sized registers?
For constraint-solving vs optimization:
Z3 can optimize stuff these days (see Arithmetical Optimization), although I don’t think optimization is typically considered part of SMT.
You might be aware, but one “optimization” strategy via SMT solver is to try to solve all the constraints, and then remove constraints until you get a solution that satisfies a lot of constraints.
I recently learned that Z3 lets you push and pop constraints, but not remove arbitrary ones, as it fundamentally relies on backtracking.
This approach only uses “number of constraints” as the scoring function, and greedily adding/removing constraints is not guaranteed to give you a globally-optimal solution anyways.
So SMT solvers might not be a good fit for general optimization problems, since early bad choices may be difficult to recover from?
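That greedy relax-until-satisfiable strategy can be illustrated without an SMT solver at all; here a brute-force check over a tiny domain stands in for Z3, and the constraints are arbitrary toys:

```python
# Toy version of "remove constraints until satisfiable", scoring
# only by the number of constraints kept.
constraints = [
    lambda x: x > 3,
    lambda x: x < 7,
    lambda x: x % 2 == 0,
    lambda x: x < 2,   # conflicts with x > 3
]

def satisfiable(cs):
    """Brute-force 'solver': is there any x in 0..9 meeting all of cs?"""
    return any(all(c(x) for c in cs) for x in range(10))

def greedy_relax(cs):
    """Greedily drop constraints until the remainder is satisfiable."""
    cs = list(cs)
    while not satisfiable(cs):
        for i in range(len(cs)):
            if satisfiable(cs[:i] + cs[i + 1:]):
                del cs[i]
                break
        else:
            cs.pop()  # no single removal helps; drop one arbitrarily
    return cs

kept = greedy_relax(constraints)
print(len(kept))  # 3 of the 4 constraints survive
```

As noted above, counting surviving constraints is a crude objective, and the greedy order can lock in a suboptimal choice; MaxSAT-style solvers handle weighted soft constraints properly.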
I’m not familiar with constraint-solving in practice, but it’s surprising to me that e.g. vehicle routing would operate primarily using constraint-solving. How do you recover if there’s no satisfying solution?
I’m not familiar with constraint-solving in practice, but it’s surprising to me that e.g. vehicle routing would operate primarily using constraint-solving. How do you recover if there’s no satisfying solution?
Oh, to clarify: I meant vehicle routing as a logistics problem (given a fleet of N trucks, how do you move all resources from sources to sinks?), not realtime autonomous-vehicle or GPS routing.
I’d hypothesize that the difference between “graph coloring scheduling” and “constraint solving scheduling” is whether most solutions are invalid and the goal is to find a valid solution, or if most solutions are valid and the goal is to find an optimal solution. Compare your problem to scheduling a conference: most speakers can go in most slots, but you want to find the schedule that maximizes some metric (like spreading multiple talks on the same general topic across the day).
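For the "most solutions are invalid" side of that split, the classic greedy coloring heuristic fits in a few lines; the meetings and attendees here are invented for illustration:

```python
# Meetings conflict when they share an attendee; conflicting
# meetings need different time slots (colors).
meetings = {
    "standup": {"alice", "bob"},
    "design":  {"bob", "carol"},
    "1on1":    {"alice", "dave"},
    "retro":   {"carol", "dave"},
}

def conflicts(a, b):
    return bool(meetings[a] & meetings[b])

def greedy_color(meetings):
    """Give each meeting the lowest slot unused by its scheduled conflicts."""
    slot = {}
    for m in meetings:
        taken = {slot[n] for n in slot if conflicts(m, n)}
        slot[m] = next(s for s in range(len(meetings)) if s not in taken)
    return slot

print(greedy_color(meetings))
```

Greedy coloring makes every assignment valid by construction, whereas the conference-scheduling framing starts from mostly-valid assignments and searches for a better one.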
Idk, I don’t have a CS degree, and if I saw that paper I would look it up, try (and likely fail) to understand what it’s for, then ask Dave why they needed it and how it’s helpful.
Additional context: In 2016, I had freshly graduated into big tech in an incredibly frothy job market, and the zeitgeist asked: can we skip the whole four-year university thing, and instead produce new software engineers via coding bootcamp?
This article tries to disclaim answering that question in any deep pedagogical sense. But the job market seems to have answered it for us in the economic sense, with the failure of most (?) coding bootcamps.
On the other hand, it might nonetheless be true that we never really needed to produce computer scientists in large quantities, but rather software engineers.
I recall remarks about how Excel is probably the most successful programming environment, if only you care to classify it as such — but I also recall some paper on how doctors haphazardly armed with Excel produce severe dosage errors for their patients.
So today we might ask: can we produce new software engineers — in some broadly-expanded sense. over a long time horizon — by equipping people with AI-powered Excel?
(Just like in 2016, I will also decline to answer that question.)
I like the spirit of this article, but I’m going to push back on the conclusion it draws.
You do not need to study computer science at a university to learn about computer science. I’m not saying it doesn’t help - certainly attending a set of prepared courses which guides you through all kinds of algorithms, data structures, areas of study etc is very useful for gaining a wide exposure. But nothing - especially in the year 2025 - stops any motivated person with an internet connection from learning exactly the same material, entirely for free.
A really important thing to understand is that not everyone is going to have that motivation, and that’s actually fine. There is a very wide span of software jobs out there, ranging from simple automation scripts, all the way to cutting-edge, highly optimised low-level code running on exotic hardware. Saying that the same person (“a software engineer”) would or should do well at every point in that spectrum is simply a flawed mental model IMO.
None of this is to say that the attitude of “Will I use this in the real world” is good or should be encouraged.
Also, perhaps as a side note: There is a lot of controversy about the term Software Engineer. I don’t care about having a ring from a special organisation, but what I do care about is doing actual engineering. Engineering involves building something that meets a specific set of requirements - some of which may be in tension - and making reasoned tradeoffs. It also involves a bunch of careful measurement, error quantification, and testing. It absolutely does not involve building the “best” version of a solution that achieves 99.9% efficiency when the requirements state 90%. Taken with the example in the article: the planning software felt slow. The questions here should be:
Only when all of those are answered fully should you take any kind of action, and you should be ready, as a professional engineer, to accept that you do not build the better solution.
I’ve said this before, but I’m going to keep repeating it:
The point of a university education is not to teach you, it’s to give you a guided tour of your ignorance.
A degree in most fields will not teach you the entire field (a lifetime of study won’t do that either). It will give you enough of an overview of the field that you can learn the bits that are relevant or interesting to you later.
Every individual thing I learned as an undergraduate could now be found online. More than half of it is stuff I wouldn’t have thought to look for.
I love that!
Sadly, universities vary a lot, even in the UK. But the best ones (IMO) totally have that ethos.
As a lecturer, I rarely cared whether my students learned any specific thing. My main goals were, in this order:
When I was Director of Studies at a Cambridge college, one of the first things I told my students was that they should have two objectives at university:
Many of the things that they did would work towards both goals, but it is very important not to confuse the two.
Agreed, and I love the guided tour metaphor. There are many times in my own life where I’ve taken the long way around to get to a well-known(ish) solution to a problem, but that in itself is a blessing and a curse.
The article opens with:
and closes with (emphasis added):
So it was never intended to conclude that a software engineer needs formal or university-based CS education.
That being said, the article was my first published blog post, and the rhetoric is muddy. (Perhaps my rhetoric today is not much better 🙃.)
It was mostly intended as a fun anecdote, something a teacher could give their students when they complain about the applicability of NP-completeness to “real” software engineering.
FYI: the line you quote here comes off as positively dripping with sneering contempt for people who implement software you don’t think is particularly complex or important.
So I’ll provide a bit of free education you apparently failed to get from your university: there are difficult and complex things out there in the world whose difficulty and complexity you are completely ignorant of. However much time you’ve spent studying graph algorithms at university, for example, if you’d instead spent it studying time zones and calendaring systems, it would probably still be enough only to let you implement a bad, mostly wrong meeting tool. The fact that it’s been made easy enough for you to sneer at speaks to mountains of knowledge and sweat and tears and toil of which you know nothing, and you should learn to show respect for it.
The matter of tone in the article is a silly affair on the whole. It’s certainly overly dramatic. One could reasonably argue that it’s contemptuous.
However, you seem to be attributing viewpoints to me which I have been explicitly attributing to others. Here, I made up a straw-man counterparty who says that theoretical computer science is literally useless (that is, not required “whatsoever”) because they think that software engineering is easy.
I don’t think this. In fact, this person likely doesn’t exist at all. That’s a severe problem in the argumentation, and it means that the article is mostly useless from a persuasive standpoint (except for the single anecdote itself, as a weak form of empirical evidence). But it’s not intended as a reflection of my own judgment.
By itself, I would have assumed that it’s because the article is written in an overly-dramatic style, and it was foolish to expect that wording to be clear. I would not have written the article in the same way today, if at all.
But then you did the same thing in your reply to my comment here. I remarked on the “job market” as an actor making a judgment in an exclusively “economic” sense, but your comment implies that I myself am making those judgments.
It’s frustrating for me to be understood in the exact opposite way of what I intended, particularly if it hasn’t gotten any clearer in the eight years since this article was written. From broader experience, I’m inclined to think it’s indeed a deficit in my own communication style or ability.
What do you think? If you or other readers have feedback on my present-day communication, I’d be interested to hear at me@waleedkhan.name.
Some ideas:
You’re right, your opening should have been a disclaimer against such a reply - so apologies if it came across as too um-actually. I enjoyed the article, and I love when folks find links and make connections across different areas of computer science. Not to hammer too hard on it, but my main point was really that some people will be curious and motivated to seek out those types of solutions, and some folks will push against it as a defense against perceived complexity, almost as a knee-jerk response. I think that having a mix of both types (maybe not in equal proportions) can be a good thing when building something in a team.
Counterpoint: the original motivation for FizzBuzz was that, allegedly, “the job market” had discovered people were coming out of university Computer Science degree programs unable to figure out how to write even very elementary computer programs.
Do you, then, also assert that those degree programs deserve to fail?
Replied here
I’m sure I could’ve learned a lot of things in advance if I went to university, but another few years of mental pain and stress caused by our education system were absolutely not worth it.
Being able to learn these things on my own, at my own pace, feels a lot more fun and rewarding than having to cram them for exams that are forced down my throat, and then be graded with a number that doesn’t reflect any actual skills or knowledge.
I’m sure it looks different in other countries. But for me, living in Poland, the faults of the education system that scarred me through years of going to school were enough to make me not want to touch school ever again. Even if people say universities are better than that.
I can’t speak for Poland, but universities in the UK are nothing like schools. There is no one in a university whose title is teacher. They are places you go to learn and there are people who will help you learn (and, especially, point you at things that are not part of the formal curriculum but might be interesting to you) but the responsibility for learning is entirely yours.
I’ve heard good things about universities in the UK, but as a student raised in a working-class family I couldn’t imagine moving to a different country to study abroad. So my options were purely domestic, where unfortunately universities are what they are.
For context, Poland’s education system is very exam-heavy and metric-oriented. This permeates higher education as well. I guess you could even call it an education crisis.
To my knowledge this happened because formal education titles were perverted into being something you must have in your CV/resume to be considered credible. A lot of folks from older generations still believe that to be the case.
I don’t know where this belief came from, but I think you can imagine how it negatively affected the quality of higher education here—therefore also negatively affecting the quality of schools, because you now have professors who completed university just for the title, raising teachers who are also completing university just for the title. (Who knew that bad education had such far-reaching consequences.)
So the system is full of people with no interest in teaching, doing it only for… I don’t know what, money? They have no pedagogical competence whatsoever. Exams are everywhere because people with no interest in education are just doing what their teachers did, without any will to change the status quo for the better.
I guess that just makes universities abroad look that much better, huh.
Makes me think that higher education might be an interesting avenue to explore in life, maybe, if I ever feel adventurous. Thank you for giving me pause about this topic!
Yeah, they are nothing like any Eastern European university :-). Many – especially second-tier universities – are still pretty firmly anchored in their 1950s Soviet mold from which they were either cast or into which they were shoved if they predated it.
I haven’t studied abroad (some of those roots meant that even Erasmus mobility was kind of tricky back when I went to school) but, ironically enough, I’ve worked with people who taught at UK and US universities, including during my undergrad studies. At least for technical higher education (so EE/CompEng – I can’t speak for “purer” CS and Math degrees) they couldn’t be more different.
There are exceptions everywhere, and you should adjust this for the fact that uni was 15+ years ago for me, but by and large some notable differences include:
Ironically enough, this is one reason why their graduates tend to do pretty well in leetcode-style challenges (and their fortunately far less popular equivalents for EE). The whole undergrad program, including curricula, is designed along very similar lines.
Edit: that being said, I want to emphasise that, if you’re a creative student :-) you can put any university education to very good use and, indeed, it’s primarily useful as a guided tour of your ignorance.
Not that I recommend it in general, but some of the things I did included:
Even 15 years after finishing uni I still find myself regularly reaching for things I learned back then - not as in “I learned how to do this specific thing in $class”, but as in “this belongs to the same general area of things I studied in $class”. Most of the individual things I learned after my second year or so are probably obsolete by now; what stuck – and what I couldn’t do without – is the structured, evidence-based, result-driven approach to engineering and the formal vocabulary to enable it.
The question “Will I ever use this in the real world?” is a testament to a staggering amount of intellectual laziness.
I use pretty much everything that I learned in CS either directly or indirectly. Pretty much all of it has been beneficial except for a couple of oddball courses. The maths seems useless, but it moulds your brain into a super-powered problem-solving machine. And don’t even get me started on the extra-curriculars, the professional skills courses, and just the grit you learn from finishing the damn thing.
Not really. When the thing I’m learning directly enables one of my projects, I am way more motivated and eager to learn. For example: Newtonian physics is pretty dry and boring, but if you want to write a game engine with objects that move and bounce off each other, suddenly it’s all engaging and fun.
Pedagogy is not just about squeezing as much knowledge as possible into the human brain, it has to be engaging and exciting. Motivation is a key part of learning. It is nearly impossible to remember information that is perceived as useless.
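To make the bouncing-objects example concrete (everything here is made up for illustration): a few lines of semi-implicit Euler integration turn the dry textbook equations into a ball that visibly falls, hits the floor, and bounces with damping.

```python
# Hypothetical minimal sketch: one ball under gravity, advanced with
# semi-implicit Euler integration, bouncing off a floor at y = 0.
GRAVITY = -9.8   # m/s^2
DT = 0.01        # timestep, seconds
BOUNCE = 0.9     # fraction of speed kept after hitting the floor

def step(y, vy):
    """Advance position/velocity one timestep; bounce off the floor."""
    vy += GRAVITY * DT       # update velocity first (semi-implicit Euler)
    y += vy * DT
    if y < 0:                # hit the floor: reflect and damp the velocity
        y, vy = -y, -vy * BOUNCE
    return y, vy

y, vy = 2.0, 0.0             # drop from 2 m at rest
for _ in range(1000):        # simulate 10 seconds
    y, vy = step(y, vy)
```

The physics is exactly the Newtonian mechanics from class; the difference is that here a sign error is an immediately visible bug rather than a lost exam point.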
Sure, and courses in things like Analysis fail to do this. Could they be better? Maybe.
But they’re required to be able to do the real things you talk about, so most universities just tell you to bite the bullet.
There were a few classes that were very proofs-oriented, like the algorithms class I took; I didn’t like proofs then, and I don’t think I’ve used any of that since.
I do really agree with @david_chisnall’s comment about “a tour of your ignorance”. I’m happy to come out of that proofs class knowing “whelp, there’s a ton of very precisely bounded graph algorithms that I didn’t really absorb”, so I can go learn stuff in that category if the topic does come up in practice.
This happened to me in the mid-1990s. I was doing some contract work with a bank. I had yet to graduate with an undergrad Comp Sci degree (which I never did). I was working with someone finishing up their Master’s in Comp Sci (who had finished their compilers course). We were tasked with converting a workbook of 100+ pages into a web site. As it was written in Microsoft Word, we had the files that had been “converted” to HTML by Word, but the manager wanted each word in the main text that is defined in a glossary to be a link to its definition (“Hey! It’s hypertext! Let’s take advantage of it!”). Around 50+ terms. In a 100+ page document. That was the task.
My partner immediately jumps to hand-editing each page. I had to argue for half an hour to stop that silliness and see if there was an easier way to accomplish the goal. There was—we had access to yacc. It was an easy task to then write some yacc to recognize the 50+ terms and do a substitution (generally, that’s like an introduction to using the tool), which took maybe half an hour, then take the resulting program and run it over each of the 100+ HTML pages. Total time—a little over an hour. Did I fail to mention that my partner had to use yacc for their compiler course? That I only had a brief exposure to yacc?

The counter to this: we were contractors, getting paid by the hour. My partner wanted to spend hours doing mindless work. Who was the one being stupid here? Me, for finishing up in an hour? Or my partner, wanting to get paid for hours of mindless work? I’m reminded of the Upton Sinclair quote: “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”
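The yacc grammar itself is lost to history, but the core of the trick (recognize ~50 fixed terms, substitute links) is a few lines in any language. A rough sketch in Python, with made-up glossary terms and a made-up glossary filename:

```python
import re

# Hypothetical glossary: term -> anchor id on the glossary page.
GLOSSARY = {
    "amortization": "amortization",
    "collateral": "collateral",
    "escrow": "escrow",
}

# Longest terms first, so multi-word terms would win over their substrings.
pattern = re.compile(
    r"\b(" + "|".join(sorted(map(re.escape, GLOSSARY), key=len, reverse=True)) + r")\b",
    re.IGNORECASE,
)

def link_terms(html: str) -> str:
    """Wrap each occurrence of a glossary term in a link to its definition."""
    return pattern.sub(
        lambda m: f'<a href="glossary.html#{GLOSSARY[m.group(1).lower()]}">{m.group(1)}</a>',
        html,
    )

print(link_terms("Funds are held in Escrow until closing."))
```

A real version would also need to avoid substituting inside existing tags and attributes, which is where a proper tokenizer like the yacc/lex approach earns its keep over a naive regex.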
Throughout my career it became very clear that titles from BSc to PhD (or the absence of such) are very bad indicators of whether you have a grasp on even basic theoretical concepts.
I was always in the “it’s good to at least have heard about it” camp, but I’m still baffled by how many people finish degrees having either never heard of these things or completely forgotten them.
So I think the article is right about the “good to know about it” part but I don’t think that knowing about it has a lot to do with it being in a curriculum. Because whether you’ll know about it in your job and are able to apply it appears to correlate more with other factors.
And I am talking purely of (what at least I consider to be) fundamentals, so things that would in my experience also be part of for example cross-domain studies.
But this isn’t too different from how people watch documentaries, whose content is below, say, junior high school level. People seem to forget most of their education, and in fact, really understanding things in a way that makes you feel confident applying them is rarely ever of any importance. If you train people to be good at tests, they will be good at tests. This is so often the big problem in hiring processes as well.
Please don’t take this as something against school or education. I think it’s important to at least get quality content, so to speak, but from everything I’ve seen so far, the assumption that the school curriculum has a large effect on what people at large know or don’t know seems to be wrong.
And just to be clear: Bootcamps, etc. in my opinion are even worse for these and even more reasons.
I enjoyed the article but this sentence seems out-of-place:
The main takeaway from the anecdotes seems to be that “real-world problem X is actually very similar to theoretical CS problem Y”.
I agree that the article employed poor rhetoric. I didn’t really know what I wanted to argue, if anything at all. And I agree the main value is the fun anecdote (see my comment here).
I think you’re being a little hard on yourself. It doesn’t sound like it now but ten years from now you’ll look back on this blog post and find that your words failed you in that you were trying to get at something a little deeper than you could articulate in an anecdote.
Being able to reach out for the right theoretical tool is just the first layer, and the most superficial form, of instrumentalising engineering education. I’m not trying to minimise its importance, it’s just not the final form of this skill yet :).
The bigger deal, which your anecdote captures only indirectly, is the ability to formulate a viable, formal model for an informal problem whose vital features aren’t immediately obvious. Formal CS education is not the only way to hone this skill; but it helps, and it gives you very adequate tools for it. The real skill isn’t recognising that a real world problem sounds suspiciously like a theoretical CS problem. The real skill is formulating a theoretical CS problem that’s suspiciously like a real-world problem.
That’s why “will I ever use this in real life” is the wrong way to look at it. You spend, what, four years in university, maybe a few more for a master’s. If six years are all it takes to learn everything you’ll need for the next forty years of your career, modulo things like the version control tool du jour, that’s going to be the most boring career ever. Quit while you still can!
Those four years are valuable as a tour of what lies ahead – what you don’t know, and what you’ll spend the next forty years of your life trying to figure out.
There is an inherent risk that a good chunk of what you’ll learn in uni is going to be useless. That’s true for any educational format. I thought about half of what I learned was going to be completely useless – I was mostly right about the percentage but holy shit did I get the distribution completely wrong.
How does modeling scheduling as a graph coloring problem compare to solving it with a dedicated constraint solver?
My recollection is that big-company scheduling was often unsatisfiable due to 1) limited amounts of specialized rooms/resources and 2) certain individuals with huge meeting loads. It was easy to get into a situation where you needed an 8-person conference room and there was nothing available that could meet the least-available person’s schedule, given the existing global constraints.
People also tend to have minimally-flexible constraints like “needing to eat mid-day” and “not working all day”, which cuts into the solution space significantly.
So: for local scheduling, you probably had to use an optimization approach rather than a strict constraint-solving approach; and I imagine global constraint satisfaction wouldn’t scale due to the sheer size.
Back then, I think this would have run in either Javascript or Hack, probably neither of which had any readily-available constraint-solving libraries anyways 🙂.
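To illustrate the difference in miniature (toy data and brute force, not how any real system worked): a strict constraint solver can only report “unsatisfiable” here, while an optimization framing counts violations and returns the least-bad schedule.

```python
from itertools import product

# Toy instance: three mutually conflicting meetings, two slots, and one
# person ("vip") who is only free in slot 0 -- so the hard-constraint
# version has no valid answer at all.
meetings = {"a": {"vip", "x"}, "b": {"vip", "y"}, "c": {"x", "y"}}
vip_free = {0}
SLOTS = (0, 1)

def violations(assign):
    """Count broken constraints instead of rejecting outright."""
    v = 0
    names = list(meetings)
    for i, m in enumerate(names):
        for n in names[i + 1:]:
            if assign[m] == assign[n] and meetings[m] & meetings[n]:
                v += 1   # an attendee is double-booked
        if "vip" in meetings[m] and assign[m] not in vip_free:
            v += 1       # vip scheduled outside their availability
    return v

# Brute-force search for the assignment with the fewest violations.
best = min(
    (dict(zip(meetings, slots)) for slots in product(SLOTS, repeat=len(meetings))),
    key=violations,
)
```

Every assignment breaks at least one constraint (three mutually conflicting meetings can’t fit in two slots), but the optimizer still hands back something usable instead of giving up.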
Oh hey, you’re the author! Liked the post =) I was mostly curious because I vaguely know two things about constraint solvers:
Is there a reason for (2) besides lack of time/resources?
To be honest, my experience with constraint solvers is casual Z3 usage:
For constraint-solving vs optimization:
I’m not familiar with constraint-solving in practice, but it’s surprising to me that e.g. vehicle routing would operate primarily using constraint-solving. How do you recover if there’s no satisfying solution?
Oh, to clarify, I meant vehicle routing as a logistics problem: given a fleet of N trucks, how do you move all resources from sources to sinks? Not as in real-time autonomous vehicle or GPS routing.
I’d hypothesize that the difference between “graph coloring scheduling” and “constraint solving scheduling” is whether most solutions are invalid and the goal is to find a valid solution, or if most solutions are valid and the goal is to find an optimal solution. Compare your problem to scheduling a conference: most speakers can go in most slots, but you want to find the schedule that maximizes some metric (like spreading multiple talks on the same general topic across the day).
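A sketch of what the “find a valid solution” framing looks like, with made-up meetings and attendees: build a conflict graph (meetings sharing an attendee get an edge) and greedily color it, so adjacent meetings land in different slots.

```python
from itertools import combinations

# Hypothetical input: meetings and the people attending them.
meetings = {
    "standup": {"ann", "bob"},
    "design":  {"bob", "cat"},
    "hiring":  {"ann", "dan"},
    "1on1":    {"cat", "dan"},
}

# Two meetings conflict when they share an attendee.
conflicts = {m: set() for m in meetings}
for a, b in combinations(meetings, 2):
    if meetings[a] & meetings[b]:
        conflicts[a].add(b)
        conflicts[b].add(a)

# Greedy coloring, highest-degree meetings first: give each meeting the
# lowest slot (color) not already taken by a conflicting neighbor.
slots = {}
for m in sorted(conflicts, key=lambda k: -len(conflicts[k])):
    taken = {slots[n] for n in conflicts[m] if n in slots}
    slots[m] = next(s for s in range(len(meetings)) if s not in taken)
```

Greedy coloring doesn’t promise the minimum number of slots in general, but it always produces a valid schedule, which matches the “most assignments are invalid, just find one that works” regime.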
Idk, I don’t have a CS degree, and if I saw that paper I would look it up, try (and likely fail) to understand what it’s for, then ask Dave why they needed it and how it’s helpful.