The whole bit about subsidiarity is interesting, but I want to highlight the few bits about PGP:
The thing is, PGP basically sucks. It’s really hard to use and even harder to use well. In fact, PGP is so creaky that a lot of people just pretend it doesn’t exist.
to a first approximation, no one uses PGP
PGP, remember, is 30 years old, and dramatically under-resourced.
Maybe we have to get rid of PGP and start over.
I don’t really see why he comes to the conclusion that we need to throw away 30 years of work and the existing web of trust for… something new that needs tons of investment (?). Maybe someone more familiar with Doctorow’s work can shed some light on this attitude.
The problem isn’t Doctorow, the problem is PGP. For example, here’s the Signal blog discussing requirements, explaining that PGP is an “architectural dead end”. https://www.signal.org/blog/asynchronous-security/
Or this: https://arstechnica.com/information-technology/2016/12/op-ed-im-giving-up-on-pgp/
More on why PGP is bad and should go away
https://caniuse.com is useful because it is a large(ish) database of web platform features (186 at the moment) and their precise availability. The fact that a) caniphp does not contain any stats about the proliferation of PHP versions, and b) the existing PHP documentation (example: Fibers) already has information about which PHP versions support a feature, makes me doubt this will be used by many developers.
If you use PhpStorm or another IntelliJ IDE, you can already set the PHP version you target for your project, and it will tell you which features are not available by marking them red in your code. No switching to the browser necessary.
I share the author’s desire for CS departments to get off the fence. On the one side is theory and lots of math. On the other is learning mainstream languages, tools, and evolving industry practices for popular fields in the software industry. Neither side is inherently right, but they strike me as somewhat incompatible in how they should be taught and staffed.
When I studied CS in the early 2000s, my department exhibited an ambivalence and inconstancy about catering to industry needs similar to what's described in this article. Looking at it from the faculty’s perspective, I can understand. Catering was useful only insofar as it made the department appear current and relevant from the outside. But it did not fundamentally align with what most of the faculty was interested in. The resulting instruction was therefore something of a wash. On the one hand, the department head bought into the Java hype; hired a decent instructor for their burgeoning intro algorithms class; and added a software engineering methodology class to the curriculum. On the other, the rollout of Java across the curriculum was spotty (many professors stuck with C); none of the full-time faculty wanted to teach the software methodology class, so it was taught by a disinterested adjunct from a textbook deeply and unapologetically steeped in waterfall; and code structure and version control in all classes were an afterthought if they were thought of at all.
Many professors were openly wistful for the days when CS was just a specialty in the math department and resented being lumped in with the School of Engineering, to which they did not feel they belonged. A few professors expressed the opposite sentiment and treated the major like it was a trade school degree. Their focus on keeping up with industry on the instruction side, however, was no guarantee of grants on the research side, which was what the department needed from its tenure-track members. In hindsight, it seemed like just about every other department I encountered at my college had a clearer idea of what it was there for than mine. Maybe the grass is just greener, but if I’d known my department had such mixed feelings, I might have picked a different major and learned how to write software on my own time. I’ve had bosses and colleagues who are excellent software engineers and who majored in everything from biology to music.
I’m curious if there are any Germans here. I’ve heard that there is less stigma and class distinction associated with trade schools that focus on practical skills there than in countries like the US. My information comes from my memory of a German language textbook that was already dated when I studied it, so please forgive my naïveté. Is that still true? Does that apply to software engineering? If so, are software trade schools effective?
I’m curious if there are any Germans here. I’ve heard that there is less stigma and class distinction associated with trade schools that focus on practical skills there than in countries like the US.
Hiya, German here. We have a three-tiered tertiary education system, in order of increasing academic reputation: vocational apprenticeships (Ausbildung), universities of applied sciences (Fachhochschulen), and universities.
Back when I was studying informatics – what CS is called in Germany – and entered the job market as a working student, it was indeed general knowledge that 90% of the people whose only experience was doing a Bachelor at a university could not code properly. Practical skills were assumed to be inversely proportional to the academic level of education. But, and this is a big but, while people from an apprenticeship background most assuredly have much more experience in the day-to-day work of programmers, leadership positions are still mostly given to people with the “right” degree.
The advantage people from a more practical background have exists for maybe the first 3 years of a career. After that people from a university background will be preferred by hiring managers. And for some reason it matters what kind of degree you have for the level of compensation you can demand.
First off, I get a fair bit of amusement out of how the Orange Site reacts to these kinds of posts, something about people writing “the author doesn’t have a clue, lol. here’s my 2 cents worth of opinion” is just hilarious. So props for creating yet another one of those.
To give a bit of a background about the author, Michael DeHaan is the author and founder of Ansible. According to his LinkedIn, he has been in the industry since 2001, and teaching CS in some form at North Carolina State University (his alma mater) for the past 5 years.
The points he wants to see more stressed in CS education are mostly about how to design better things and communicate those designs and the ideas embedded within them more effectively. He does not state how to achieve those better designs, just that fewer people today possess the required skills. And he thinks there’s too much math in computer science.
If you only skim the headings then this seems completely reasonable and a good idea™. But I found myself saying “well, it depends” many times. Like sure, not denormalizing your tables is good design, but if denormalizing lets me reduce my code by a factor of 3, then why not do it? Or: writing code for other people to read is obviously a good thing, but how do you teach that in the context of a 7-15 week course, where the source is put on the backup drive afterwards, to be forgotten 2 weeks after the last submission?
All in all, this is more of a rant than a source of educational wisdom, but if it gets people thinking and talking about improving CS, it’s a win in my book.
I’ve had almost the exact same experience. Getting up and running is a breeze, and while you stay within the possibilities of the provided examples and built-in components, everything runs smoothly and takes virtually no time to implement.
Then you need something more complex, so you install a package and get it working with sometimes huge effort. A few months later you revisit the project and discover that everything is outdated. Google recommends always running the latest Flutter version, so you upgrade that pronto. Oops, everything is broken now, as all your dependencies use weird idiosyncrasies of specific Flutter APIs. No problem, you upgrade all your deps too. Oh no, my app, it’s broken, because of course something pre-1.0 can break its contract willy-nilly. And this repeats every time you stop working on your app for a longer stretch of time.
Back in 2005 or so there were a lot of these, like webOS (not the Palm thing) or lucid desktop. I suppose it’s a nice hobby for the author, they can play around with implementing a windowing system, virtual file systems, etc.
If you think about it for a while, the idea boils down to “yo dawg, I heard you like using your mouse, so I put a window manager in a window in your window manager”. The most hilarious app in these is always the browser in the browser, which breaks almost instantly, since many sites refuse to be loaded in an iframe for security reasons, yeeting you out of your desktop thingy in two clicks.
I saw a big speedup moving a very low-traffic NextCloud install from SQLite to PostgreSQL. I suspect this is because PHP’s one-script-invocation-per-HTTP-request model amplifies any costs of opening and closing a database. With Postgres, this is just opening a UNIX domain socket. With SQLite, this is opening a file, reading indexes, and so on, followed by having to write back everything on close. Putting the database in a separate process works around limitations of the PHP model.
I’d be really curious how a SQLite daemon process that exposed the SQLite API but kept its in-memory caches live across open/close operations would perform in such a scenario.
According to PHP’s docs, the pg extension does some transparent connection pooling by default too, which may have helped.
I suspect this is the problem quite a few PHP systems have. They use the database as a session store, log facility, or even an object cache, so almost every request causes at least one database write. SQLite only supports a single writer at a time, so the process might have to wait a bit for a database lock.
12-15 years ago I ran a web forum written in PHP using a SQLite database. It had 40,000 registered users, and 100-300 users browsing it during daytime, sometimes peaking at 500+. It never had a single hiccup. I backed it up by simply copying the database file. I think I had a script to put the forum in maintenance mode before proceeding to the backup (which was ready in 1-3 seconds) but I can’t be sure. I tested the backups manually from time to time and they worked.
Frankly, I didn’t need anything else. It was faster than virtually all websites I browse today.
Why would FastCGI help? The PHP programming model is the problem, not the way it communicates with the server. Even in FastCGI mode, your script still opens the file, still needs to parse the indexes, and still closes and flushes it at the end; you just don’t create a new process each time.
I saw a big speedup moving a very low-traffic NextCloud install from SQLite to PostgreSQL.
That’s good to know. I’ve (reluctantly) installed Nextcloud to see if it could address my requirements for photo syncing, and its sluggish performance OOTB was a surprise. That will be the next thing I try. Thanks!
PHP has had union types since 8.0: https://www.php.net/manual/en/language.types.declarations.php#language.types.declarations.union
(I realise that one cannot name such a union yet.)
Did you mean something different by “disjoint sum type”?
Yes, “unions” are not necessarily disjoint. See the not-very-good Wikipedia page, where it is called a “tagged union”. Basically an enumeration where each case can carry (typed) parameters, not just a name.
If you don’t know of this feature, you are missing a fundamental concept about programming, which should be as primitive and regularly-used as pairs/tuples, but isn’t well-known because many programming languages ignore it completely, which is a shame. You should have a look!
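If it helps, here is a minimal sketch of the idea in TypeScript rather than PHP (the type and field names are just illustrative); there the feature shows up as “discriminated unions”, where each case carries its own typed payload and the tag tells you which case you are holding:

```typescript
// A tagged union ("disjoint sum"): each case has its own typed payload,
// and the "kind" tag says which case a value belongs to.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rectangle"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius;
    case "rectangle":
      return s.width * s.height;
  }
}

console.log(area({ kind: "circle", radius: 2 })); // ≈ 12.566
```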
Yep, dynamic languages don’t have the syntax for it, but the use case of course exists, e.g. discriminating with a “kind” or “type” field. Thanks for the pointer though.
Looking more into it, PHP has an RFC for tagged unions: https://wiki.php.net/rfc/tagged_unions and it seems that there is a larger effort to bring more features for algebraic types to PHP: https://wiki.php.net/rfc/adts
Not-directly-related note: Some languages argue that because they already have constructs that enable dispatch-on-case, typically Go interfaces, they “don’t need tagged unions”, which are a strict subset of their feature. I understand the argument (in the good cases, the dispatching constructs allow an encoding of typical pattern-matching that is tolerable, although it’s easy to overlook nested matching and other niceties). But I think it misses the point that tagged unions enable static exhaustiveness checking (does this switch/match cover all cases?), which is a huge win usability-wise: when I add a new case, the checker walks me through all the definitions that need to be updated. (Just like adding a new method in a class.) Scala realized this early, encoding algebraic datatypes as sealed classes. (Of course, this argument is moot for languages whose users are averse to static checking of their code.)
I am specifically discussing “disjoint sums” or “tagged unions”, which is not what most of the conversation in the references you gave is about (they spend most of their time discussing difficulties with non-disjoint sums, non-tagged unions, which are well-known to be much harder to design right in theory and also in practice). The topmost reply in the Github issue, on the other hand, is exactly on point:
The past consensus has been that sum types do not add very much to interface types. Once you sort it all out, what you get in the end is an interface type where the compiler checks that you’ve filled in all the cases of a type switch. That’s a fairly small benefit for a new language change.
That sounds like a good summary of the current thinking of Go designers. And I think it’s completely wrong! Statically checking the exhaustiveness of case distinction adds massive value when it comes to writing codebases that are robust to changes/refactoring. This is exactly what my post above was discussing (the reasoning of “we already have open dispatch” and how it misses the strong benefits of exhaustivity checking).
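To make the exhaustiveness argument concrete, here is a small sketch of the usual idiom in TypeScript (picked only because it fits in a few lines; Scala’s sealed classes and OCaml’s variants give the same guarantee, and the type names here are made up):

```typescript
type Msg =
  | { tag: "join"; user: string }
  | { tag: "leave"; user: string }
  | { tag: "post"; user: string; text: string };

function render(m: Msg): string {
  switch (m.tag) {
    case "join":
      return `${m.user} joined`;
    case "leave":
      return `${m.user} left`;
    case "post":
      return `${m.user}: ${m.text}`;
    default: {
      // If every case is handled, `m` is narrowed to `never` here.
      // Add a fourth variant to Msg without updating this switch and
      // this assignment stops compiling, pointing you at the gap.
      const unreachable: never = m;
      return unreachable;
    }
  }
}
```

That is exactly the “walk me through all the definitions that need to be updated” workflow: the checker, not a test suite or a runtime panic, finds every match that became non-exhaustive.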
I would have put the spork vs. spoon and fork for interface segregation, not for single responsibility. A spork is really useful outdoors. Also the multi-plug cable being ‘worse’ than multiple single-plug cables seems wrong.
Yeah, multi-plug cables are great. You can keep just one cable instead of 3, and they take up fewer USB slots for charging.
In addition to those two, the power socket example is off, too: you drew a type C power outlet, the kind that’s common in Europe. In the US, we use type A power outlets instead, which use a different plug shape. That’s why you need outlet converters when you travel internationally! A great example of depending on concretions.
This is consistent with the observations we made at $job for most of our customers: when there is an obvious opt-out next to an opt-in button, only about 8–10% of visitors enable analytics.
Tool in a similar vein: https://getvau.lt/
It uses PBKDF2 instead of SHA256-HMAC. If you are like me and have no idea what the difference between the two is, here is a StackOverflow answer that explains it:
PBKDF2 isn’t doing H(H(H(…H(Pwd + salt))…), it’s doing HMAC(Pwd, HMAC(Pwd, … HMAC(Pwd, Salt + number)…), where + denotes concatenation and the number is a block index. So the iterations use the password as the key for the HMAC in each stage, and stages other than the first are HMACing the output of the previous stage. This mixes the password in to every iteration of the result, instead of just the first iteration.
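To make that structure concrete, here is a minimal sketch of a single PBKDF2 block in TypeScript on Node (assuming Node’s built-in crypto module; note it also includes the XOR accumulation of the intermediate HMACs, which the summary above glosses over), checked against the built-in pbkdf2Sync:

```typescript
import { createHmac, pbkdf2Sync } from "crypto";

// One PBKDF2 block per RFC 2898: U1 = HMAC(pwd, salt || blockIndex),
// U_j = HMAC(pwd, U_{j-1}), and the block is U1 xor U2 xor ... xor Uc.
function pbkdf2Block(pwd: string, salt: Buffer, iterations: number, blockIndex: number): Buffer {
  const index = Buffer.alloc(4);
  index.writeUInt32BE(blockIndex);
  let u = createHmac("sha256", pwd).update(Buffer.concat([salt, index])).digest();
  const block = Buffer.from(u); // running XOR accumulator
  for (let i = 1; i < iterations; i++) {
    u = createHmac("sha256", pwd).update(u).digest();
    for (let k = 0; k < block.length; k++) block[k] ^= u[k];
  }
  return block;
}

// Placeholder password/salt; the point is only that the manual HMAC chain
// matches Node's built-in PBKDF2 for a single 32-byte (one-block) output.
const salt = Buffer.from("example-salt");
const manual = pbkdf2Block("correct horse battery staple", salt, 1000, 1);
const builtin = pbkdf2Sync("correct horse battery staple", salt, 1000, 32, "sha256");
console.log(manual.equals(builtin)); // true
```

The password being the HMAC key in every stage is what distinguishes this from naive iterated hashing of the password plus salt.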
Skips some important details. For example, before the memory controller is initialized, how can you use memory?
… modern CPUs can behave like the original 1978 Intel 8086, which is exactly what they do after power up. In this primitive power up state the processor is in real mode with memory paging disabled. This is like ancient MS-DOS where only 1 MB of memory can be addressed and any code can write to any place in memory - there’s no notion of protection or privilege.
Real mode is supported by the hardware, so the 1st MB of RAM is always usable on x86 machines.
Unfortunately this is false. The memory controller requires initialization, or the memory may not be installed, or the memory might be bad.
So, how do you do the POST errors and beeps?
You run the code completely out of the cache on the CPU before touching real physical DRAM.
This article completely skips this.
Ah, alright. It seems that the answer to “how can you even use RAM?” is a proprietary software blob shipped as part of the BIOS. More info here: https://en.wikipedia.org/wiki/Coreboot#Initializing_DRAM
Ha ha, I was trying to use some sarcasm, but probably not everyone will understand. I should probably change that.
Is there an ebook for sale (not rent) somewhere? I imagine if there was one it’d be really expensive given that it costs $49.80 to rent for 12 months.
My heart always sinks when I see a super expensive computer science book.
The PDF is free. See the left column on the MIT page where it says “Open Access Title”, or alternatively download it here.
I suspect that you could leave this exact comment on any post that at least tangentially concerns programming languages :)
Talking about details:
My bad that I used the word “LISP” instead of “LISP dialect” or even “Clojure” (as a good example of a modern Lisp dialect).
Also static types, algebraic types, monadic computation expressions, Hindley-Milner type inference, type providers, and no nulls by default.
But yes in a very twisted sense every language is a lisp, in the same way that every language is a kind of forth. Amazing what you can do once you disregard most things.
All these are nice things to have, but they still exist on the code level, not on the level of interaction with a programmer. The article talks about things specific to interaction of a language/environment with a programmer.
Although typed holes and type-driven development are an interesting new development in the ergonomics of programming languages, yes.
P.S. In case you are offended by my “calling” strongly statically-typed languages “Lisps”, that was not my intention: my point was that for any programming language on earth a Lisp aficionado can find an obscure research dialect of Lisp that had prototypes of some concepts from that language (or something which is not, but looks similar enough for them).
Not offended, I just felt that it was an overly broad characterization of programming experience. For the record I don’t dislike lisp in any way, and there is statically typed racket. You can recreate pretty much any language feature in any lisp but doing so eventually approaches creating an entirely new language. While you can do this in any language I’ll concede that it’s vastly easier to do this with a lisp or an ML.
[..] for any programming language on earth a Lisp aficionado can find an obscure research dialect of Lisp that had prototypes of some concepts from that language
Interestingly, the Lisp prototype of most of the above features was… ML.
Rather than treating programs as syntactic expressions, we should treat programs as results of a series of interactions that were used to create the program. Those interactions include writing code, but also refactoring, copy and paste or running a bit of program in REPL or a notebook system.
How does this relate to LISP?
Every single word is related to one Lisp dialect or another. It’s the regular way to develop Lisp programs: you write a function, play around with it in the REPL, and make sure it works or fails.
When you only use one language, concepts that apply to many, many languages appear to apply only to your language. That’s the only way I can imagine you could conflate an ML with a Lisp.
Don’t you, as a Ruby developer, do most of your initial development in a REPL before saving the structures that work? This is a really common pattern with all scripting languages (and many non-interpreted languages that nevertheless have a REPL).
Quick answer is no.
Long answer: the REPL is not integrated with the code editor. You cannot tell your editor to run a particular chunk of code. But let’s assume you can integrate the Ruby REPL with your code editor. I cannot imagine how you would run one particular method of some particular class you want to play around with. You would have to evaluate whole classes. But let’s assume it’s okay to evaluate a whole class to run one method. What about dependencies? Say you are writing a project MVP with Rails. Each time you want to test your super lightweight and simple class, you have to load every single dependency, since you cannot attach to the running Ruby process.
And I’m not even talking about global immutability, which will add a lot of headache as well.
Ohh, you’re a rails developer. OK, I understand now – having a web server & a web browser in the way makes it hard to do anything iteratively in a REPL.
It’s pretty common, with scripting languages, to load all supporting modules into the REPL, experiment, and either export command history or serialize/prettyprint definitions (if your language stores original source) to apply changes. Image-based environments (like many implementations of smalltalk) will keep your changes persistent for you & you don’t actually need to dump code unless you’re doing a non-image release. All notebook-based systems (from mathematica to jupyter) are variations on the interactive-REPL model. In other words, you don’t need a lisp machine to work this way: substantial amounts of forth, python, julia, and R are developed like this (to choose an arbitrary smattering of very popular languages), along with practically all shell scripts.
Vim & Emacs can spawn arbitrary interactive applications and copy arbitrary buffers to them, & no doubt ship with much more featureful integrations for particular languages; I don’t have much familiarity with alternative editors, though I’d be shocked if anybody called something a ‘code editor’ that couldn’t integrate a REPL.
Clojure has a really good story for working with HTTP servers at the REPL. It’s very common to start a server and redefine a HTTP handler function.
The multithreadedness of that JVM is awesome in this regard.
I think similar things can be done in scala. I mostly mean to say that a web stack represents multiple complicated and largely-inaccessible layers that aren’t terribly well-suited to REPL use, half of which are stuck across a network link on an enormously complex third-party sandboxed VM. Editing HTTP handlers on live servlets is of limited utility when you’re generating code in three different languages & fighting a cache.
Yeah that object oriented focus gets in the way, I get that. Lisp is also not the only functional programming language though.
Also applies to OCaml, F#, Python, Ruby.
Edit: lol the article is about F#. That’s what I get for not reading the article I guess.
The original FileZilla project has been adding some kind of unwanted extra nonfree software to its releases.
Can you be more specific about what exactly? It would probably also be good to state the motivation for the fork at the beginning of the README.
+1. I get frustrated with forks that don’t provide a straightforward reason for their existence. I mean, it’s open source so that’s your right, but there’s a certain amount of harm that happens to the original project when a fork happens and takes off, and I feel like it’s important to at least be super clear about what you’re doing and why.
I feel like the title ought to say “Web Development” instead of “Web Design”. The article is only tangentially about web design.
It depends on one’s understanding of “Web Design”. If it means “Graphic Design” for the Web, then yes, the article doesn’t have much obvious relevance.
But if “Design” means making decisions with respect to the product one is building (for the Web), as suggested in the quote:
[…] consider designing your interface such that it’s easier to use by someone who only has use of one arm.
Then “Web Design” means more than looks: the impact of all decisions on the user experience. That includes conceptual, graphic, and technological considerations. As such I found the article to be a nice compilation of the problems we cannot control, where things like SSL, DNS etc. are just the things which are most obviously broken when they do not work.
This may actually be a cultural difference. I’m from Germany. Most—if not all—people I’ve worked with understand “web design” to mean the graphical design aspect; I also think this understanding of the phrase makes more sense, though of course, that may be bias on my part. Perhaps there’s a case to be made for a distinct term to encompass the design of user experiences, something like “UX design”.
Cute extension, but immediately upon opening the Chrome store page I see on the screenshot that it does not handle the URL inside parentheses correctly. Besides “)”, trailing characters like “]”, “}”, “>”, “!”, and quotes at least should be trimmed off.
Thanks for the catch; the URL regex is fairly rudimentary so it’s bound to happen, I’ll fix it as soon as I can!
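For what it’s worth, a hypothetical post-processing pass along the lines being asked for might look like this (a sketch in TypeScript, not the extension’s actual code):

```typescript
// Strip trailing punctuation that a simple URL regex tends to swallow when
// a link sits inside parentheses or at the end of a sentence.
function trimMatchedUrl(match: string): string {
  return match.replace(/[)\]}>!,.;'"]+$/, "");
}

console.log(trimMatchedUrl("https://example.com/page)"));   // https://example.com/page
console.log(trimMatchedUrl('https://example.com/page">!')); // https://example.com/page
```

A smarter version would balance parentheses so that URLs which legitimately contain them (Wikipedia article links, for instance) are not mangled.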