After reading that, I wonder why no one started a fork yet. Perhaps if someone does, people will quickly join.
Most people who could and would implement a fork use PureScript instead.
Because it is very hard and takes a lot of time I’d wager. Few have the time, money or drive to do such a thing.
There’s not a substantial amount of money in maintaining a language like this, so it would pretty much have to be a labour of love.
Under those circumstances, how many people would choose to fork an existing language and maintain it rather than create their own?
Because the whole reason people use something like this is that they don’t want to develop and maintain it themselves.
It doesn’t address the core problem. Most OSS companies have a business model that revolves around support. If a large hosting provider like Amazon comes in and provides an “as a service” version, that cuts off a primary revenue stream. If said hosting provider doesn’t produce improvements to the codebase then AGPL doesn’t matter.
I thought the AGPL was specifically forged to prevent that. Or do you mean that Amazon recreated their own version from scratch?
AGPL says you have to release improvements. It doesn’t make you contribute to the community.
If a community is getting a lot of financial support from a company like Redis Labs paying for core open source work, a company like Amazon can come along and do an as a service version and contribute nothing. AGPL does nothing for that.
The issue is many projects are pushed forward by commercial offerings that rely on support/services as a means to provide financial support. Our open source licenses provide no protection for that model.
Perhaps the model is flawed and we need something better. But there is no protection from parasitic behavior in that case.
It extends further, though: in general, there’s no way for an open source community to develop a means of financially supporting itself that doesn’t rely on free labor and is free of concerns. But that’s another topic.
The world has changed around free and open source. They haven’t adjusted to that change beyond AGPL being created to address some issues.
I personally don’t think that commons clause is the right solution but I understand the problem they are looking to solve.
Apologies for any typos. I answered this from my phone.
Companies dual-license under both GPL and AGPL. So, it could be done AGPL with cloud vendors paying a license. There’s a lot of FOSS developers that oppose the AGPL, though.
It absolutely is. Read the FAQ section on the AGPL; it’s very unclear. ‘Many features of the AGPL…’ kind of language. What features? It’s not the Linux kernel, it’s a license; it’s pretty small, so just say what these supposed features are.
Of course the reason they don’t is that it’s a smokescreen: the AGPL is of course fine, but their goal isn’t to make the software free, it’s to profiteer off it.
Yes, Redis Labs is in the business of paying people to work on Redis and the Redis ecosystem, and needs to make money to do that. The business model for companies like that is based on support. If someone cuts off that revenue stream, the money falls apart. We can as a community accept that such companies will need to build protections for themselves (licenses like the Commons Clause, or having some closed-source components), or accept a world in which there are no companies that exist to support specific products that could be turned into an “as a service” by a large player.
The AGPL does nothing to stop someone like AWS from taking what Redis Labs does and making money off of it and wrecking the Redis Labs business model (which is shared by a number of companies). I commend them for trying an approach that leaves the module source available and even “open” for some segment of the user base. The alternatives are “new business model”, “go out of business”, or starting to make more and more of their offerings closed source.
The AGPL does nothing to stop someone like AWS from taking what Redis Labs does and making money off of it and wrecking the Redis Labs business model (which is shared by a number of companies).
Nonsense. AWS wouldn’t touch an AGPL redis with a ten foot barge pole.
AGPL/commercial dual licensing is actually open source.
I commend them for trying an approach that leaves the module source available and even “open” for some segment of the user base. The alternatives are “new business model”, “go out of business”, or starting to make more and more of their offerings closed source.
Calling this open is literally telling a lie.
Very cool. Currently on Condition, is this meant to be possible with just or(1), and(1), inv(1), is_neg(16) and is_zero(16)? I can’t see how it could be, shouldn’t there be an add(16) and negate(16) for this?
EDIT: I ended up skipping Condition. The rest were all very enjoyable.
It is certainly possible. No need for arithmetic computation, only different ways of comparing against Zero. Maybe you misunderstood something about the specification?
Yeah I think I read it the first time as comparing two numbers instead of just comparing one number to zero. Opened it up and did it quite quickly today.
I think Condition is impossible! I did every other level, though.
I emailed the author thanking him for the great game & pointing out this level was not solvable.
Edit: OK I finally did it! It wasn’t impossible, just very hard!
Just git?
I was kind of hoping that if we’re going to break the github hegemony, we might also start to reconsider git at least a little. Mercurial has so many good ideas worth spreading, like templates (DSL for formatting the output of every command), revsets (DSL for querying commits), filesets (DSL for querying files and paths) and changeset evolution (meta-graph of commit rewriting).
Don’t forget pijul!
Seriously, though, I don’t think there is any “github plus something” that is going to break the github hegemony. Github won because it offered easy forking while Sourceforge was locked in a centralized model. Sourceforge won because it was so much easier than hosting your own repo + mailing list.
The thing that will get people away from github has to have a new idea, a new use case that isn’t being met by github right now, and which hasn’t been proposed before. That means that adding hg won’t do it – not because hg is worse than git (honestly, git’s terrible, and hg is fine), but because hg’s already been an option and people aren’t using it.
Adding email commits won’t do it, because that use case has been available for a long time (as pointed out elsewhere in these comments) and people aren’t using it.
Until something new is brought to the table, it’s all “let’s enter a dominated market with a slight improvement over the dominant tech”, and that’s just not going to be enough.
So, one thing that I would use a new contender for is being able to put my work under my own domain.
The “new thing” here is “have your personal branding on your site” (which is clearly fairly popular given how common personal domain/sites are among developers).
If I could CNAME code.daniel.heath.cc to your host to get my own github, I’d do it today (as long as any issues/wiki/PR state/etc remained usefully portable).
That’s a really neat idea. I don’t think I can prioritize it right now but it’s definitely something I would consider implementing.
I actually think that GitHub’s lack of branding and customization is a big reason for its success. When I go take a look at a new project on GitHub, I don’t have to figure out how to navigate a new site’s design, and this makes the GitHub ecosystem as a whole easier to use.
I don’t mean corporate/design branding.
I want to use my own name (and be able to move providers without breaking links).
I want to use my own name (and be able to move providers without breaking links).
But that will happen anyway, unless your new provider uses the same software as the old one.
That makes sense actually. sr.ht supporting the ability to use your own domain name (presumably a subdomain of your personal domain name for personal projects?) would make it really easy to migrate away from sr.ht in the future if you felt it was more cost-effective to host your own. Although I don’t know what the pricing model is intended to be.
You can do that with Gitlab (or Gitea if you prefer something lightweight). Only thing is you need to take care of the hosting yourself. But I’m sure there are companies offering a one-click setup, to which you can later point your own domain.
If you host your own gitlab instance, can you fork and submit patches to a project that’s hosted on gitlab.com, as easily/seamlessly as if you were hosted there?
Centralization has benefits that self-hosting can’t always provide. If there were some federation which allowed self-hosting to integrate with central and other self-hosting sites, that seems like a new and interesting feature.
Git is already federated with email - it’s specific services like GitHub which are incompatible with git’s federation model (awfully conveniently, I might add). sr.ht is going to be designed to accommodate git’s email features, both for incoming and outgoing communication, so you’ll be able to communicate easily between sr.ht instances (or sr.ht and other services like patchworks or LKML).
As I mention earlier, though, federation by email has been available for a long time and hasn’t been used (by enough people to replace github). The (vast) majority of developers (and other repo watchers) prefer a web UI to an email UI.
The gitlab, gitea, and gogs developers are working on this but it’s still very much in the discussion stage at this point. https://github.com/git-federation/gitpub/
I don’t know exactly what he was looking for, but it seemed like one of:
The latter sounds to me like it would need federation.
It’s currently awkward to run multiple domains on most OSS servers which might otherwise be suitable.
hg isn’t really an option right now, though. There’s nowhere to host it. There’s bitbucket, and it’s kind of terrible, and they keep making it worse.
If you can’t even host it, people won’t even try it.
I’m afraid you’re not going to find a sympathetic ear in sr.ht. I am deeply fond of git and deeply critical of hg.
The GitHub hegemony has nothing to do with its basis on git. If git were the product of GitHub, I might agree, but it’s not. If you really want to break the GitHub hegemony you should know well enough to throw your lot in with the winning tool rather than try to disrupt two things at once.
Perhaps some day I’ll write a blog post going into detail. The short of it is that git is more Unixy, Mercurial does extensibility the wrong way, and C is a better choice than Python (or Rust, I hear they’re working on that).
because hg‘s command-line interface was “designed”, whereas git’s command-line interface “evolved” from how it was being used.
The GitHub hegemony has nothing to do with its basis on git.
Exactly; it’s the other way around. Git got popular because of github.
Git was much worse before github made it popular. It’s bad now and difficult to use now, but it was much worse before 2008. So if you just want to get away from Github, there’s no need to stay particularly enamoured with git either.
And whatever criticisms you may have about hg, you have to also consider that it has good ideas (those DSLs above are great). Those ideas are worth spreading, and git for a long time has tried to absorb some of them and hasn’t succeeded.
Cool visualisations, although I wonder how well they’ll work without Javascript or on mobile. Kudos to them for adding ‘Heads up, you’re about to experience some scroll-driven animations. If you’d like to skip that, you can jump ahead to the final state.’
The issue itself is pretty funny. There are some pretty obvious solutions, like buying jeans with bigger pockets. I suspect the reason is relatively simple: pockets are needed less when most women carry a bag with them everywhere they go, while most men don’t.
Probably better not to have too many gender politics posts here tho.
My wife carries bags mostly because pockets on women’s clothes are ridiculous, and because your solution, while theoretically sound, fails miserably in practice if you cannot find such clothes.
This issue might be funny to you, but at this point it’s just frustrating for her and, to be honest, for me too.
An extreme example of unportable C is the book Mastering C Pointers: Tools for Programming Power, which was castigated recently. To be fair, that book has other flaws beyond merely being in a different camp, but I think that fuels some of the intensity of the passion against it.
This… rather grossly undersells how much is wrong with that book. The author didn’t understand scope, for crying out loud, and never had a grasp of how C organized memory, even in the high-level handwavy “C Abstract Machine” sense the standard is written to.
There are better examples of unportable C, such as pretty much any non-trivial C program written for MS-DOS, especially the ones which did things like manually writing to video memory to get the best graphics performance. Of course, pretty much all embedded C would fit here as well, but you’ll actually be able to get and read the source of some of those MS-DOS programs.
In so doing, the committee had to converge on a computational model that would somehow encompass all targets. This turned out to be quite difficult, because there were a lot of targets out there that would be considered strange and exotic; arithmetic is not even guaranteed to be two’s complement (the alternative is one’s complement), word sizes might not be a power of 2, and so on.
Another example would be saturation semantics for overflow, as opposed to wraparound. DSPs use saturation semantics, so going off the top end of the scale plateaus, instead of causing a weird jagged waveform.
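To make the contrast concrete, here is a small sketch simulating 16-bit wraparound versus saturating addition in Python (the function names are my own, not from any DSP toolchain):

```python
def wrap_add16(a, b):
    """Two's-complement wraparound, like an ordinary 16-bit hardware add."""
    s = (a + b) & 0xFFFF                      # keep the low 16 bits
    return s - 0x10000 if s >= 0x8000 else s  # reinterpret as signed

def sat_add16(a, b):
    """Saturating add: clamp to the int16 range instead of wrapping."""
    return max(-32768, min(32767, a + b))

# Going off the top end of the scale:
wrap_add16(30000, 10000)  # -25536: the weird jagged waveform
sat_add16(30000, 10000)   # 32767: the signal just plateaus
```

The wraparound result jumps from a large positive value to a large negative one, which is exactly the discontinuity that saturation avoids.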
As for the rest, it’s a hard problem. Selectively turning off optimization for specific functions would be useful for some codebases, but aggressive optimization isn’t the only problem here: Optimization doesn’t cause your long type to suddenly be the wrong size to hold a pointer on some machines but not others. Annotating the code with machine-checked assumptions about type size, overflow behavior, and maybe other things would allow intelligent warnings about stupid code, but… well… try to get anyone to do it.
Re “Mastering C Pointers,” that’s fair. I included it because it’s one of the things that got me thinking about the unportable camp, but I can see how its (agreed, very serious) flaws might detract from the overall argument I’m making and that there might be a better example.
Re saturating arithmetic, well, Rust has it :)
My interpretation is that the point of C is that simple C code should lead to simple assembly code. Needing to write SaturatedArithmetic::addWithSaturation(a, b) instead of just a + b in all arithmetic DSP code would be quite annoying, and would simply lead to people using another language.
You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behaviour. The only construct in C that can hide complexity is the function call, which everyone recognises. But if you see some arithmetic, you know it’s just arithmetic.
You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behavior. The only construct in C that can hide complexity is the function call, which everyone recognizes. But if you see some arithmetic, you know it’s just arithmetic.
Not to mention that not everything can be overloaded, causing inconsistencies, and some operations in mathematics have operators other than just “+-/*”. The vector dot product “·”, for example. Even if C++ (or any other language) extends to support more operators, those operators can’t be typed without key composition (“shortcuts”), making them almost unusable. vec_dot() might require more typing, but it’s reachable for everyone, and operators don’t need to have hidden meanings.
Perl does have more operators than C, but all of them are operators that can be typed using simple key composition, such as [SHIFT+something]. String concatenation for example.
My point, added to what @milesrout said, is that some operators (math operators) aren’t easy to type with just [SHIFT+something]. As a result, operator overloading in languages that offer it will always stay in an unfinished state, because it will only cover those operators that are easily composed.
Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon uk.
Herbert Schildt’s C: The Complete Reference is often cited as the worst C book ever.
Perhaps Mastering C Pointers is the worst in its niche (i.e., pointers) and Schildt’s is a more general worst?
Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon uk.
So? One of the dangers of picking the wrong textbook is thinking it’s great, and using it to evaluate subsequent works in the field, without knowing it’s shit. Per hypothesis, if it’s your first book, you don’t know enough to question it, and if you think it’s teaching you things, those are the things you’ll go on to know, even if they’re the wrong things. It’s a very pernicious bootstrap problem.
In this case, the book is objectively terrible. Other books being bad doesn’t make it better.
I do agree that Schildt’s book is also terrible.
Never heard of it, but it seems like a super interesting approach to interactive environments. I cannot help but remember this talk by Bret Victor about how we have been programming in almost-anachronistic ways, with no innovation in the interfaces.
There’s nothing obsolete about text. Visual languages don’t work. They’ve been tried hundreds of times, to no avail, because GUIs are fundamentally bad user interfaces for experienced users. Text is a better interface for power users, and programming languages are for power users.
Why can’t I re-sort the definitions in my source instead of scrolling around then? Why is it hard to see a dependency graph for all my functions? Why do I have to jump between files all the time? Text - an interface for linear presentation of information - is fundamentally a kludge for code, which is anything but linear.
Why can’t I re-sort the definitions in my source instead of scrolling around then?
Sort them by what? It wouldn’t be difficult to write a script using Python’s ast module to reorder the declarations in your file in an order you chose, which you could then use to replace the text of a buffer in your text editor. But usually I’d suggest that what you want is to see a list of definitions in a particular order, from which you could then jump to the definitions.
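For what it’s worth, the “list of definitions in a chosen order” part really is only a few lines with Python’s standard ast module (a sketch; the helper name and sample source are made up for illustration):

```python
import ast

def list_definitions(source, key=lambda d: d[0]):
    """Return (name, line) for each top-level def/class, in a chosen order."""
    tree = ast.parse(source)
    defs = [(node.name, node.lineno)
            for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef))]
    return sorted(defs, key=key)

src = "def zebra(): pass\n\nclass Ant: pass\n\ndef mouse(): pass\n"
list_definitions(src)                      # alphabetical by name
list_definitions(src, key=lambda d: d[1])  # original source order
```

An editor could feed the resulting (name, line) pairs into a jump list rather than physically reordering the text.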
In every case that I’ve seen of not using plain text, it inevitably becomes inscrutable. What is actually in my Smalltalk/Lisp image? What is actually there? What can people get out of it later when I deploy it?
Why is it hard to see a dependency graph for all my functions?
Because nobody has written something that will take your source files, determine their dependencies, and produce the DOT output (a very popular text-based format for graphs, far superior in my opinion to any binary graph description format) for that graph? It’s not like it’s particularly difficult.
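As a sketch of how little machinery this needs: the following walks a Python module with the ast module and emits DOT for the calls between its top-level functions (the function name is mine, and it only catches direct calls by name, so it’s an illustration rather than a complete tool):

```python
import ast

def call_graph_dot(source):
    """Emit DOT for the calls between top-level functions in `source`."""
    tree = ast.parse(source)
    funcs = {n.name: n for n in tree.body if isinstance(n, ast.FunctionDef)}
    lines = ["digraph calls {"]
    for name, node in funcs.items():
        for sub in ast.walk(node):
            if (isinstance(sub, ast.Call)
                    and isinstance(sub.func, ast.Name)
                    and sub.func.id in funcs):
                lines.append(f'  "{name}" -> "{sub.func.id}";')
    lines.append("}")
    return "\n".join(lines)

src = """
def helper(x): return x + 1
def main(): return helper(41)
"""
print(call_graph_dot(src))  # one edge: "main" -> "helper"
```

Pipe the output through dot -Tsvg and you have your dependency graph.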
Why do I have to jump between files all the time?
Because it turns out it’s useful to organise things into parts. Because it turns out it’s useful to be able to parallelise compilation and not reparse every bit of code you’ve ever written every time you change any part of it.
I think that it’s definitely a requirement of any decent programming language to have a way to easily take the source code of that programming language and reify it into a syntax tree, for example. That’s very useful to have in a standard library. In Lisp it’s just read, Python has more complex syntax and requires more machinery which is in a standard library module, other languages have similar things.
One point might be: maybe you don’t need a dependency graph if you can just make your code simpler; maybe you don’t need to jump around files much if your code is properly modularised (and you have a big enough screen and a narrow enough maximum line length to have multiple files open at once); maybe sorting your definitions is wrong, and what you want is a sortable list of declarations from which you can jump to the definitions.
Not to mention that version control is important and version controlling things that aren’t text is a problem with conventional version control tools. Might not be an issue, you have your own VCS, but then you enter the land of expecting new users of your language to not only not use their standard editor, but also to not use their standard VCS, not use their standard pastebin, etc. How do you pastebin a snippet of a visual language so someone on an IRC channel can see it and give you help? How do you ask questions on StackOverflow about a visual language?
It’s not even an issue of them being unusual and unsupported. By their very nature, not using text means that these languages aren’t compatible with generic tools for working with text. And never will be. That’s the thing about text, rather than having many many many binary formats and few tools, you have one binary format and many many tools.
Hey Miles, thanks for elaborating. I think we could have more interesting discussions if you give me a bit more credit and skip the trivial objections. You’re doing the same thing you did last time with C++ compilers. Yes, I know I could write a script, it’s not the point. I’m talking about interactive tools for source code analysis and manipulation, not a one-off sort.
I don’t agree with your objections about parallel compilation and parsing. It seems to me that you’re just thinking about existing tools and arguing from the status quo.
Further down, you make a suggestion which I interpret as “better languages could mitigate these issues” which is fair, but again I have to disagree because better languages always lead to more complex software which again requires better tools, so that’s a temporary solution at best.
You also raise a few objections, and here I should clarify that what I have in mind is not some kind of visual flowchart editor. What I’m claiming is that the conflation of internal representation and visual representation for code is counterproductive, but I think that a display representation that mostly looks like text is fine (as long as it’s actually within a structured editor). What I’m interested in is being able to manipulate symbols and units of code as well as aspects of its structure rather than individual characters.
Consequently, for pastebin or StackOverflow, you could just paste some text projection of the code, no problem. When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there. For example, if there was a VCS that showed me diffs that take into account the semantics of the language (eg like this: https://www.semanticmerge.com), that would be pretty cool.
For the rest of your objections, I offer this analogy: imagine that we only had ASCII pictures, and none of this incompatible JPG/PSD/PNG nonsense with few complicated tools. Then we could use generic tools for working with text to manipulate these files, and we wouldn’t be constrained in any way whether we wanted to create beautiful paintings or complex diagrams. That’s the thing about text!
I think the practitioners and particularly academics in our field should have more sense of possibilities and less affection for things the way they are.
When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there.
Existing VCS could work reasonably well if the serialisation/“text projection” was deterministic and ‘stable’, i.e. minimising the amount of spurious changes like re-ordering of definitions, etc. As a first approximation I can imagine an s-expression language arranging the top-level expressions into lexicographic order, spreading them out so each sub-expression gets its own line, normalising all unquoted whitespace, etc. This would be like a very opinionated gofmt.
If users want to preserve some layout etc., then the editor can store that as metadata in the file. I agree that semantics-aware diffing would be great though ;)
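A toy sketch of that “very opinionated gofmt” idea, assuming a minimal s-expression language of bare symbols and nested lists only (all names here are made up, and a real tool would need strings, comments, and quoting):

```python
def read_sexprs(text):
    """A tiny s-expression reader: symbols and nested lists only."""
    tokens = text.replace("(", " ( ").replace(")", " ) ").split()
    def parse(i):
        if tokens[i] == "(":
            items, i = [], i + 1
            while tokens[i] != ")":
                item, i = parse(i)
                items.append(item)
            return items, i + 1
        return tokens[i], i + 1
    forms, i = [], 0
    while i < len(tokens):
        form, i = parse(i)
        forms.append(form)
    return forms

def write_sexpr(form):
    """Print a form with exactly one space between items."""
    if isinstance(form, list):
        return "(" + " ".join(write_sexpr(f) for f in form) + ")"
    return form

def canonicalise(text):
    """Sort top-level forms lexicographically and normalise whitespace."""
    return "\n".join(sorted(write_sexpr(f) for f in read_sexprs(text)))

canonicalise("(define b 2)\n(define   a   1)")
# → "(define a 1)\n(define b 2)"
```

Since the output depends only on the parsed structure, reordering definitions or reindenting in the editor produces zero diff.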
So you always end up separating the storage format from display representation in order to create better tools, which is exactly my point.
Yes, I agree with your points. Was just remarking that some of these improvements (e.g. VCS) are easier to prototype and experiment with than others (e.g. semantics-aware queries of custom file formats).
The way I see it is that there are tools for turning text into an AST and you can use them to build the fancy things you want. My point wasn’t ‘you can write that sort as a one-off’. You can edit code written in a text-based programming language with a really fancy editor that immediately parses it to an AST and works with it as an AST, and only turns it into text when written to disk. I have no problem with that. But really you’re still editing text when using something like paredit.
Something like vim but where the text objects are ‘identifier’, ‘ast node’, ‘expression’, ‘statement’, ‘logical line of code’, ‘block’, etc. rather than ‘text between word separators’, ‘text between spaces’, ‘line’, etc. would be a useful thing. In fact, you could probably do this in vim. I have an extension I use that lets you modify quotes around things taking into account escaped quotes within, etc. That’d probably work way better if it had that default structure for normal text and then could be customised to actually take into account the proper grammar of particular programming languages for which that is supported.
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language, where you can’t use the same tools with multiple languages. And then having to reimplement the same basic functionality for every single language in separate IDEs for each, where everything works slightly differently.
I do find it useful that I can do ci( and vim will delete everything inside the nearest set of parentheses, properly taking into account nesting. So if I have (foo (hello 1 2 3) bar) and my cursor is on the a, it’ll delete everything, even though the nearest ( and ) are beside hello and not foo. That kind of thing, more structured editing? I’m all for that.
Consequently, for pastebin or StackOverflow, you could just paste some text projection of the code, no problem. When it comes to VCS, well, the current situation is quite poor, so I’d welcome better tools there. For example, if there was a VCS that showed me diffs that take into account the semantics of the language (eg like this: https://www.semanticmerge.com), that would be pretty cool.
Ultimately I think if you have a recognised standardised text projection of your code, you might as well just make that the standardised format for it, then your fancy editor or editor plugin can parse it into the structures it needs. This helps ensure you can edit code over SSH, and have a variety of editors compatible with it, rather than just the single language-designer-provided IDE.
One of the nice things about git is that it stores snapshots internally rather than diffs. So if you have a language-specific tool that can produce diffs that are better due to being informed by the grammar of the language (avoiding the problem of adding a function and the diff being ‘added a new closing brace to the previous function then writing a new function except for a closing brace’, for example), then you can do that! Change the diff algorithm.
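Git already has hooks for exactly this: per-path diff drivers configured through .gitattributes. The xfuncname and command settings below are real, documented knobs; “sexp-diff” is a hypothetical external tool standing in for a grammar-aware differ:

```shell
# Route .lisp files through a custom diff driver (the name is arbitrary)
echo '*.lisp diff=lisp' >> .gitattributes

# Built-in knob: a regex for hunk headers, so hunks are labelled with the
# enclosing top-level form instead of whatever line happens to be nearest
git config diff.lisp.xfuncname '^\(def.*'

# Heavier option: replace the diff output entirely with your own tool
# (sexp-diff is hypothetical; any external diff program works here)
git config diff.lisp.command sexp-diff
```

Because git stores snapshots, swapping the driver changes how history is displayed without rewriting any of it.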
For the rest of your objections, I offer this analogy: imagine that we only had ASCII pictures, and none of this incompatible JPG/PSD/PNG nonsense with few complicated tools. Then we could use generic tools for working with text to manipulate these files, and we wouldn’t be constrained in any way whether we wanted to create beautiful paintings or complex diagrams. That’s the thing about text!
Well I mean I do much prefer creating a graph by writing some code to emit DOT than by writing code to emit PNG. I did so just the other day in fact. http://rout.nz/nfa.svg. Thank god for graphviz, eh?
Note that there’s also for example farbfeld, and svg, for that matter: text-based formats for images. Just because it’s text underneath doesn’t mean it has to be rendered as ASCII art.
Cool, I’m glad we can agree that better tools would be good to have.
As far as the storage format, I don’t actually have a clear preference. What’s clearly needed is a separation of storage format and visual representation. If we had that, arguments about tabs vs spaces, indent size, let/in vs where, line length, private methods first or public methods first, vertical vs horizontal space (and on and on) could be nullified because everybody could arrange things however they like. Why can’t we have even such simple conveniences? And that’s just the low hanging fruit, there are far more interesting operations and ways of looking at source that could be implemented.
The other day there was a link to someone’s experiment (https://github.com/forest-lang/forest-compiler) where they use one of the text projections as the storage format. That might work, but it seems to me that the way parsing currently happens, there’s a lot of unnecessary work as whole files are constantly being reparsed because there is no structure to determine the relevant scope. It seems that controlling operations on the AST and knowing which branches are affected could be a lot more efficient. I’m sure there’s plenty of literature on this - I’ll have to look for it (and maybe I’m wrong about this).
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language, where you can’t use the same tools with multiple languages. And then having to reimplement the same basic functionality for every single language in separate IDEs for each, where everything works slightly differently.
I understand your concern, but this sounds exactly like the current state of affairs (other than really basic stuff like syntax highlighting maybe). There’s a separate language plugin (or plugins) for every combination of editor/IDE and language, and people keep rewriting all that stuff every time a new editor becomes popular, don’t they?
One of the nice things about git is that it stores snapshots internally rather than diffs.
Sure, we can glean a bit more information from a pair of snapshots, but still not much. It’s still impossible to track a combination of “rename + change definition”, or to treat changes in the order of definitions as a no-op, for example. Whereas if we were tracking changes in a more structured way (node renamed, sub-nodes modified etc.), it seems like we could say a lot more meaningful things about the evolution of the tree.
Thank god for graphviz, eh?
Perhaps the analogy was unclear. Being able to write a set of instructions to generate an image with a piece of software has nothing to do with having identical storage format and visual representation. If we approached images the same way we approach code, we would only have ASCII images as the output format, because that’s what is directly editable with text tools. Since you see the merits of PNG and SVG, you’re agreeing that there’s merit in separating internal/storage representation from the output representation.
What I’m concerned about is the idea that it’s a good idea to store code in a proprietary binary file format that’s different for every language
I might have missed something, but I didn’t see anyone proposing this.
In particular, my understanding of Luna is that the graphical and textual representations are actually isomorphic (i.e. one can be derived given the other). This means we can think of the textual representation as being both a traditional text-based programming language and a “file format” for serialising the graphical programming language.
Likewise we can switch to a text view, use grep/sed/etc. as much as we like, then switch back to a graphical view if we want (assuming that the resulting text is syntactically valid).
Tools that improve navigation within textual source have existed for a long time. I’ve been using cscope to bounce around in C and Javascript source bases for as long as I can remember. The more static structure a language has, the easier it is to build these tools without ambiguity. The text source part isn’t really an issue – indeed it enables ad hoc tooling experiments to be built with existing text management tools; e.g., grep.
Those tools aren’t text, though. They’re other things that augment the experience over just using text, which becomes an incidental form of storage. Tools might also use ASTs, objects, data flows, constraints, and so on. They might use anything from direct representation to templates to synthesis.
I think the parent’s point was that text by itself is far more limited than that. Each thing I mentioned is available in some programming environment, with an advantage over text-driven development.
I think it’s wrong to say that the text storage is incidental. Line-oriented text files are about the lowest common denominator way we have to store data like this.
For starters, it’s effectively human-readable – you can lift the hood up and look at what’s underneath, understanding the effect that each individual character has on the result. Any more complicated structure, as would be generally required to have a more machine-first structured approach to program storage, is not going to have that property; at least not to the same extent.
If this thread demonstrates anything, it’s that we all have (at times, starkly!) different preferences for software engineering tools. Falling back on a textual representation allows us to avoid the need to seek consensus on a standard set of tools – I can use the editor and code manipulation tools that make sense to me, and you can stick to what makes sense to you. I think a lot of the UNIX philosophy posturing ends up being revisionist bunk, but the idea that text is a pretty universal interface for data interchange isn’t completely without merit.
The under-the-hood representation is binary-structured electricity that gets turned into human-readable text by parsing and display code. If we’re already parsing it and writing display code, we might just as well use a different encoding or structure. Text certainly has advantages as one encoding among many to have available. Plugins or input modules can take care of any conversions.
Text does often have tooling advantages in systems like UNIX built with it in mind, though.
I think it’s a reductionist argument for the good-enough, hard earned status quo. I think it can be valid, but only within a very narrow perspective - operational and short term.
To my mind, your position is equivalent to this: we should only have ASCII images, and we don’t need any of that PNG/JPG/PSD stuff with complicated specialised tools. Instead, we can use generic text tools to make CAD drawings, diagrams, paintings - whatever. All of those things can be perfectly represented in ASCII, and the text tools will not limit us in any way!
I want to search my code like a database, e.g. “show my where this identifier is used as a parameter to a function” - the tooling for text doesn’t support this. Structured tooling would be super useful.
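For what it’s worth, a query like that is not hard to express once you have the tree. Here’s a minimal Python sketch using the stdlib ast module (needs Python 3.9+ for ast.unparse; the function and variable names are made up for illustration, this is not a real tool):

```python
# Sketch of the structured query above: "show me where this identifier
# is used as an argument to a function".
import ast

def calls_with_argument(source: str, identifier: str):
    """Yield (callee, line) for each call passing `identifier`
    as a positional argument."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for arg in node.args:
                if isinstance(arg, ast.Name) and arg.id == identifier:
                    yield ast.unparse(node.func), node.lineno

src = "x = 1\nprint(x)\nfoo(y, x)\n"
print(list(calls_with_argument(src, "x")))   # [('print', 2), ('foo', 3)]
```

Regex can’t answer this reliably (it can’t tell an argument from a shadowed name in a string), which is exactly the point about structured tooling.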
Many things can be “queried” with grep and regular expressions, which is also great for finding “similar occurrences” that need to be checked but are only related by some operators and function calls following one another. On the other hand, I’d definitely argue that IDEs at least keep a tiny representation of the current source file for navigation, and let you click some token and find its uses, definitions, implementations… But it only works if I disable low power mode. And with my 8 GB MacBook I sometimes have to kill the IDE before running the program to make sure I can still use both at the same time.
Maybe if it wasn’t parsing and re-parsing massive amounts of text all the time, it would be more energy efficient…
Exactly. And it could extend beyond search; code could be manipulated and organised in more powerful ways. We still have rudimentary support for refactoring in most IDEs, and so we keep going through files and manually making structurally similar changes one by one, for no reason other than the inadequate underlying representation used for code.
I could be wrong and maybe this is impossible to implement in any kind of general way beyond the few specific examples I’ve thought of, but I find it strange that most people dismiss the very possibility of anything better despite the fact that it’s obviously difficult and inconvenient to work with textual source code.
The version of cscope that I use does things of that nature. The list of queries it supports:
Find this C symbol:
Find this global definition:
Find functions called by this function:
Find functions calling this function:
Find this text string:
Change this text string:
Find this egrep pattern:
Find this file:
Find files #including this file:
Find assignments to this symbol:
I use Find functions calling this function a lot, as well as Find assignments to this symbol. You could conceivably add more query types, and I’m certain there are other tools, less suited to my admittedly terminal-heavy aesthetic, that offer more flexible code search and analysis.
The base structure of the software being textual doesn’t get in the way of this at all.
Software isn’t textual. We read the text into structures. Our tools should make these structures easier to work with. We need data structures other than text as the common format.
Can I take cscope’s output and filter down to “arguments where the identifiers are of even length”?
Compilers and interpreters use structured representations because those representations are more practical for the purposes of compiling and interpreting. It’s not a given that structured data is the most practical form for authoring. It might be. But what the compiler/interpreter does is not evidence of that.
I would also be interested in your thoughts on Lisp, where the code is already structured data. This is an interesting property of Lisp but it does not seem to make it clearly easier to use.
but it does not seem to make it clearly easier to use.
Sure it does: it makes macros easier to write than in a language not designed like that. Once macros are easy, you can extend the language to more easily express yourself. This is seen in the DSLs of Common Lisp, Rebol, and Racket. I also always mention sklogic’s tool, since he builds DSLs for everything, with a Lisp underneath for when they don’t work.
Sure, but all of these tools (including IDEs) are complicated to implement, error-prone, and extremely underpowered. cscope is just a glorified grep unless I’m missing something (I haven’t used it, just looked it up). The fact that you bring it up as a good example attests to the fact that we’re still stuck somewhere near mid-twentieth century in terms of programming UI.
I bring it up as a good example because I use it all the time to great effect while working on large scale software projects. It is relatively simple to understand what it does, it’s been relatively reliable in my experience, and it helps a lot in understanding the code I work on. I’ve also tried exuberant ctags on occasion, and it’s been pretty neat as well.
I don’t feel stuck at all. In fact, I feel wary of people attempting to invalidate positive real world experiences with assertions that merely because something has been around for a long time that it’s not still a useful way to work.
Have you noticed that the Luna language has a dual representation, where each visual program has an immediate and easily editable text representation, and the same is true in the other direction as well? This is intended to keep the benefits of the text interface while adding the benefits of a visual representation. That’s actually the main idea behind Luna.
What about the power users who use things like Excel or Salesforce? These are GUIs perfectly tailored to specific tasks. A DJ working with a sound board certainly wouldn’t want a textual interface.
Textual interfaces are bad, but they are generic and easy to write. It’s a lot harder to make an intuitive GUI, let alone one that works on something as complex as a programming language. Idk if Luna is worthwhile, but text isn’t the best user interface possible imho
DJs use physical interfaces, and the GUIs emulation of those physical interfaces are basically all terrible.
I’ve never heard of anyone liking Salesforce, I think that must be Stockholm Syndrome. Excel’s primary problem in my opinion is that it has essentially no way of seeing how data is flowing around. If something had the kind of ‘reactive’ nature of Excel while being text-based I’d much prefer that.
Textual interfaces are excellent. While there are tasks that benefit from a GUI - image editing for example - in most cases GUIs are a nicer way of representing things to a new user but are bad for power users. I wouldn’t expect first year computer science students to use vim, as it’s not beginner-friendly, but it’s by far the best text editor out there in the hands of an experienced user.
I wouldn’t expect first year computer science students to use vim, as it’s not beginner-friendly, but it’s by far the best text editor out there in the hands of an experienced user.
I’d call myself an “experienced user” of vim. I’ve written extensions, given workshops, and even written a language autoindent plugin, which anyone who’s done it knows is like shoving nails through your eyeballs. About once a year I get fed up with the limitations of text-only programming and try to find a good visual IDE, only to switch back when I can’t find any. Just because vim is the best we currently have doesn’t mean it’s actually any good. We deserve better.
(For the record, vim isn’t beginner-unfriendly because it’s text only. It’s beginner-unfriendly because its UI is terrible and inconsistent and the features are all undiscoverable.)
Most people don’t bother to learn vimscript properly, treating it much like people treated Javascript for years: a bunch of disparate bits they’ve picked up over time, with no unifying core. But once you actually learn it, it becomes much easier to use and more consistent. The difference between expressions and commands becomes sensible instead of seeming like an inconsistency.
I never get fed up with the limitations of text-only programming, because I don’t think they exist. Could you elaborate on what you are saying those limitations are?
And I totally, 100% disagree with any claim that vim’s UI is bad or inconsistent. On the contrary, it’s extremely consistent. It’s not a bunch of little individual inconsistent commands, it’s motions and text objects and such. It has extensive and well-written help. Compared to any other IDE I’ve used (a lot), it’s way more consistent. Every time I use a Mac program I’m surprised at how ad-hoc the random combinations of letters for shortcuts are. And everything requires modifier keys, which are written with ridiculous indecipherable symbols instead of ‘Ctrl’ ‘Shift’ ‘Alt’ etc. Given that Mac is generally considered to be very easy to use, I don’t think typical general consensus on ease of use is very instructive.
Bret Victor explains the persistence of textual languages as resistance to change, drawing an equivalence between users of textual languages now and assembly programmers who scoffed at the first higher-level programming languages. But this thread is evidence that at least some people are interested in using a language that isn’t text-based. Not everyone is fairly characterized by Bret Victor’s generalization. So then why hasn’t that alternative emerged? There are plenty of niche languages that address a minority preference with reasonable rates of adoption. With the exception of HyperCard, I can’t think of a viable graphical programming language. Even Realtalk, the language that runs Dynamicland (Bret Victor’s current focus), is text-based, being a superset of Lua. I keep hearing about how text-based languages are old-fashioned and should die out, but I never hear anything insightful about why this hasn’t happened naturally. I’m not denying that there are opportunities for big innovation, but “make a visual programming language” seems like an increasingly naive or simplistic approach.
I think it has to do with the malleability of text. There’s a basic set of symbols and one way to arrange them (sequentially). Almost any problem can be encoded that way. Emacs’ excellent org-mode is a testament to the virtue of malleability.
Excel also has that characteristic. Many, many kind of problems can be encoded in rectangles of text with formulas. (Though I might note that having more ways to arrange things allows new kinds of errors, as evidenced by the growing cluster of Excel features for tracing dependencies & finding errors.)
Graphical languages are way less malleable. The language creator decides what elements, relations, and constraints are allowed. None of them let me redefine what a rectangle represents, or what relations are allowed between them. I think that’s why these languages can be great at solving one class of problem, but a different class of problem seems to require a totally different graphical language.
My suspicion is that it’s because graphical languages merge functionality and aesthetics, meaning you have to think very, VERY hard about UI/UX and graphic design. You need to be doing that from the start to have a hope of it working out.
Picked this up on Early Access and it’s lots of fun. Personally the learning curve seems less steep compared to other Zachtronics games, so if you got frustrated early with the others, or haven’t tried any before, this is a good entry point. Presentation-wise it’s top-notch: the pixel art is crisp, the music is great as always, and the varied mission types keep the gameplay fresher than in previous Zachtronics releases.
The puzzling is compelling enough to keep me playing, but the plot is unfortunately pretty underwhelming on this one so far. Hopefully that’s something that’s still being worked on, or that it turns a corner towards the end of the game.
Let’s be honest these games are not about plot, they’re programming games. I really enjoyed TIS-100, didn’t like SpaceChem or Opus Magnum at all though. This one looks good: more like the former than the latter - actually involving writing code.
Sure, nobody plays puzzle games for the plot, but the designers put one in, and IMHO it feels half-baked enough to detract from the overall experience. The [rot13]Bar unpx, bar qeht qbfr[/rot13] thing is set up as if it’s a core mechanic but is never really mentioned after the beginning, characters float in and out, and nothing I’m doing (hacking, battling, conversation choices in cutscenes, etc) feels like it has any tension or consequence beyond [rot13]nzhfvat RZORE-2 naq cbffvoyl pnhfvat fbzr yvarf bs pung va na VEP punaary fbzrjurer[/rot13], leaving me feeling weirdly isolated in the world. (That feeling of disquieting solitude played to TIS-100’s strengths, given where that game goes, but I don’t think it works here.)
edit: to be clear the game is good and everyone should buy this game
Good talk.
I recently used systemd “in anger” for the first time on a raspi device to orchestrate several scripts and services, and I was pleasantly surprised (but also not surprised, because the FUD crowd is becoming more and more fingerprintable to me). systemd gives me lifecycle, logging, error handling, and structure, declaratively. It turns out structure and constraints are really useful, this is also why go has fast dependency resolution.
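For readers who haven’t seen it, the declarative style being praised here looks roughly like this. A hypothetical unit file (the service name and script path are made up):

```ini
# sensor-poll.service -- hypothetical example unit.
# Lifecycle, error handling and ordering are declared, not scripted;
# stdout/stderr go to the journal by default, giving logging for free.
[Unit]
Description=Poll attached sensors
After=network-online.target

[Service]
ExecStart=/usr/local/bin/poll-sensors.sh
# Error handling: restart the script whenever it exits with failure
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Compare that to a sysvinit script doing the equivalent start/stop/restart/log plumbing by hand.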
It violates the Unix philosophy.
That accusation was also made against neovim. The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about.
The declarative units are definitely a plus. No question.
I was anti-systemd when it started gaining popularity, because of the approach (basically kitchen-sinking a lot of *NIX stuff into a single project) and the way the project leader(s) respond to criticism.
I’ve used it since it was default in Debian, and the technical benefits are very measurable.
That doesn’t mean the complaints against it are irrelevant though - it does break the Unix philosophy I think most people are referring to:
Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new “features”.
If you believe composability (one program’s output is another program’s input) is an important part of The Unix Philosophy, then ls violates it all day long, always has, likely always will. ls also violates it by providing multiple ways to sort its output, when sort is right there, already doing that job. Arguably, ls formatting its output is a violation of Do One Thing, because awk and printf exist, all ready to turn neat columns into human-friendly text. My point is, The Unix Philosophy isn’t set in stone, and never has been.
Didn’t ls predate the Unix Philosophy? There’s a lot of cruft in unix’s history. dd is another example.
None of that invalidates the philosophy that arose through an extended design exploration and process.
nobody said it’s set in stone; it’s a set of principles to be applied based on practicality. like any design principle, it can be applied beyond usefulness. some remarks:
i don’t see where ls violates composability. the -l format was specifically designed to be easy to grep.
People have written web pages on why parsing the output of ls is a bad idea. Using ls -l doesn’t solve any of these problems.
As a matter of fact, the coreutils people have this to say about parsing the output of ls:
However ls is really a tool for direct consumption by a human, and in that case further processing is less useful. For further processing, find(1) is more suited.
Moving on…
the sorting options are an example of practicality. they don’t require a lot of code, and would be much more clumsy to implement as a script (specifically when you don’t output the fields you’re sorting on)
This cuts closer to the point of what we’re saying, but here I also have to defend my half-baked design for a True Unix-y ls Program: It would always output all the data, one line per file, with filenames quoted and otherwise prepared such that they always stick to one column of one line, with things like tab characters replaced by \t and newline characters replaced by \n and so on. Therefore, the formatting and sorting programs always have all the information.
But, as I said, always piping the output of my ls into some other script would be clumsier; it would ultimately result in some “human-friendly ls” which has multiple possible pipelines prepared for you, selectable with command-line options, so the end result looks a lot like modern ls.
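The decomposed design described above is simple enough to sketch. A hedged Python illustration (the field choice and names are my own, not part of the original proposal):

```python
# Sketch of the "decomposed ls" idea from the comment above: always emit
# one tab-separated record per file, with control characters in names
# escaped, so sort/awk/etc. downstream always see one line per file.
import os
import sys

def escape(name: str) -> str:
    """Keep each filename on a single line and in a single column."""
    return (name.replace("\\", "\\\\")
                .replace("\t", "\\t")
                .replace("\n", "\\n"))

def list_dir(path="."):
    for entry in sorted(os.scandir(path), key=lambda e: e.name):
        st = entry.stat(follow_symlinks=False)
        # size, mtime, escaped name: all the data, every time
        yield f"{st.st_size}\t{int(st.st_mtime)}\t{escape(entry.name)}"

if __name__ == "__main__":
    for record in list_dir(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(record)
```

Sorting by size then becomes a job for sort(1) on field 1, and formatting is left to awk or column, exactly as the comment suggests.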
about formatting, i assume you’re referring to columniation, which to my knowledge was not in any version of ls released by Bell Labs. checking whether stdout is a terminal is indeed an ugly violation.
I agree that ls shouldn’t check for a tty, but I’m not entirely convinced no program should.
just because some people discourage composing ls with other programs doesn’t mean it’s not the unix way. some people value the unix philosophy and some don’t, and it’s not surprising that those who write GNU software and maintain wikis for GNU software are in the latter camp.
your proposal for a decomposed ls sounds more unixy in some ways. but there are still practical reasons not to do it, such as performance and not cluttering the standard command lexicon with ls variants (plan 9 has ls and lc; maybe adding lt, lr, lu, etc. would be too many names just for listing files). it’s a subtle point in unix philosophy to know when departing from one principle is better for the overall simplicity of the system.
With all due respect[1], did your own comment hit your fingerprint detector? Because it should. It’s extrapolating wildly from one personal anecdote[2], and insulting a broad category of people without showing any actual examples[3]. Calling people “markov chains” is fun in the instant you write it, but contributes to the general sludge of ad hominem dehumanization. All your upvoters should be ashamed.
[1] SystemD arouses strong passions, and I don’t want this thread to devolve. I’m pointing out that you’re starting it off on the wrong foot. But I’m done here and won’t be responding to any more name-calling.
[2] Because God knows, there’s tons of badly designed software out there that has given people great experiences in the short term. Design usually matters in the long term. Using something for the first time is unlikely to tell you anything beyond that somebody peephole-optimized the UX. UX is certainly important, rare and useful in its own right. But it’s a distinct activity.
[3] I’d particularly appreciate a link to NeoVim criticism for being anti-Unix. Were they similarly criticizing Vim?
[3] I’d particularly appreciate a link to NeoVim criticism for being anti-Unix. Were they similarly criticizing Vim?
Yes, when VIM incorporated a terminal, which is explicitly against its design goals. From VIM 7.4’s :help design-not:
VIM IS... NOT *design-not*
- Vim is not a shell or an Operating System. You will not be able to run a
shell inside Vim or use it to control a debugger. This should work the
other way around: Use Vim as a component from a shell or in an IDE.
A satirical way to say this: "Unlike Emacs, Vim does not attempt to include
everything but the kitchen sink, but some people say that you can clean one
with it. ;-)"
Neo-VIM appears to acknowledge their departure from VIM’s initial design as their :help design-not has been trimmed and only reads:
NVIM IS... NOT design-not
Nvim is not an operating system; instead it should be composed with other
tools or hosted as a component. Marvim once said: "Unlike Emacs, Nvim does not
include the kitchen sink... but it's good for plumbing."
Now, as a primarily-Emacs user, I see nothing wrong with not following the UNIX philosophy, but it is clear that NeoVIM has pushed away from that direction. And because that move went against the initial design, it is reasonable for users who liked the initial design to criticize NeoVIM for moving further away from the UNIX philosophy.
Not that VIM hadn’t already become something more than ‘just edit text’, take quickfix for example. A better example of how an editor can solve the same problem by adhering to the Unix Philosophy of composition through text processing would be Acme. Check out Acme’s alternative to quickfix https://youtu.be/dP1xVpMPn8M?t=551
akkartik, which part of my comment did you identify with? :) FWIW, I’m fond of you personally.
I’d particularly appreciate a link to NeoVim criticism for being anti-Unix
Every single Hacker News thread about Neovim.
Were they similarly criticizing Vim?
Not until I point this out, and the response is hemming and hawing.
To be fair I don’t think the hacker news hive mind is a good judge of anything besides what is currently flavour of the week.
Just yesterday I had a comment not just downvoted but flagged and hidden-by-default, because I suggested Electron is a worse option than a web app.
HN is basically twitter on Opposite Day: far too happy to remove any idea even vaguely outside what the group considers “acceptable”.
Indeed, I appreciate your comments as well in general. I wasn’t personally insulted, FWIW. But this is precisely the sort of thing I’m talking about, the assumption that someone pushing back must have their identity wrapped up in the subject. Does our community a disservice.
OTOH, I spent way too much of my life taking the FUD seriously. The mantra-parroting drive-by comments that are common in much of the anti-systemd and anti-foo threads should be pushed back. Not given a thoughtful audience.
https://news.ycombinator.com/item?id=7289935
The old Unix ways are dying… … Vim is, in the spirit of Unix, a single purpose tool: it edits text.
https://news.ycombinator.com/item?id=10412860
thinks that anything that is too old clearly has some damage and its no longer good technology, like the neovim crowd
Also just search for “vim unix philosophy” you’ll invariably find tons of imaginary nonsense:
Please don’t make me search /r/vim :D
thinks that anything that is too old clearly has some damage and its no longer good technology, like the neovim crowd
That’s not saying that neovim is ‘anti-Unix philosophy’, it’s saying that neovim is an example of a general pattern of people rewriting and redesigning old things that work perfectly well on the basis that there must be something wrong with anything that’s old.
Which is indeed a general pattern.
That’s not saying that neovim is ‘anti-Unix philosophy’
It’s an example of (unfounded) fear, uncertainty, and doubt.
rewriting and redesigning old things that work perfectly well on the basis that there must be something wrong with anything that’s old.
That’s a problem that exists, but attaching it to project X out of habit, without justification, is the pattern I’m complaining about. In Neovim’s case it’s completely unfounded and doesn’t even make sense.
It’s not unfounded. It’s pretty obvious that many of the people advocating neovim are doing so precisely because they think ‘new’ and ‘modern’ are things that precisely measure the quality of software. They’re the same people that change which Javascript framework they’re using every 6 weeks. They’re not a stereotype, they’re actual human beings that actually hold these views.
Partial rewrite is one of the fastest ways to hand off software maintainership, though. And vim needed broader maintainer / developer community.
Vim’s maintainer/developer community is more than sufficient. It’s a highly extensible text editor. Virtually anything can be done with plugins. You don’t need core editor changes very often if at all, especially now that the async stuff is in there.
You don’t need core editor changes very often if at all, especially now that the async stuff is in there.
Which required pressure from NeoVim, if I understood the situation correctly. Vim is basically a one-man show.
Thanks :) My attitude is to skip past crap drive-by comments as beneath notice (or linking). But I interpreted you to be saying FUD (about SystemD) that you ended up taking seriously? Any of those would be interesting to see if you happen to have them handy, but no worries if not.
Glad to have you back in the pro-Neovim (which is not necessarily anti-Vim) camp!
What is FUD is this sort of comment: the classic combination of comparing systemd to the worst possible alternative instead of the best actual alternative with basically claiming everyone that disagrees with you is a ‘slashdot markov chain’ or similar idiotic crap.
On the first point, there are lots of alternatives to sysvinit that aren’t systemd. Lots and lots and lots. Some of them are crap, some are great. systemd doesn’t have a right to be compared only to what it replaced, but also all the other things that could have replaced sysvinit.
On the second point, it’s just bloody rude. But it also shows you don’t really understand what people are saying. ‘I think [xyz] violates the unix philosophy’ is not meaningless. People aren’t saying it for fun. They’re saying it because they think it’s true, and that it’s a bad thing. If you don’t have a good argument for the Unix philosophy not mattering, or you think systemd doesn’t actually violate it, please go ahead and explain that. But I’ve never actually seen either of those arguments. The response to ‘it violates the Unix philosophy’ is always just ‘shut up slashdotter’. Same kind of comment you get when you say anything that goes against the proggit/hn hivemind that has now decided, amongst other things, that: microsoft is amazing, google is horrible, MIT-style licenses are perfect, GPL-style licenses are the devil incarnate, statically typed languages are perfect, dynamically typed languages are evil, wayland is wonderful, x11 is terrible, etc.
claiming everyone that disagrees with you is a ‘slashdot markov chain’ or similar idiotic crap
My claim is about the thoughtless shoveling of groundless rumors. Also I don’t think my quip was idiotic.
there are lots of alternatives to sysvinit that aren’t systemd
That’s fine, I never disparaged alternatives. I said: systemd is good and I’m annoyed that the grumblers said it wasn’t.
It’s not good though, for all the reasons that have been said. ‘Better than what you had before’ and ‘good’ aren’t the same thing.
seriously. If you don’t like systemd, use something else and promote its benefits. Tired of all the talking down of systemd. It made my life so much easier.
seriously. If you like systemd, use it and shut up about it. Tired of all the talking up of systemd as if it’s actually any better than its alternatives, when it is objectively worse, and is poorly managed by nasty people.
Have you watched the video this thread is about? Because you really sound like the kind of dogmatist the presenter is talking about.
If you like systemd, use it and shut up about it
Also, isn’t this a double-standard, since when it comes to complaining about systemd, this attitude doesn’t seem that prevalent.
No, because no other tool threatens the ecosystem like systemd does.
Analogy: it wasn’t a double-standard 10 years ago to complain about Windows and say ‘if you like Windows, use it and shut up about it’.
I see this kind of vague criticism when it comes to systemd a lot. What ecosystem is it really breaking? It’s all still open source; there aren’t any proprietary protocols or corporate patents that prevent people from modifying the software to not have to rely on systemd. This “threat”, the way I see it, has turned out to be at most a “minor inconvenience”.
I suppose you’re thinking about examples like GNOME, but on the one hand, GNOME isn’t a unix-dogmatist project; they aim instead to create an integrated desktop experience, consciously trading ideal modularity for that – and on the other, projects like OpenBSD have managed to strip out what required systemd and have a working desktop environment. Most other examples of which I know follow a similar pattern.
I think that the problem is fanboyism, echo chambers and ideologies.
I might be wrong, so please don’t consider this an accusation. But your writing this sounds like someone hearing that systemd is bad, therefore never looking at it, yet copying that opinion. Then one tries it and finds out that the prejudices were in fact baseless.
After that the assumption is that everyone else must have been doing the same and one is enlightened now to see it’s actually really cool.
I think that this group behavior and blindly copying opinions is one of the worst things in IT these days, even though of course it’s not limited to this field.
A lot of people criticizing systemd actually looked at systemd, really deep, maybe even built stuff on it, or at least worked with it in production as sysadmin/devop/sre/…
Yes, I have used systemd; yes, I understand why decisions were taken and where the authors of the software were going; I have read the specs of the various parts (journald, for example), etc.
I think I have a pretty good understanding compared to at least most people that only saw it from a users perspective (considering writing unit files to be users perspective as well).
So I could write about that in my CV and be happy that I can answer a lot of questions regarding systemd, advocate its usage to create more demand and be happy.
To sum it up: I still consider systemd to be bad on multiple levels, both the implementation and some ideas that I considered great but, through using it, found to rest on wrong assumptions. That, by the way, is the one thing I would not blame anyone for. It’s good that stuff gets tried; that’s how research works. It’s not the first and won’t be the last project that sounds good coming out, only for us to find that a lot of it either doesn’t make a difference or makes things worse.
I am a critic of systemd, but I agree that there’s a lot of FUD as well. Especially when there are people who blame everything, including their own incompetence, on systemd. Nobody should ever expect a new project to be a magic bullet. That’s just dumb, and I would never blame systemd for trying a different approach or for not being perfect. However, I think it has problems on many levels. While I think the implementation isn’t really good, that’s something that can be fixed. However, I think some parts of the concept are either pretty bad or have turned out to be bad decisions.
I was very aware that, especially in the beginning, the implementation was bad. A lot got better; that’s to be expected. However, next to various design decisions I consider bad, I think many more were based on ideas that sound good and reasonable to most people in IT but that, in the specific scenarios systemd is used in, at least in my experience do not work out at all, or only work well in very basic cases.
In other words, systemd really shines in the cases where other solutions work suboptimally but aren’t considered a problem worth fixing because the added complexity isn’t worth it. However, when something is more complex, I think using systemd frequently turns out to be an even worse solution.
While I don’t want to go into detail, because I don’t think this is the right format for an actual analysis, I think systemd has a lot in common here with both configuration management and JavaScript frameworks. They tend to be amazing for use cases that are simple (todo applications, for example), but together with various other complexities they often make stuff unnecessarily complicated.
And just like with JavaScript frameworks and configuration management, there’s a lot of FUD, ideologies, echo chambers, following the opinion of some thought leaders, and very little building of your own solid opinion.
Long story short. If you criticize something without knowing what it is about then yes that’s dumb and likely FUD. However assuming that’s the only possible reason for someone criticizing software is similarly dumb and often FUD regarding this opinion.
This by the way also works the reverse. I frequently see people liking software and echoing favorable statements for the same reasons. Not understanding what they say, just copying sentences of opinion leaders, etc.
It’s the same pattern, just the reversal, positive instead of negative.
The problem isn’t someone disliking or liking something, but that opinions and thoughts are repeated without understanding, which makes it hard to have discussions and arguments that give either side any valuable insights or learnings.
Then things also get personal. People hate on Poettering and think he is dumb, and Poettering thinks every critic is dumb, just because that’s a lot of what you see when every statement is blindly echoed.
That’s nice, but the implication of the anti-systemd chorus was that sys v init was good enough. Not all of these other “reasonable objections” that people are breathless to mention.
The timbre reminded me of people who say autotools is preferable to cmake. People making a lot of noise about irrelevant details and ignoring the net gain.
But you writing this sounds like someone hearing that systemd is bad, therefore never looking at it, yet copying it.
No, I’m reacting to the idea that the systemd controversy took up any space in my mind at all. It’s good software. It doesn’t matter if X or Y is technically better, the popular narrative was that systemd is a negative thing, a net-loss.
In your opinion it’s good software and you summed up the “anti-systemd camp” with “sys v init was good enough” even though people from said “anti-systemd camp” on this very thread disagreed that that was their point.
To give you an entirely different point of view, I’m surprised you don’t want to know anything about a key piece of flagship server operating systems (granting that one distro is technically an OS), one affecting the entire ecosystem and unrelated OSes (the BSDs etc.) and majorly affecting administration and development on Linux-based systems. Especially when people have said there are clear technical reasons for disliking the major change and the forced compliance with “the new way”.
you summed up the “anti-systemd camp” with “sys v init was good enough” even though people from said “anti-systemd camp” on this very thread disagreed that that was their point.
Even in this very thread no one has actually named a preferred alternative. I suspect they don’t want to be dragged into a discussion of details :)
affecting the entire ecosystem and unrelated OSes (BSDs etc.)
BSDs would be a great forum for demonstrating the alternatives to systemd.
Well, considering how many features that suite of software has picked up, there isn’t currently one so that shortens the conversation :)
launchd is sort of a UNIX alternative too, but it’s currently running only on macOS and it recently went closed source.
It violates the unix philosophy
That accusation was also made against neovim. The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about.
i don’t follow your reasoning. why is it relevant that people also think neovim violates the unix philosophy? are you saying that neovim conforms to the unix philosophy, and therefore people who say it doesn’t must not know what they’re talking about?
are you saying that neovim conforms to the unix philosophy, and therefore people who say it doesn’t must not know what they’re talking about?
When the implication is that Vim better aligns with the unix philosophy, yes, anyone who avers that doesn’t know what they’re talking about. “Unix philosophy” was never a goal of Vim (”:help design-not” was strongly worded to that effect until last year, but it was never true anyways) and shows a deep lack of familiarity with Vim’s features.
Some people likewise speak of a mythical “Vim way” which again means basically nothing. But that’s a different topic.
vim does have fewer features which can be handled by other tools though right? not that vim is particularly unixy, but we’re talking degrees
The people muttering this stuff are slashdot markov chains, they don’t have any idea what they’re talking about
I’ll bookmark this comment just for this description.
Sometimes I read something like this and think ‘well yeah, obviously nobody is actually saying to take any advice they give to the most extreme possible point, use your judgement’. But then I remember all the code I’ve read (and this seems most common in Ruby for some reason) where people have literally factored out every single function until they’re almost all exactly 1 line long. And the code where they have written functions with four boolean arguments, used in half a dozen places with two combinations of boolean parameters. And the code that’s been hacked and hacked and hacked and hacked together to form a 5000-line shell script when they could have achieved the same result with a few hours and 200 lines of Python or something.
The traditional UNIX command line is a showcase of small components that do exactly one function, and it can be a challenge to discover which one you need and in which way to hold it to get the job done. Piping things into awk ‘{print $2}’ is almost a rite of passage.
I find this an interesting example if only because I think the Unix command line is a good example of how to do it right, because even if you don’t remember the command to use you can always just emulate most of the other commands with awk. And the general style leads to some really lovely software like gvpr, which I discovered yesterday.
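The point about emulating most of the other commands with awk can be sketched in a couple of lines (a toy illustration, not a recommendation to replace the real tools):

```shell
# awk standing in for other small Unix tools:
printf 'a b\nc d\n' | awk '{print $2}'   # like: cut -d' ' -f2
printf '1\n2\n3\n'  | awk 'NR<=2'        # like: head -n 2
printf '1\n2\n3\n'  | awk 'END{print NR}'  # like: wc -l (prints 3)
```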
Sometimes I read something like this and think ‘well yeah, obviously nobody is actually saying to take any advice they give to the most extreme possible point, use your judgement’.
In other words, this means you’re not the audience: this is really aimed at those still building the intuitions.
As you explain, the problem is that we don’t often show good judgement. It’s only after knowing the consequences that we tend to take action. I’ve often been asked by beginners how and when and where to apply things. The problem is, it’s contextual, and I was hoping to try and give that context.
Rather than examining things through re-use, I wanted them to think about coupling: to think of modules not as collecting like features but as keeping things apart, and about the whole ‘rewrites mean migrations’ thing too.
I find this an interesting example if only because I think the Unix command line is a good example of how to do it right
Yes, and no. I mean, I thought the UNIX philosophy was a good idea until I realised how much git demonstrates it. Using flat files, small commands bolted together, fast C parts tied together with bash. It even has the unix thing where each file format or pipe output ends up being a unique mini language inside the program, too. It’s still awful to use.
It’s a good way to build an environment but, well, every command takes slightly different arguments, and things like autocomplete don’t come from inspection or understanding the protocol, and we’re still emulating vt100 terminals. There are good ideas but UNIX demonstrates their discovery more than their application.
On the other hand, plan9 demonstrates them quite well, and some of the problems too. It’s still not exactly pleasant to use, although wonderfully extensible. Plan9 leverages a consistent interface in more ways than UNIX did, exposing every service as a filesystem.
The notion of a uniform interface is also seen in HTTP, and for what it’s worth, how clients on plan9 move from one file to another is very reminiscent of following hypertext in a browser. There are good ideas in UNIX, but there are better examples of them.
Awk isn’t one of them. I mean, Awk’s great, but it is one of the things, like tcl, bash, and perl, that marked the end of ‘do one thing and do it well’; they were glue languages that grew features. Even bash 4 has associative arrays now.
UNIX has grep and egrep and ripgrep and at least three distinct types of regular expressions in common use. UNIX has a thousand different command line formats and application directory layouts. UNIX gave us autoconf.
I mean UNIX is great and all but we kept hacking shit on
In other words, this means you’re not the audience: this is really aimed at those still building the intuitions.
What I meant is that my first reaction is ‘pointless article’, but that reaction is wrong! I think the article is good and necessary. More like it are necessary.
Yes, and no. I mean, I thought the UNIX philosophy was a good idea until I realised how much git demonstrates it. Using flat files, small commands bolted together, fast C parts tied together with bash. It even has the unix thing where each file format or pipe output ends up being a unique mini language inside the program, too. It’s still awful to use.
What? Git is not awful to use, it’s fantastic for all those reasons you just gave. You can dig into the internals of it without having to read any C. You pipe together those files into different formats yourself using a combination of standard utilities and git-x-y-z plumbing commands. What’s awful about that?
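As a small sketch of that kind of digging (the throwaway repo and input string are just for illustration; the hash shown is what `git hash-object` is documented to produce for this input):

```shell
# Poke at git's object store directly with plumbing commands, in a throwaway repo:
cd "$(mktemp -d)" && git init -q .
echo 'test content' | git hash-object -w --stdin
# -> d670460b4b4aece5915caf5c68d12f560a9fe3e4
git cat-file -t d670460b4b4aece5915caf5c68d12f560a9fe3e4   # -> blob
git cat-file -p d670460b4b4aece5915caf5c68d12f560a9fe3e4   # -> test content
```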
I have a much harder time ever getting anything to work in Mercurial, to be honest. Every time I try to use Mercurial it’s just the same as git except some of the commands have slightly more sensible names, everything is incredibly sluggish and lots of features just don’t exist or only exist if you turn on a million extensions.
And then once you have those extensions enabled, it’s just as confusing and inconsistent as git. Go look at the… is it called queues? Something like that, I’ve forgotten. It’s necessary to get a lot of what comes in git by default, and it’s way overcomplicated.
It’s a good way to build an environment but, well, every command takes slightly different arguments, and things like autocomplete don’t come from inspection or understanding the protocol, and we’re still emulating vt100 terminals. There are good ideas but UNIX demonstrates their discovery more than their application.
Of course different commands take different arguments, they do different things and have different purposes. Why would they all be the same? There’s nothing stopping you going and writing a patch for scp that lets it take -R to mean -r, something I always mistype the first time being used to other commands. I doubt they’d reject the patch.
Everything accepts --help and man pages exist.
The state of terminals is a rather different question. It’s just one of those things where it’s a bit of a local maximum. Trying to move to something that isn’t VT100 terminal emulation would require an enormous amount of effort for a relatively small benefit. Emulating VT100 terminals doesn’t really hurt except for a few little things like ctrl-i and tab being the same thing, but in some scenarios that’s what you want, some people want to be able to tab-complete with ctrl-i. But it really has nothing to do with the Unix philosophy anyway.
Autocomplete, well, you could define a format for --usage that is machine-parseable and defines the format for commands. Whenever you do x -o [tab] it calls MACHINE_READABLE_USAGE_OUTPUT=1 x --usage and then parses that result to see that -o is followed by a file, etc. etc. etc. Any other protocol you like. Maybe man pages could have an additional USAGE section with a machine-readable grammar for their usage. Getting shells to all agree on one particular way of doing things is the issue, not the ability to do something like that within the Unix command line model.
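That hypothetical protocol could be sketched in a few lines of shell. Everything here is made up for illustration: the `flag:argtype` spec format, the function standing in for a tool’s `--usage` output, and the function names.

```shell
# Hypothetical machine-readable usage protocol, faked with a function that
# stands in for `x --usage`. The "flag:argtype" format is invented for this sketch.
usage_spec() {
  printf '%s\n' '-o:file' '-v:none' '--format:enum(json,text)'
}

# Suggest completions for whatever flag prefix the user has typed so far:
complete_flag() {
  usage_spec | cut -d: -f1 | grep -- "^$1"
}

complete_flag '--'   # prints: --format
complete_flag '-'    # prints all three flags
```

A real shell would call the tool itself instead of `usage_spec`, then use the argtype to decide whether to complete a file name, an enum value, and so on.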
The whole idea of commands in a command line is arguably what it means to have ‘the unix command line’, given that they can be piped together and that they input and output text.
On the other hand, plan9 demonstrates them quite well, and some of the problems too. It’s still not exactly pleasant to use, although wonderfully extensible. Plan9 leverages a consistent interface in more ways than UNIX did, exposing every service as a filesystem.
I really don’t think that ‘everything is a file and every service is a filesystem’ is the right way to view the Unix philosophy. Plan9 doesn’t feel like the ultimate culmination of Unix to me. It feels like… I don’t want to be rude about it, I don’t mean this in a rude way, but it feels like a caricature of the Unix philosophy.
The Unix philosophy is implementing things in a standardised and accessible way so that you can use a general suite of tools to handle different things. It doesn’t have to be text; it’s just that it should be text if it can reasonably be text. ffmpeg still feels like a Unix command to me.
The thing that feels least-Unixy to me is audio on my system. Audio should definitely be done differently from how it is. I feel like I have almost no control over it. I want to be able to say ‘take the audio from here and put it into here then merge those audio streams and copy this one to this output then with the new copied output mix the channels to mono’ etc. etc. And not using some arcane GUI.
Awk isn’t one of them. I mean, Awk’s great, but it is one of the things, like tcl, bash, and perl, that marked the end of ‘do one thing and do it well’; they were glue languages that grew features. Even bash 4 has associative arrays now.
There’s a rule I have that in any system there will always be something complicated. It’s kind of broad, but look at any categorisation, any set of rules, any set of tools, there will always be a ‘misc’. It might be quite hidden or it might be just simply labelled ‘miscellaneous’. In any set of tools there’s always a tool that you use when all the other tools won’t work in all those random little situations that the others don’t fit. In any categorisation of anything, there’ll always be a few objects being categorised that just don’t fit into your neat hierarchy and need to be put into ‘other’.
Unix command line is no different. You have all the little useful tools and then you have awk because sometimes you just have to do something complicated. I mean that’s the reality, right? Sometimes you have to do something complicated.
UNIX has grep and egrep and ripgrep and at least three distinct types of regular expressions in common use. UNIX has a thousand different command line formats and application directory layouts.
‘There should be one — and preferably only one — obvious way to do it’ is the Python motto, not the Unix philosophy.
Unix has grep and egrep and ripgrep, sure. grep is the traditional Unix tool, egrep is an alias for grep -E using extended regular expressions. I assume these are even less actually-regular than grep’s regular regular expressions and thus slower. ripgrep is a modern reimplementation of grep in Rust that (as far as I know) only supports true regular expressions and is very fast as a result.
A better comparison would be between Perl-style and POSIX-style regular expressions, but these are actually really completely different things. You might even get away with arguing that one is really imperative and the other is really declarative. They both have good reasons to exist, there are definitely reasons to prefer either, they coexist and I think that’s a good thing.
There are many different command line formats? Not sure what that really means. Virtually everything today uses - before short commands, allows short commands to be written like -xcvf instead of -x -c -v -f, and supports --long-arguments. Yeah, there are a few older commands like ps that support lots of formats in one command, but that’s just backwards compatibility. The only systems that don’t have a few ugly corners for backwards compatibility are new ones that nobody has used enough yet. The only way to avoid them is to just throw out everything more than a year or two old. Please don’t turn Unix into front end web development.
Application directory layouts? No idea what that means sorry.
UNIX gave us autoconf.
autoconf is to many other build systems as GPL is to BSD licenses. Is it a pain for developers? Yeah, absolutely. But it’s not designed to be easy for developers. It’s designed so that you can give a tarball to a user and they just type ./configure [possibly some arguments]; make; make install. Just as GPL is designed to be friendly for end-users, while BSD is designed to be friendly for developers, autoconf is designed to be essentially invisible to end users. I don’t have to install cmake and deal with CMakeLists.txt and other annoying crap when I just want to run ./configure && make && sudo make install.
And remember autoconf was not designed for people to compile a simple bit of C software onto one or two Linux distributions as it’s often used today, but to work around the inconsistencies and incompatibilities of dozens of different Unix operating systems. Today it’s unnecessary more than it’s bad. You really need about a 15 line Makefile to make all but the most complex C programmes, which you write once and never touch again. In those 15 lines you can quite easily and readably scan the included headers of each file to generate the dependencies between compilation units and handle all that stuff very easily.
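A sketch of such a Makefile, assuming a GCC/Clang-style compiler where `-MMD -MP` emits header dependencies alongside each object file (the program name and flags are illustrative):

```make
# Minimal Makefile: let the compiler generate header dependencies via -MMD/-MP.
CFLAGS += -Wall -O2 -MMD -MP
SRCS   := $(wildcard *.c)
OBJS   := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) $(LDFLAGS) -o $@ $^

# Pull in the .d files the compiler wrote alongside each .o, if present.
-include $(OBJS:.o=.d)

clean:
	rm -f prog $(OBJS) $(OBJS:.o=.d)
```

The built-in `%.o: %.c` rule does the compiling; the `-include` line is what makes edits to headers trigger the right rebuilds.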
Most of the problems people have with autoconf come from copy-pasting existing configurations and blindly hacking at them with absolutely no understanding of what is actually going on whatsoever. There are configuration switches in some programmes that haven’t been relevant since before the person who wrote them was born.
There is so so much to unpack here but frankly it seems like a pointless conversation
What? Git is not awful to use,
The command line tool that has markov chained manual pages as satire. The command line tool where the primary interface is stack overflow.
I’m really not sure we’ve used the same tool.
Plan9 doesn’t feel like the ultimate culmination of Unix to me.
Tell that to the UNIX authors, who wrote it.
egrep is an alias for grep -E using extended regular expressions
Other way around, buddy. GNU grep built stuff in. Unix, the system with a command line program called [
Also, I think you’re confusing the GNU userland with unix; the GNU tools went against most of the unix design ideas at the time.
The command line tool that has markov chained manual pages as satire.
Recipe for creating something funny: take something written in jargon, produce markov chain. “haha if u dnt understand the jargon its just lyk da real thing!!!111one”.
If you know algebraic geometry, the markov chain based on algebraic geometry papers is clearly a combination of complete nonsense, nothing like a real algebraic geometry paper. If you don’t know algebraic geometry, it’s probably indistinguishable from an algebraic geometry paper.
The markov chained git manual pages are funny. But they’re also clearly distinguishable from real git manual pages, which are actually very useful for using git. Your lack of sufficient intellect to understand a bit of jargon in order to use a leaky abstraction (as all abstractions are) doesn’t make the tool bad, nor does it make its interface bad.
Tell that to the UNIX authors, who wrote it.
Dennis Ritchie had nothing to do with Plan9 afaik.
Other way around, buddy. GNU grep built stuff in. Unix, the system with a command line program called [
What’s wrong with a command line program called [? Are you trying to make a point?
Aha Javascript, a language where {} + [] isn’t a type error, aren’t I so witty and observant that I can make such relevant criticisms xD. God you’re an idiot.
https://www.youtube.com/watch?v=21bUrFEX4jI
Conor talks a fair amount about this in this video too.
“Against the average user, anything works; there’s no need for complex security software. Against the skilled attacker, on the other hand, nothing works.”
I want to note that this is actually wrong. There are all kinds of devices I can’t access or understand to the degree I want. This is especially true if something is implemented, totally or partly, in silicon instead of software. Other times it requires esoteric knowledge of stuff like RF. These secrets stay secret for long periods despite billions of dollars of volume, with hackers being in possession of the devices. They effectively solved the trusted-client problem for those secrets with the one technique that works: defeating it costs piles of money. Money to tear the chips down, money on rare specialists, money on common specialists, and lots of time.
So, it’s true if you say “against a skilled attacker with necessary time and money.” In many cases, they might not have the time and money. Quick example: FBI paid Cellebrite something like $100-200 grand to crack that iPhone. So, if iPhone implements DRM, your data is secure on them if your enemy can’t afford or won’t spend $100-200 grand to get it. Then, they downgrade to having to use cameras aimed at the screen, retype documents by hand, try to get exploits into apps, try to con people, etc. For media, both quality expectations and laziness can ensure the “cam” copies have minimal impact on sales. So the DRM works in that case against main audience even with smart hackers in possession of the device.
Sounds pretty true to me? Even Apple with their untold billions can’t keep the iPhone secure enough that a couple of hundred grand can’t crack it.
Apple doesn’t care about security. Their Macs were a decade behind others in security features at one point. The iPhones have a few features that help. Neither those nor the OS are implemented in a rigorous way. You could say they just added a few things with average implementation effort. And that half-assed job on a few things takes several hundred grand a year to beat.
Now, let’s say they invested in medium-to-high-assurance additions to the CPU, security co-processor, and OS. They have the money to attempt it. I’ve seen startups and CompSci folks on small budgets build each item. Apple might knock out whole categories of risk with a few million to tens of millions spent one time. They had tens of billions. They didn’t do it. So, they don’t care. That simple. Their stuff will sell anyway, too.
To be fair, they should also mark as “Not Secure” any page running JavaScript.
Also, pointless HTTPS adoption might reduce content accessibility without blocking censorship.
(Disclaimer: this does not mean that you shouldn’t adopt HTTPS for sensible contents! It just means that using HTTPS should not be a matter of fashion: there are serious trade-offs to consider)
By adopting HTTPS you basically ensure that nasty ISPs and CDNs can’t insert garbage into your webpages.
[Comment removed by author]
Technically, you authorize them (you sign actual paperwork) to get/generate a certificate on your behalf (at least this is my experience with Akamai). You don’t upload your own ssl private key to them.
Because it’s part of The Process. (Technical Dark Patterns, Opt-In without a clear way to Opt-Out, etc.)
Because you’ll be laughed at if you don’t. (Social expectations, “received wisdom”, etc.)
Because Do It Now. Do It Now. Do It Now. (Nagging emails. Nagging pings on social media. Nagging.)
Lastly, of course, are Terms Of Service, different from the above by at least being above-board.
No.
It protects against cheap man-in-the-middle attacks (like the one an ISP could do), but it can do nothing against CDNs that can identify you, as CDNs serve you JavaScript over HTTPS.
With Subresource Integrity (SRI) page authors can protect against CDNed resources changing out from beneath them.
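For what it’s worth, generating such an integrity hash is a one-liner; the file name and CDN URL below are made up for illustration:

```shell
# Compute an SRI hash for a script you want to pin (file name is illustrative):
printf 'console.log("hi")' > lib.js
hash=$(openssl dgst -sha384 -binary lib.js | openssl base64 -A)
echo "integrity=\"sha384-$hash\""

# The page would then reference it roughly as:
# <script src="https://cdn.example.com/lib.js"
#         integrity="sha384-..." crossorigin="anonymous"></script>
```

If the CDN serves bytes that don’t match the hash, the browser refuses to execute them.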
Yes, SRI mitigates some of the JavaScript attacks that I describe in the article, in particular the nasty ones from CDNs exploiting your trust in a harmless-looking website.
Unfortunately several others remain possible (just think of JSONP, or even simpler, if the website itself colludes in the attack). Also, it needs widespread adoption to become a security feature: it should probably be mandatory, but for sure browsers should mark as “Not Secure” any page downloading programs from CDNs without it.
Where SRI could really help is with the accessibility issues described by Meyer: you can serve most page resources as cacheable HTTP resources if the content hash is declared in an HTTPS page!
With SRI you can block the CDNs you use to load JS scripts externally from manipulating the webpage.
I also don’t buy the claim that it reduces content accessibility; the link you provided above explains a problem that would be solved by simply using an HTTPS caching proxy (something a lot of corporate networks seem to have no problem operating, considering TLS 1.3 explicitly tries not to break those middleboxes).
As much as I respect Meyer, his point is moot. MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. Some companies have even made out-of-the-box HTTPS URL filtering their selling point. If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’. We should be ready to teach those in need how to set one up, of course, but that’s about it.
MitM HTTPS proxy servers have been set up for a long time, though usually for far more objectionable purposes than content caching. […] If people are ready or forced to trade security for accessibility but don’t know how to set up an HTTPS MitM proxy, that’s their problem, not webmasters’.
Well… how can I say this… I don’t think so.
Selling an HTTPS MitM proxy as a security solution is plain incompetence.
Beyond the obvious risk that the proxy is compromised (you should never assume that it won’t be), which is pretty high in some places (not only in Africa… don’t be naive, a chain is only as strong as its weakest link), a transparent HTTPS proxy has an obvious UI issue: people do not realise that it’s unsafe.
If browsers don’t mark them as “Not Secure” (and how could they?), the user will overlook the MitM risks, turning a security feature against the users’ real security and safety.
Is this something webmasters should care about? I think so.
Selling an HTTPS MitM proxy as a security solution is plain incompetence.
Not sure how to tell you this, but companies have been doing this on their internal networks for a very long time, and it is basically standard operating procedure at every enterprise-level network I’ve seen. They create their own CA, generate an intermediate CA key and certificate, and then put that on an HTTPS MITM transparent proxy that inspects all traffic going in and out of the network. The intermediate cert is added to the certificate store on all devices issued to employees so that it is trusted. By inspecting all of the traffic, they can monitor for external and internal threats, scan for exfiltration of trade secrets and proprietary data, and keep employees from watching porn at work. There is an entire industry around products that do this; BlueCoat and Barracuda are two popular examples.
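The first step of that setup can be sketched with openssl. The subject name and lifetime here are illustrative, and a real deployment would add things like name constraints, an intermediate CA, and careful key custody:

```shell
# Create a self-signed root CA key and certificate (illustrative subject/lifetime):
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ca.key -out ca.crt -days 365 \
  -subj "/CN=Example Corp Internal CA"

# Inspect what was generated:
openssl x509 -in ca.crt -noout -subject
```

`ca.crt` is what gets pushed into the trust store of every managed device; the proxy then mints per-site leaf certificates signed by that key on the fly.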
There is an entire industry around products that do this
There is an entire industry around ransomware. But this does not mean it’s a security solution.
It is; it’s just that the word security is better understood as “who” is getting secured (or not) from “whom”.
What you keep saying is that a MitM proxy does not protect the security of end users (that is, employees). What it does, however, in certain contexts like the one described above, is help protect the organisation in which those end users operate. Arguably it does, because it certainly makes it more difficult to protect yourself from something you cannot see. If employees are seen as a potential threat (they are), then reducing their security can help you (the organisation) with yours.
I wonder if you did read the articles I linked…
The point is that, in a context of unreliable connectivity, HTTPS dramatically reduces accessibility but doesn’t help against censorship.
In this context, we need to grant to people accessibility and security.
An obvious solution is to give them cacheable HTTP access to contents. We can fool the clients into trusting a MitM caching proxy, but since all we want is caching, this is not the best solution: it adds no security, just a false sense of security. Thus, in that context, you can improve users’ security by removing HTTPS.
I have read it, but more importantly, I worked in and built services for places like that for about 5 years (Uganda, Bolivia, Tajikistan, rural India…).
I am with you that an HTTPS proxy is generally best avoided, if for no other reason than that it grows the attack surface. I disagree that removing HTTPS increases security. It adds a lot more places and actors who can now negatively impact the user, in exchange for him knowing this without being able to do much about it.
And that is even without going into which content is safe to be cached in a given environment.
And that is even without going into which content is safe to be cached in a given environment.
Yes, this is the best objection I’ve read so far.
As always it’s a matter of tradeoff. In a previous related thread I described how I would try to fix the issue in a way that people can easily opt-out and opt-in.
But while I think it would be weird to remove HTTPS for an e-commerce cart or a political forum, I think most of Wikipedia should be served over both HTTP and HTTPS. People should be aware that HTTP pages are not secure (even though it all depends on your threat model…) but should not be misled into thinking that pages going through a MitM proxy are secure.
An HTTPS proxy isn’t incompetence; it’s industry standard.
They solve a number of problems and are basically standard in almost all corporate networks with a minimum security level. They aren’t a weak link in the chain, since traffic in front of the proxy is HTTPS and behind it is in the local network and encrypted by a network-level CA (you can restrict CA capabilities via TLS cert extensions; there is a fair number of useful ones that prevent compromise).
Browsers don’t mark these as insecure because installing and using an HTTPS proxy requires full admin access to a device, and at that level there is no reason to consider what the user is doing insecure.
Browsers bypass the network configuration to protect the users’ privacy.
(I agree this is stupid, but they are trying to push this anyway)
The point is: the user’s security is at risk whenever she sees something that is not secure presented as HTTPS (which stands for “HTTP Secure”). It’s a rather simple and verifiable fact.
It’s true that posing a threat to employees’ security is an industry standard. But it’s not a security solution. At least, not for the employees.
And, doing that in a school or a public library is dangerous and plain stupid.
Nobody is posing a threat to employees’ security here, a corporation can in this case be regarded as a single entity so terminating SSL at the borders of the entity similar to how a browser terminates SSL by showing the website on a screen is fairly valid.
Schools and public libraries usually have the internet filtered, yes; that is usually made clear to the user before using it (at least when I wanted access to either, I was in both cases instructed that the network is supervised and filtered), which IMO negates the potential security compromise.
Browsers bypass the network configuration to protect the users’ privacy.
Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.
Schools and public libraries usually have the internet filtered, yes; that is usually made clear to the user before using it [..] which IMO negates the potential security compromise.
Yes this is true.
If people are kept constantly aware of the presence of a transparent HTTPS proxy/MitM, I have no objection to its use instead of an HTTP proxy for caching purposes. Marking all pages as “Not Secure” is a good way to gain such awareness.
Browsers don’t bypass root CA configuration, core system configuration or network routing information as well as network proxy configuration to protect a user’s privacy.
Did you know about Firefox’s DoH/CloudFlare affair?
Yes I’m aware of the “affair”. To my knowledge the initial DoH experiment was localized and run on users who had enabled studies (opt-in). In both the experiment and now Mozilla has a contract with CloudFlare to protect the user privacy during queries when DoH is enabled (which to my knowledge it isn’t by default). In fact, the problem ungleich is blogging about isn’t even slated for standard release yet, to my knowledge.
It’s plain old wrong in the bad kind of way; it conflates security maximalism with Mozilla’s mission to bring users the maximum amount of privacy and security.
TBH, I don’t know what you mean with “security maximalism”.
I think ungleich raises serious concerns that should be taken into account before shipping DoH to the masses.
Mozilla has a contract with CloudFlare to protect the user privacy
It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.
AFAIK, even Facebook had a contract with its users.
Yeah.. I know… they will “do no evil”…
Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.
It’s a bit naive for Mozilla to base the security and safety of millions of people worldwide on a contract with a company, however good they are.
Cloudflare hasn’t done much that makes me believe they will violate my privacy. They’re not in the business of selling data to advertisers.
AFAIK, even Facebook had a contract with its users
Facebook used Dark Patterns to get users to willingly agree to terms they would otherwise never agree on, I don’t think this is comparable. Facebook likely never violated the contract terms with their users that way.
Security maximalism disregards more common threat models and usability problems in favor of more security. I don’t believe the concerns are really concerns for the common user.
You should define “common user”.
If you mean the politically inept who are happy to be easily manipulated as long as they are given something to say and retweet… yes, they have nothing to fear.
The problem is for those people who are actually useful to the society.
Cloudflare hasn’t done much that makes me believe they will violate my privacy.
The problem with Cloudflare is not what they did, it’s what they could do.
There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.
But my concerns are with Mozilla.
They are trusted by millions of people worldwide. Me included. But actually, I’m starting to think they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?
Just because you think they aren’t useful to society (and they are, these people have all the important jobs, someone isn’t useless because they can’t use a computer) doesn’t mean we, as software engineers, should abandon them.
There’s no reason to give such power to a single company, located near all the other companies that are currently centralizing the Internet already.
Then don’t use it? DoH isn’t going to be enabled by default in the near future and any UI plans for now make it opt-in and configurable. The “Cloudflare is default” is strictly for tests and users that opt into this.
they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
You mean safe because everyone involved knows what’s happening?
I don’t believe the concerns are really concerns for the common user.
You should define “common user”.
If you mean the politically inept who are happy to be easily manipulated… So in your opinion, the average user does not deserve the protection of being able to browse the net as safe as we can make it for them?
I’m not sure if you are serious or you are pretending to not understand to cope with your lack of arguments.
Let’s assume the first… for now.
I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept. That’s obviously because anyone politically inept is unlikely to be affected by surveillance.
That’s it.
they are much more like a MitM caching HTTPS proxy: trusted by users as safe, while totally unsafe.
You mean safe because everyone involved knows what’s happening?
Really?
Are you sure everyone understands what a MitM attack is?
Are you sure every employee understands that their system administrators can see the mail they read on GMail? I think you don’t have much experience with users, and I hope you don’t design user interfaces.
A MitM caching HTTPS proxy is not safe. It can be useful for corporate surveillance, but it’s not safe for users. And it extends the attack surface, both for the users and the company.
As for Mozilla: as I said, I’m just not sure whether they deserve trust or not.
I hope they do! Really! But it’s really too naive to think that a contract is enough to bind a company more than a subpoena. And they ship WebAssembly. And you have to edit about:config to disable JavaScript…
All this is very suspect for a company that claims to care about users’ privacy!
I’m saying the concerns raised by ungleich are serious and could affect any person who is not politically inept.
I’m saying the concerns raised by ungleich are too extreme and should be dismissed on grounds of being not practical in the real world.
Are you sure everyone understands what a MitM attack is?
An attack requires an adversary, the evil one. An HTTPS caching proxy isn’t evil or the enemy; you have to opt into this behaviour. It is not an attack, and I think it’s not fair to characterise it as such.
Are you sure every employee understands that their system administrators can see the mail they read on GMail?
Yes. When I signed my work contract this was specifically pointed out and made clear in writing. I see no problem with that.
And it extends the attack surface, both for the users and the company.
And it also enables caching for users with less than stellar bandwidth (think third world countries where satellite internet is common, 500ms ping, 80% packet loss, 1mbps… you want caching for the entire network, even with HTTPS)
And they ship WebAssembly.
And? I have no concerns about WebAssembly. It’s not worse than obfuscated JavaScript. It doesn’t enable anything that wasn’t possible before via asm.js. The post you linked is another security-maximalist opinion piece with few factual arguments.
And you have to edit about:config to disable JavaScript…
Or install a half-way competent script blocker like uMatrix.
All this is very suspect for a company that claims to care about users’ privacy!
I think it’s understandable for a company that both cares about users privacy and doesn’t want a marketshare of “only security maximalists”, also known as, 0%.
An attack requires an adversary, the evil one.
According to this argument, you don’t need HTTPS as long as you don’t have an enemy.
It shows very well your understanding of security.
The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.
I have on concerns about WebAssembly.
Not a surprise.
Evidently you never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).
Trust one who has done both: obfuscated JavaScript is annoying; understanding what an optimized binary is doing is hard.
As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.
According to this argument, you don’t need HTTPS until you don’t have an enemy.
If there is no adversary, no Mallory in the connection, there is no reason to encrypt it either, correct.
It shows very well your understanding of security.
My understanding in security is based on threat models. A threat model includes who you trust, who you want to talk to and who you don’t trust. It includes how much money you want to spend, how much your attacker can spend and the methods available to both of you.
There is no binary security, a threat model is the entry point and your protection mechanisms should match your threat model as best as possible or exceed it, but there is no reason to exert effort beyond your threat model.
The attackers described in a threat model are potential enemies. Your security depends on how well you avoid or counter potential attacks.
Mallory is a potential enemy. An HTTPS caching proxy operated by a corporation is not an enemy. It’s not Mallory; it’s Bob, Alice and Eve, where Bob wants to send Alice a message, she works for Eve, and Eve wants to avoid having duplicate messages on the network, so Eve and Alice agree that caching the encrypted connection is worthwhile.
Mallory sits between Eve and Bob, not between Bob and Alice.
Evidently you never had to debug either obfuscated JavaScript or an optimized binary (without sources or debug symbols).
I did, in which case I either filed a Github issue if the project was open source or I notified the company that offered the javascript or optimized binary. Usually the bug is then fixed.
It’s not my duty or problem to debug web applications that I don’t develop.
Trust one who did both: obfuscated javascript is annoying, understanding what an optimized binary is doing is hard.
Then don’t do it? Nobody is forcing you.
As for packet loss and caching, you didn’t read what I wrote, and I won’t feed you more.
I don’t think you consider that a practical problem such as bad connections can outweigh a lot of potential security issues; you don’t have the time or user patience to do it properly, and in most cases it’ll be good enough for the average user.
My point is that the problems of unencrypted HTTP and MitM’ed HTTPS are exactly the same. If one used to prefer the former because it can be easily cached, I can’t see how setting up the latter makes their security issues worse.
With HTTP you know it’s not secure. OTOH you might not be aware that your HTTPS connection to the server is not secure at all.
The lack of awareness makes MitM caching worse.
I really don’t think tabs in makefiles are a big deal. vim has expandtab off in Makefiles. I assume any other decent editor does so too.
This sort of smallholder, yeoman-farmer artisanal website is just like organic food. It’s cheap to buy industrially-produced food. It promotes large food companies and reduces prices for consumers, but has incredible negative externalities - concentration of power, tasteless food, bad health, low nutrition. It’s cheap to start a Facebook page, a Medium site, a Wordpress site, or a Twitter account. Easy for the user, easy access to others on the same platform, immense profits for the ones that control the platforms. The market has eaten away at the commons that could have been the internet, just like it has turned millions of acres into corn fields for feeding chickens in factory farms. We should start thinking of cyberspace as space, space that has been exploited and colonized just like real land has been.
I admire those pushing for a simpler, healthier web. But unless we change our economic system, nothing major will change.
It’s not at all difficult to create a website like this. Far easier, in fact, than making a Wordpress site or Medium site. Why do people not? Partially I think it’s that people expect it to be very difficult, so they don’t try. But I think it’s mostly because there’s no advertisement and promotion of it! Nobody makes money from you putting a few static pages on github pages or a similar completely free host.
People encounter Facebook pages and Wordpress blogs every day, so that’s the first thing they think of when they need to make some sort of website.
Oh come on! I don’t think it’s easy to learn HTML/CSS, find a hosting provider, upload your files, configure the web server, buy a domain, set up domain records.
But I do agree with you that this process is much harder than it needs to be and it’s not easy to discover. And there’s not a reason why some money can’t be made in this process - so making it sustainable. Create a site that hooks up to a bunch of registrars and a bunch of hosts. Push-button deployment. Charge an extra few dollars on top. The value-add is centralized billing, less hassle, support.
I don’t think it’s easy to learn HTML/CSS
I learnt basic HTML when I was about 7. Here’s some HTML: <html><head><title>My personal website</title></head><body><h1>My Website</h1><p>Hi this is my website hope you like it.</p></body></html>. That’s all you need to make a ‘personal website’.
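For what it’s worth, that one line really is a deployable site. A quick sketch (the file name index.html is assumed only because most static hosts serve it by default):

```python
# Write the minimal page above to disk; any static host
# (GitHub Pages, a library server, ...) can then serve it as-is.
from pathlib import Path

page = """<!DOCTYPE html>
<html>
  <head><title>My personal website</title></head>
  <body>
    <h1>My Website</h1>
    <p>Hi this is my website hope you like it.</p>
  </body>
</html>
"""
Path("index.html").write_text(page)

# Preview locally before uploading anywhere:
#   python3 -m http.server 8000   then open http://localhost:8000
```

No build step, no framework; the whole “deployment” is copying one file somewhere a web server can see it.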
But I do agree with you that this process is much harder than it needs to be and it’s not easy to discover. And there’s not a reason why some money can’t be made in this process - so making it sustainable. Create a site that hooks up to a bunch of registrars and a bunch of hosts. Push-button deployment. Charge an extra few dollars on top. The value-add is centralized billing, less hassle, support.
The thing is, there’s no real reason why you should have to pay for very basic web hosting. Facebook pages and Youtube profiles and reddit profiles and such cost way more to provide than basic web hosting, but they’re free and basic web hosting is dollars every month. Any cost at all is a big barrier to entry.
Yes, basic HTML is not too hard. Perhaps it would be good to have local libraries providing easy/free hosting. It really can’t cost much to throw up a Geocities-esque webpage on the Internet.
Worth noting that s-expressions avoid a lot of legibility problems discussed in the article. If we look at the first example under the “providing immediate feedback” section where traditional notation looks like:
50.04 + 34.57 + 43.22 / 3
this would be expressed as:
(+ 50.04 34.57 (/ 43.22 3))
which would be hard to confuse with:
(/ (+ 50.04 34.57 43.22) 3)
A lot of people seem to have the impression that s-expressions are harder to read than traditional syntax, but I find the opposite to be the case. With s-expressions you have simple and predictable rules that remove a lot of mental overhead around figuring out what the code is doing.
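Those “simple and predictable rules” are small enough to write down in full. A minimal sketch of an s-expression calculator in Python (the names and structure here are mine, purely for illustration):

```python
# A tiny s-expression calculator: tokenize, build the tree, evaluate.
# The uniform (op arg1 arg2 ...) shape means precedence never comes up.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        node = []
        while tokens[0] != ")":
            node.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return node
    if tok in OPS:
        return tok
    return float(tok)

def evaluate(node):
    if isinstance(node, list):
        op, *args = node
        vals = [evaluate(a) for a in args]
        result = vals[0]
        for v in vals[1:]:          # fold the operator left to right
            result = OPS[op](result, v)
        return result
    return node  # a number

def calc(text):
    return evaluate(parse(tokenize(text)))
```

Feeding it the two examples above, calc("(+ 50.04 34.57 (/ 43.22 3))") and calc("(/ (+ 50.04 34.57 43.22) 3)") give different results, and the grouping that explains why is visible right there in the text.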
Similarly just having the same precedence and associativity for everything would give you an easy-to-predict and easy-to-read syntax. This way you gain terseness, but you have to get used to the associativity of whatever mechanism you’re using, whereas s-expressions (or *shudder* XML, etc) are more portable, but require you to explicitly state the tree with more characters.
For example, right associative:
50.04 + 34.57 + 43.22 / 3
And for the sum of everything over three, it would be:
(50.04 + 34.57 + 43.22) / 3
This is the style that APL/J/K and various languages inspired by them tend to use (they also add different precedence for certain operations that take another operation as one of their inputs, such as fold). Many people use such languages as an enhanced calculator (there are plotting utilities made for them, etc). For example, in K, where division is % and assignment is ::
force: (6.67e-11*mymass*collidingmass)%radius*radius
yearlybill: 12*rent+electric+internet
Or with functions, where / is fold:
force:{[m1;m2;radius](6.67e-11*m1*m2)%radius*radius}
yearlybill:{[monthlyutilities]12*+/monthlyutilities}
Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.
I don’t know why people have such a problem with a + b + c / 3 meaning a + b + (c / 3). It’s just something you have to get used to; it’s not really that difficult, and there are much bigger problems that need solving. But if it’s really such a big deal, just make it a function: \frac{a + b + c}{3} in LaTeX is good enough for mathematicians, so frac(a + b + c, 3) should be good enough for programmers.
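As a sketch of that last suggestion (the function name frac just mirrors LaTeX’s \frac, and the numbers are borrowed from the earlier calculator example):

```python
def frac(numerator, denominator):
    """Division with the grouping written into the call, like \\frac{...}{...}."""
    return numerator / denominator

# The arguments are grouped by the call itself, so there are no
# precedence rules to remember: sum first, then divide.
average = frac(50.04 + 34.57 + 43.22, 3)
```

The call syntax makes the grouping explicit without any parenthesized expression tricks.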
Then you get the situation that 1 * 2 + 3 and 3 + 1 * 2 mean different things, which is horrible, because people will always assume that they don’t.
I don’t know why people have such a problem with 1 * 2 + 3 and 3 + 1 * 2 meaning different things. It’s just something you have to get used to when using a different language, it’s not really that difficult and there are much bigger problems that need solving.
The universal rules of mathematical expressions create a strong precedent. People expect them to hold. They get confused when they don’t. Even if they are arbitrary.
I’m not aware of any language anywhere in all of programming or mathematics that uses different rules and has sustained any kind of popularity. Seems like a hard requirement to ever be successful in my experience.
They aren’t “universal”. See my other comment. “Sustained any kind of popularity” is a vacuous statement; Forth is used extensively in embedded applications. Your calculator uses left-to-right operator precedence, and yet you don’t struggle to translate from PEMDAS or whatever system you use.
Funny because every mathematician I’ve talked to, and listened to about order ambiguity agrees with me and says you should put parentheses to disambiguate.
The reality is that because it is cultural, it does not matter if you have a solution to the problem if not everyone is using it. In my opinion, abandoning order of operations is much simpler; the current order is arbitrary, needlessly convoluted, and does not allow for the expansion of operators. You can make things abundantly clear by using Polish notation.
- / 2x 3y 1
Before you throw your arms up in frustration yes there are proofs done in this format, and they’re great.
I suppose, since it is universal, that there are severe pedagogical deficiencies, which doesn’t surprise me terribly. Still, this would have been completely avoided with a simpler and clearer precedence system. It took me a while to realize that you were talking strictly about mathematicians whereas I was talking about all people. Apologies for my poor communication.
“Order of operations” have been an arbitrary curse on mathematics since their creation, different cultures don’t actually agree, in addition it restricts the creation of new operators. I’m not particularly invested in left to right or right to left, but either would be much simpler than the random format we have now.
Cultures that don’t use ÷ and × often don’t write sentences left-to-right and pages top-to-bottom. They might not even use arabic numerals.
I don’t see how it restricts the creation of new operators. Mathematicians seem to have no problem introducing new operators: ∧, ∨, →, ↔, dots, existing operators in circles and all sorts of silly new operators are used all over algebra without any real issue. If it’s not obvious from context, you put brackets in.
What order of precedence does modulus have? Is it the same as division, or should it be done first, or last? If we had an order of precedence that can accommodate new operators, this question wouldn’t need to be asked and I wouldn’t have to use parentheses, which, let’s be honest, are a hack.
Modulus isn’t a standard mathematical operator. But if you defined it, you could just say what its precedence is.
Same thing. Brackets = parentheses; multiplication and division are done at the same time, and so their order is whatever sounds better when reading out the abbreviation. What synonym of exponent does ‘O’ stand for?
Multiplication and division are not done at the same time. “Orders”, I believe. http://www.math.harvard.edu/~knill/pedagogy/ambiguity/
Multiplication and division are always done at the same time (with left-associativity: a÷b÷c = (a÷b)÷c) in mathematics, and this follows over into programming languages that use * and / to emulate × and ÷.
2x/3y-1 is not well-defined notation. It’s not mathematics, because mathematics doesn’t use a slash in the middle of some linear text for division (it uses a horizontal line or ÷ depending on the context, although really depending on the level, because I haven’t seen anyone use ÷ since primary school), and it’s not any programming language I’m aware of either. Randomly writing down some text then claiming it’s ambiguous is pretty silly.
2 × x ÷ 3 × y - 1 is completely unambiguous, on the other hand: (((2 × x) ÷ 3) × y) - 1. Try putting it into google, or asking someone what 2 × 9 ÷ 3 × 2 - 1 is. Their answer is 11.
Mathematicians almost never use ÷ anyway, we write (2 x) / (3 y) where the line is horizontal (not possible on this platform as far as I can tell). But the same rule applies to addition and subtraction: 2 + x - 3 + y - 1 is universally agreed to be (((2 + x) - 3) + y) - 1.
Programming languages usually approximate ÷ and × with / and * for the sake of ASCII, so the same rules apply as with those operators. I’m not sure I know of any programming language where you can multiply variables by juxtaposition.
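Python, for example, follows exactly these conventions, so the claim is easy to check: * and / share one precedence level and associate left, and + and - do the same:

```python
# * and / share a precedence level and associate left, so a chain
# evaluates strictly left to right; + and - follow the same rule.
assert 2 * 9 / 3 * 2 - 1 == 11      # (((2 * 9) / 3) * 2) - 1
assert 2 + 9 - 3 + 2 - 1 == 9       # (((2 + 9) - 3) + 2) - 1
assert 10 / 5 / 2 == (10 / 5) / 2   # left-associative, not 10 / (5 / 2)
```

The same left-to-right chains hold in C, Java, JavaScript and most other mainstream languages.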
I once saw a proposal that it should be based on whitespace: 1+x * 3+y would be (1 + x) * (3 + y), while 1 + x*3 + y would be 1 + x * 3 + y. I thought it was quite a cute proposal, if perhaps prone to error.
Americans use a slash in the middle of linear text to mean division. You clearly didn’t even read the article. Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.
Americans use a slash in the middle of linear text to mean division.
Don’t think so.
You clearly didn’t even read the article.
The article has a bunch of monospace ASCII.
Just because you can do multiplication and division from left to right doesn’t mean that’s what people do.
It’s what literally everybody in the entire world does.
Odd, because I didn’t read the XKCD comic as making fun of security people for saying ‘voting machines won’t work, stay away’ at all. I read it as saying voting machines won’t work and that we should stay away from them. And to that I have to say: I totally agree. Voting works fine as it is: done by humans, counted by humans, entirely on paper with not a computer or network in sight.
Elections are really hard regardless of whether they’re done by computers, but we haven’t gotten to the point where we’ve figured out the computer side of it at all. What’s worse is that adding computers into the mix was an excuse to go back on well-tested election-related rules, such as secret voting. No, we can’t have voting over the internet or via mobile phones or anything like that.
We should really go back to limiting computer involvement in elections to UI, with the papertrail as the official record of votes. Involving computers in the actual process adds such a huge leap of complexity that it excludes most people from ever being able to verify results. Everyone can verify paper ballots.
Not really sure why you’d even want computers as UI. The ‘UI’ of a piece of paper you tick a box on really is quite good.
All I can say is that I’m glad that New Zealand has never (as least to my knowledge) involved computers in actual voting. Not even UI. I hope that the complete disaster that was our recent attempt at doing a census online[0] will help dissuade anyone from trying to do elections online as well.
[0]: Somehow they managed to simplify the census, put it online, reduce the number of questions and get fewer responses than before even though it’s still mandatory. What. And in return for significantly reducing the amount of information we get from the census, now they have a mandatory incredibly invasive survey of a randomly selected few percent of the population.
The reason for fewer responses may have little to do with technology and more to do with that notorious citizenship question.
What’s worse, is that adding computers into the mix was an excuse to go back on well-tested election related rules, such as secret voting. No, we can’t have voting over the internet or via mobile phones or anything like that.
There are designs and protocols for that. We could even have diverse suppliers on the hardware side to mitigate the oligopoly risks. The question is, “Should we?” I think traditional, in-person methods combined with optical scanning are still the best tradeoff. The remote protocols might still be useful to reduce cost or improve accuracy on some mail-in votes, though.
I absolutely agree. Voting should be as simple for voters to understand as possible. Introducing an electronic device makes it auditable only to experts and even they might have a difficult job given the many layers at which things can go wrong (including hardware vulnerabilities).
One of the reasons people are advocating electronic voting is its lower cost. Personally, I think this argument is totally wrong. Cost is a factor but not the most important one; not having elections would be cheaper.
And let’s face it, how significant is the cost of having elections really? The 2008 general election in NZ cost about $36 million. Sounds like a lot, but that’s $12 million per year: 1/1719th of the Government’s budget. Spending 0.058% of the budget to ensure we have safe and fair elections is pretty insignificant really, it’s about as much as is spent on Parliament and its services and buildings etc, and about half as much as the Police earn the Government in fines from summary infringement notices (speeding tickets etc).
100% agree. I counted votes in the last federal election of Germany and that is some serious work, but totally worth it and very hard to tamper with.
It’s literally called 0.19. Do you know what a zero at the beginning of a version number means? Christ.
EDIT: more nuanced view below. I stand by what I said here though.
The issue is nothing in the way the language is marketed points at this.
You can start at http://elm-lang.org/ and end up installing the language before you even find out what version you’re installing and there’s certainly no mention of how unstable the language is release-to-release, not from the website, nor from its proponents. It’ll also take you a while before you discover its more complex limitations (e.g. the things you used to need modules for) and by that time you may already be heavily invested.
That is fairly bad. I’d hesitate before saying that people shouldn’t make pretty websites for things that are still in development, but they should certainly point out that they’re still in development.
Still, it looks like this is less an issue of being unstable and more an issue of people relying on implementation details. Changing implementation details marked as being implementation details you shouldn’t rely on isn’t really that bad. It’s certainly not bad for the web. In the Linux kernel it’d be a sin. I guess a language falls more on the kernel side than the average web library side of being careful about stability.