because it forces you to use a phone number; the author works for Facebook and recommends against GPG; they did not want people to use the F-Droid free app store; and they are against federating with people hosting their own servers
Meta-point: it is frustrating that despite the importance of this issue (a bad password manager can make you less secure), your options for getting answers are:
Someone needs to consolidate this information and keep it up to date (I don’t think it should be me. I can probably handle point 1, but don’t know enough about the area to trust my ability to adequately synthesize the material and relay it).
Someone needs to consolidate this information and keep it up to date
I agree. I was saving threads I saw to help with that. Then my bookmarks started disappearing (overflowing?). Anyway, I’m keeping the idea in mind since I’ll probably use one or more people’s advice myself in the near future.
I’ve been happy-enough with LastPass - I can’t point to any reason beyond inertia, so really what I’m curious about in this thread: are there any significant differentiators that could sway a person to switch?
To my knowledge, at least, by staying mainstream you get a team of individuals working on the product. I’ve used LastPass for years, and while there have been issues in the past … There is a large userbase and community scrutinizing it.
Going the self-hosted route negates a lot of the large community, and the trial by fire already accrued by legacy solutions like LastPass.
They also provide an export mechanism …
I’ve stuck with LastPass for a while. AFAIK, no security issues that I’ve judged to be significant. I appreciate that, compared to the other solutions that I know of, it seems to be widely compatible and simple to use on all platforms.
Only minor beef that I have is that the browser plugins, or at least the Chrome one, seems to have gotten slower and a little bit buggier over time instead of better and faster.
I use LastPass, but am not happy with it, as in the past, it had some pretty serious security issues:
I would switch to 1Password, but it does not have linux support (edit: it has a browser extension for linux, which is suboptimal, but probably better than Lastpass). I’ve almost talked myself into switching to Keepass, but I’ll have to find out how trustworthy the iOS version is.
I think this article has a small misconception about vgo (aka go modules): it doesn’t take the minimum version. go get always downloads the latest version. Thereafter the MVS algorithm picks the maximum of all the constraints.
EDIT: also I notice that it confuses the terms minimal and minimum. The Go algorithm is minimal because Russ Cox feels that nothing else can be taken away.
“The key to minimal version selection is its preference for the minimum allowed version of a module.” –Russ Cox
The maximum of the values of the constraints is the minimum of the versions allowed by the constraints.
It is more clear to call it the minimum, since the algorithm gives lower and lower versions as constraints are removed–it can only be pushed towards higher values by adding constraints. Conversely, the cargo algorithm “wants” the maximum version, and can only be dissuaded from it by adding constraints (or lockfiles).
It does take the minimum version. Yes, the name is minimal not minimum, but one of the property of that minimal algorithm is that it takes the minimum version.
An example should be clarifying. B is available in versions 1.0 through 1.10. A declares a dependency on B >= 1.5. vgo resolves B 1.5; Cargo (and other package managers) resolves B 1.10.
Yep, I understand that. My point was that if A requires 1.5, C requires 1.2 and D requires 1.6 then the maximum of those is selected, i.e. 1.6. This has the side effect of requiring a deliberate upgrade act to get version 1.10. However the benefit is that if I run the resolution algorithm today then you run it next week when version 1.11 is released, we both get exactly the same set of dependencies and can reproduce one another’s builds.
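The resolution described above can be sketched in a few lines. This is just a toy illustration of the core idea (it is not Go’s or Cargo’s actual code, and version tuples here stand in for real semver handling):

```python
# Toy sketch of minimal version selection (MVS): each dependent declares
# a minimum required version of B, and the resolver picks the maximum of
# those minimums -- i.e. the lowest version that satisfies everyone.

def mvs_resolve(minimum_requirements):
    """The smallest version allowed by every '>=' constraint is the
    maximum of the declared minimums."""
    return max(minimum_requirements)

def latest_resolve(minimum_requirements, available):
    """What Cargo-style resolvers do instead: pick the newest available
    version that still satisfies every '>=' constraint."""
    floor = max(minimum_requirements)
    return max(v for v in available if v >= floor)

# A requires B >= 1.5, C requires B >= 1.2, D requires B >= 1.6
requirements = [(1, 5), (1, 2), (1, 6)]
available = [(1, minor) for minor in range(0, 11)]  # B 1.0 .. 1.10

print(mvs_resolve(requirements))                # (1, 6)
print(latest_resolve(requirements, available))  # (1, 10)
```

Because `mvs_resolve` depends only on the declared constraints, running it today and next week (after B 1.11 ships) gives the same answer; `latest_resolve` changes as `available` grows, which is why Cargo needs a lockfile for reproducibility.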
Yes, I think we are all in agreement about what happens. The question is whether it is good. The drawback of vgo argued in the article is that B will inevitably get bug reports for 1.6 already fixed in 1.10. Another is that real world testing of B is spread along all versions from 1.0 to 1.10, while in Cargo most testing is against 1.10 while 1.10 is the latest.
Cargo (and other package managers) solve reproducibility with a lockfile. A lockfile is admittedly not “minimal”, but apart from minimality it solves the technical problem equally well.
I usually link to Betteridge’s Law when I write a post like this, but didn’t this time.
Apparently a significant portion of people found the title to be clickbait-y, but I thought it was a pretty straightforward question. Oh well!
This knee-jerk reaction against “clickbait” kind of annoys me. Imo there is nothing wrong with an article having a title that attempts to engage a reader and pique their interest. I would also much rather a title pose a question and answer it in the article, rather than containing the answer in the title itself. (The latter can lead to people just reading the title and missing any nuance the article conveys).
I agree. Clickbait really implies that the article has no meaningful content. If the article is actually worth reading, it’s not clickbait, it’s catchy.
“WebAssembly is not the return of Java Applets and Flash.”
Edit: I did enjoy the article, however.
Edit2: As a site comment:
I had no idea what the “kudos” widget was, moved my mouse to it, saw some animation happening, and realized I just “upvoted” a random article, with no way to undo it. Wonderful design. >.<
It might work up to a certain point for buyers who otherwise would buy proprietary software. Their EULAs are already ridiculous. I’ll note that the military has been known to sometimes just steal stuff if they need it. Here are Army and Navy examples. In theory, they can make it classified, too, to try to block you from proving it in court. At that point, you’re trying to beat them with DRM plus online license checks to reduce the odds of that. That annoys regular customers, though.
This seems most doable with a SaaS solution.
buyers who otherwise would buy proprietary software.
exactly, the freedom to study and share is a pre-sales experience.
A case where the government settled for $50 million is a bit ambiguous–they suffered a consequence for that theft. If this license led the military to make regular payouts for violating licenses, I would count that as a partial success.
That was a case where they got caught. Most acts of piracy don’t get caught. More likely in organizations where it’s illegal to even discuss what they’re doing.
This idea comes up from time to time. It’s an old idea. Here are two rms articles that address it.
https://www.gnu.org/licenses/hessla.html
https://www.gnu.org/philosophy/programs-must-not-limit-freedom-to-run.html
Basically: if you’re evil enough to do evil stuff, violating copyright is something you won’t think is very evil at all. So even without the argument about how impossible it is to define evil, a copyright-based license isn’t going to stop anyone from doing evil.
I don’t think this holds up in countries with established rule of law. It’s easy to forget that you can sue the government in court in the US and then the government will stop (at least most of the time). It’s just that the Overton window has shifted so much that we only think of “evil” in terms of things that don’t happen in this day and age, when terrible things are happening all the time and continue to be enabled by technology.
If there’s anything that unites most people, it’s the fear of having all their assets frozen. And the spectrum of evil stops way before “evil mastermind with 1000 offshore accounts and 20 fake identities”.
I agree. Julian Sanchez made this point about the NSA/CIA recently: the bulk of their abuses of power inside the USA are either legal or, if not obviously legal, fall into a legal grey area where the right person in the chain of command said that they were legal.
Career government officials tend to have a habit of following most rules of the organizations they inhabit, but may do a lot of shady things that don’t obviously violate those rules.
I think I should have read my own links. rms also argues that such restrictions on use based on copyright are likely unenforceable. I don’t recall ever reading about a case where someone violated a license’s conditions on usage (e.g. using Java in a nuclear reactor) and was thus found to be violating copyright. Has that happened?
Also: trying to sue the US government for copyright infringement because they used some software to facilitate torture (for example) doesn’t seem to me like a fruitful approach. Maybe with some optimism something could be done about human rights abuses in the US, but going the copyright infringement path doesn’t seem likely to work.
Glad to hear it’s been addressed already by gnu. I was thinking along similar lines, like “Eh, I see the problem, but I don’t think giving up freedom zero is the answer.” Of course, I don’t have a good solution either, other than a better more democratic government with well-informed citizens and a functioning justice system.
Cool (and encouraging) analysis. Does this account for putting extensions in the default-extensions in a .cabal file?
It doesn’t. I’d expect at least the relative frequency to be similar, but this could very well increase how frequently extensions pop up. But I think that it’s probably much more common to enable extensions on a file-by-file basis than a project basis.
It’d be cool to see what effect adding those back has, but it’s probably not doable with the GitHub API.
I wouldn’t be surprised if adding it in pushed the most popular ones higher, because why worry about which files need OverloadedStrings? But that’s just a random guess, and it may be that not enough people use the feature for it to matter.
I should probably spend my free time this week on catching up on the prolog course.
However, this weekend, I hacked together a tool to use code coverage/property based testing tools to show you input-output pairs that take different paths through code (https://github.com/hyperpape/QuickTheories). I’d like to implement shrinking, and I’m also poking around at symbolic execution, as it seems like that’s the right way to implement a robust version of the tool. I’d also like to create an IDE plugin that lets me trigger this for methods in my code, and see if it’s as helpful as I imagine it being.
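The core idea can be sketched in a few lines (the linked tool is Java; this is just a toy Python illustration of the approach, with an invented example function, not the actual code): generate inputs, record which branch path each one takes, and keep one representative input/output pair per distinct path.

```python
import random

def classify(n, _trace=None):
    """Invented example function under test; appends a label to _trace
    for each branch it takes, so a path is a tuple of labels."""
    trace = _trace if _trace is not None else []
    if n < 0:
        trace.append("negative")
        return -n
    if n % 2 == 0:
        trace.append("even")
        return n // 2
    trace.append("odd")
    return 3 * n + 1

def representatives(fn, inputs):
    """Map each distinct branch path to one (input, output) pair."""
    seen = {}
    for x in inputs:
        trace = []
        out = fn(x, _trace=trace)
        path = tuple(trace)
        if path not in seen:
            seen[path] = (x, out)
    return seen

random.seed(0)
pairs = representatives(classify, [random.randint(-10, 10) for _ in range(100)])
for path, (x, out) in sorted(pairs.items()):
    print(path, x, out)
```

A real version would use a coverage instrumenter rather than hand-inserted trace calls, and would shrink each representative input, but the output is the same shape: one concrete input/output example per path through the code.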
Right now at work: I don’t know what I’ll be working on before the day starts…
I’ve heard the “binary logs are evil!!!” mantra chanted against systemd so many times that it isn’t funny anymore. It’s a terrible argument. With so many big players putting their logs into databases, and the popularity of the ELK stack, it is pretty clear that storing logs in a non-plaintext format works. Way back in 2015, I wrote two blog posts about the topic.
The gist of it is that binary logs can be awesome, if put to good use. That journald is not the best tool is another matter; journald being buggy doesn’t mean binary logs are bad. It just means that journald is possibly not the most carefully engineered thing out there. There are many things to criticize about systemd and the journal, and they both have their fair share of issues, but binary storage of logs is not one of them.
Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?
The journald/systemd people don’t act like they have any clue what’s going on in the real world: people can’t use the tools they used to, and these tools evidently suck; plain text sucked less, so what’s the plan to get anything better?
I don’t think that’s entirely reasonable. It’s converting a complaint about principle (“don’t do binary logs”) into a complaint about practice, and that makes a big difference. If journald is a bad implementation of an ok idea, that requires very different steps to fix than if it’s a fundamentally bad idea.
What you’re describing makes sense for people on the systemd project to say (“woah, people hate our binary logs, maybe we should work on them”[0]), but not for the rest of us trying to understand things.
[0] I fear they’re not saying that, as they seem somewhat impervious to feedback
I feel like @geocar is against binary logs as a source format, but not as an intermediate or analytics format. Even if your application uses structured logging, it can still be stored in a text file, for example as JSON, at the source. It can be converted to a binary log later in the chain, for example on a centralized logging server, using ELK, SQL, MongoDB, Splunk or whatever. The benefit is that you keep a lot of flexibility at the source (in terms of supporting multiple formats depending on the source application) and are still able to go back to the plain text log if you encounter a problem.
I’m not even against binary logs “as a source format.”
Firstly: I recognise that “complaints about binary logs” is directed at journald and isn’t the same thing about complaints about logs in some non-text format.
I think pushing systemd in deep forced sysadmins to retool on top of journald, and that hurt a lot for so very little gain (if there was any gain at all; for most workflows I suspect there wasn’t). This has almost certainly put people off of binary logs, and has almost certainly got people complaining about binary logs.
To that end: I don’t think those feelings around binary logs are misplaced.
Some humility is [going to be] required when trying to win people over with binary logs, but appropriating the term “binary logs” to include tools the sysadmin chooses is like pulling the rug out from under somebody, and that’s not helping.
Thank you very much for clarifying. I agree that forcing sysadmin “to retool on top of journald” hurts.
No, it’s recognising that when enough people are complaining about “the wrong thing”, telling them it’s the wrong thing doesn’t help them. It just causes them to dig in.
What’s the right thing?
I think that’s the point of the bug…
Okay, so can we just assume all complaints about “binary logs” are just about these binary logs and get on with things?
As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.
and these tools evidently suck
For a lot of use cases, they do not suck. For many, they are a vast improvement over text logs.
what’s the plan to get anything better?
Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.
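A minimal sketch of that advice, using Python’s stdlib `logging` (the logger name and fields are invented for illustration): write structured JSON records straight to a file, bypassing syslog and stdout entirely.

```python
import json
import logging
import tempfile

class JsonFormatter(logging.Formatter):
    """Emit each record as a single JSON object per line, so the file
    is structured from the start rather than free-form text."""
    def format(self, record):
        return json.dumps({
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "msg": record.getMessage(),
        })

path = tempfile.mktemp(suffix=".log")
handler = logging.FileHandler(path)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("myapp")  # hypothetical application logger
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("job %s finished", "backup")
handler.close()

with open(path) as f:
    print(f.read().strip())
```

The same handler could point at a central collector or a database writer instead of a local file; the point is that the application emits structured records directly, rather than unstructured lines for someone else to re-parse.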
As soon as the complaints start to be about journald and not “binary logs”, and the distinction is made explicit, yeah, we can. It’s been four years, so I’m not going to hold my breath.
People complain about things that hurt, and between Windows and journald it should not be a surprise that “binary logs” is getting the flak. journald has a lot of outreach work to do if they want to fix it.
For a lot of use cases, [the tools] do not suck. For many, they are a vast improvement over text logs.
And yet when programmers make mistakes implementing them, the sysadmin are left cleaning up after them.
Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.
Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge. This allows people to get a lot of the advantages of binary logs with few disadvantages (and given how cheap disk is, the price is basically zero).
Stop logging unstructured text to syslog or stdout, and either log to files or to a database directly. Pretty much what you’ve been (or should have been) doing the past few decades, because both syslog and stdout are terrible interfaces for logs.
These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.
Are we really to interpret this as refuse to install any software that doesn’t follow this rule?
I’m willing to whack some perl together to get the text log data queryable for my business, but you give me a binary turd I need tools and documentation and advice.
Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.
What do you mean by a “transparent structuring layer”?
Something to structure the plain text logs into some tagged format (like JSON or protocol buffers).
Splunk e.g. lets users create a bunch of regular expressions to create these tags.
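A toy sketch of such a structuring layer (the regex and field names are invented for illustration, in the spirit of Splunk-style field extraction): the plain-text line stays the source of truth, and the tagged/structured form is derived from it.

```python
import json
import re

# A syslog-ish line pattern; field names (ts, host, prog, pid, msg)
# are made up for this example.
LINE_RE = re.compile(
    r"(?P<ts>\S+ \d+ [\d:]+) (?P<host>\S+) "
    r"(?P<prog>[\w./-]+)\[(?P<pid>\d+)\]: (?P<msg>.*)"
)

def structure(line):
    """Derive a tagged record from a plain-text line; fall back to
    keeping the raw line if it doesn't match."""
    m = LINE_RE.match(line)
    return m.groupdict() if m else {"msg": line}

line = "Jun 12 10:15:01 web01 sshd[4242]: Accepted publickey for deploy"
print(json.dumps(structure(line)))
```

Because the structured form is derived, you can always fall back to grepping the original text file when the extraction rules turn out to be wrong, which is the flexibility being described above.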
Text logs have the massive material advantage that the sysadmin can do something with them. Binary logs need tools to do things, and the journald implementation has a lot of work to do.
For some values of “can do”, yes. Most traditional text logs are terrible to work with (see my linked blog posts, not going to repeat them here, again). Besides, as long as your journal files aren’t corrupt (which happens less and less often these days, I’m told), you can just use journalctl to dump the entire thing, and grep in the logs, just like you grep in text files. Or filter them first, or dump in JSON and use jq, and so on. Plenty of options there.
Most of the “big players” use a transparent structuring layer rather than making binary logs their golden source of knowledge.
Clearly our experience differs. Most syslog-ng PE customers (and customers of related products) made binary logs (either PE’s LogStore, or an SQL database) their golden source of knowledge. A lot of startups - and bigger businesses - outsourced their logging to services like loggly, which are a black box like binary logs.
These are directions to developers, not to sysadmins. Sysadmins are the ones complaining.
These are directions to sysadmins too. The majority of daemons support logging to files, or use a logging framework where you can set them up to log directly to a central collector, or to a database directly. For a huge list of applications, bypassing syslog has been there since day one. Apache, Nginx, pretty much any Java application can all do this, just to name a few things. There are some notable exceptions such as postfix which will always use syslog, but there are ways around that too.
You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.
I’m willing to whack some perl together to get the text log data queryable for my business, but you give me a binary turd I need tools and documentation and advice.
With the journal, you have journalctl, which is quite well documented.
Clearly our experience differs. Most syslog-ng PE customers…
Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?
outsourced their logging to services like loggly, which are a black box like binary logs.
I would be surprised to find that most people that use loggly don’t keep any local syslog files.
What exactly are you arguing here?
Plenty of options there.
And?
You can bypass the journal with most applications, some support that easily, some require a bit more work, but it has been doable by sysadmins all these years. I know, because I’ve done it without modifying any code.
Right, and the goal is to get people using journald right?
If journald doesn’t want to be used, what’s its reason for existing?
Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?
Yes.
I would be surprised to find that most people that use loggly don’t keep any local syslog files.
Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate. In the past… six or so years, all loggly (& similar) users I worked with never looked at their text logs, if they had any to begin with.
Right, and the goal is to get people using journald right?
For systemd developers, perhaps. I’m not one of them. I don’t mind the journal, because it’s been working fine for my needs. The goal is to show that you can bypass it, if you don’t trust it. That you can get to a state where your logs are processed and stored efficiently, in a way that is easy to work with - easier than plain text files. Without using the journal. But with it, it may be slightly easier to get there, because you can skip the whole getting around it dance for those applications that insist on using syslog or stdout for logging.
Do you believe that syslog-ng has even significant market share of users responsible for logging? Even excluding SMB/VSMB?
Yes.
I think you’re completely wrong.
There are a lot of Debian/RHEL/Ubuntu/*BSD (let alone Windows) machines out there, and they’re definitely not using syslog-ng by default…
Debian publishes install information: syslog-ng versus rsyslogd. It’s no contest.
A big bank I’m working with has zero: all rsyslogd or Windows.
Also, the world is moving to journald…
So, why exactly do you believe this?
In the past… six or so years, all loggly (& similar) users I worked with, never looked at their text logs, if they had any to begin with.
Most I’ve seen only keep local logs because they’re too lazy to clean them up, and just leave them to the default logrotate.
Okay, but why do you think this contradicts what I say?
You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.
The goal is to show that you can bypass it, if you don’t trust it.
Ah well, this is a very different topic than what I’m replying to.
I can obviously bypass it by not using it.
I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.
I think you’re completely wrong.
I think I know better how many syslog-ng PE customers there are out there (FTR, I work at BalaBit, who make syslog-ng). It has a significant market share. Significant enough to be profitable (and growing), in an already crowded market.
A big bank I’m working with has zero: all rsyslogd or Windows.
…and we have big banks who run syslog-ng PE exclusively, and plenty of other customers, big and small.
Also, the world is moving to journald…
…and syslog-ng plays nicely with it, as does rsyslog. They nicely extend each other.
You’re talking about people who have built a custom (text based!) logging system, streaming via the syslog protocol. The golden source was text files.
I think we’re misunderstanding each other… What I consider the golden source may be very different from what you consider. For me, the golden source is what people use when they work with the logs. It may or may not be the original source of it.
I don’t care much about the original source (unless it is also what people query), because that’s just a technical detail. I don’t care much how logs get from one point to another (though I prefer protocols that can represent structured data better than the syslog protocol). I care about how logs are stored, and how they are queried. Everything else is there to serve this end goal.
Thus, if an application writes its logs to a text file, which I then process and ship to a data warehouse, I consider that to be binary logs, because that’s how they will ultimately end up. Since this warehouse is the interface, the original source can be safely discarded once it has shipped. As such, I can’t consider it the golden source.
If we restricted “binary logs” to stuff that originated as binary from the application, then we should not consider the Journal to use binary logs either, because most of its sources (stdout and syslog) are text-based. If the Journal uses binary logs, then anything that stores logs as binary data should be treated the same. Therefore, everything that ends up in a database, ultimately makes use of binary logs. Even if their original form, or the transports they arrived there, were text.
(Transport and storage are two very different things, by the way.)
I was simply trying to explain why people who complain about binary logging aren’t ignorant/crackpots, and are complaining about something important to them.
I never said they are. All I said is that storing logs in binary is not inherently evil, linked to blog posts where I explain pretty much the same thing, and give examples for how binary storage of logs can improve one’s life. (Ok, I also asserted that syslog and stdout are terrible interfaces for logs, and I maintain that. This has nothing to do with text vs binary though - it is about free-form text being awful to work with; see the linked blog posts for a few examples why.)
I think I know better how many syslog-ng PE customers there are out there
Or we just have different definitions of significant.
Significant enough to be profitable (and growing), in an already crowded market.
Look, I have an advertising business that makes enough money to be profitable, and is growing, but I’m not going to say I have a “significant” market share of the digital advertising business.
But whatever.
All I said is that storing logs in binary is not inherently evil
And I didn’t disagree with that.
If you try and re-read my comments knowing that, maybe it’ll be more clear what I’m actually pointing to.
At this point, we’re just talking past each other, and there’s no point in that.
I think this could use more motivation. I have never written a TODO app in js, but if the library illustrates something valuable, that might not matter.
One note about code: render() has repeated calls to Object.keys(obj).
Maybe “TODO” in the title is a little unfortunate. It can be used for many things: rendering any kind of DOM element (lists, menus, even single HTML elements). It’s like a very small version of React or Vue.js with a completely different architecture and logic.
Yeah, TODO has an association with toy apps, so I’d drop that word. Looking back at the readme, you might not need any other changes.
My experience is that this is just utterly wrong - I’m not even sure how to start to respond. Of course the best way to express a program fragment is a programming language. Of course the best way to think about a program is with a programming language. There is no distinction between programming and mathematics - of course you want to think mathematically about what you’re constructing, but the best languages for that are programming languages. Why would you want two subtly different descriptions of your program that need to be kept in sync when you could have one description of your program? Some programming languages distract from writing a good expression of your construction by making you specify irrelevant execution details, but the appropriate response is to avoid those languages. A good mathematical description of your algorithm is a program that implements your algorithm, given a decent compiler - and thankfully we’re good enough at those these days.
Lobster’s own Hillel expressed it really well just a few days ago:
So many software development practices - TDD, type-first, design-by-contract, etc - are all reflections of the same core idea:
- Plan ahead.
- Sanity-check your plan
It’s reasonable to want that “plan ahead” stuff to be incorporated in the program (design by contract, tdd), but using an external plan can have a large chunk of the benefit.
Maybe, but that’s not an argument for using a non-integrated, harder-to-check plan if you have the option of building the “plan” right into the program.
Because there’s design tradeoffs in specification. Integration is a pretty big benefit but also a pretty big cost, often reducing your expressiveness (what you can say) and your legibility (what properties you can query). As a couple of examples, you can’t use integrable specifications to assert a global property spanning two independent programs. You also can’t distinguish between what are possible states of the system and what are valid states, or what behavioral properties must be satisfied.
Strong disagree here; you get a lot more expressive power when using a specification language. Let me pose a challenge: given a MapReduce algorithm with N workers and 1 reducer, how do you specify the property “if at least one worker doesn’t crash or stall out, eventually the reducer obtains the correct answer”? In TLA+ it’d look something like this:
(\E w \in Workers: WF_vars(Work(w))) /\ WF_vars(Reducer)
=> <>[](reducer.result = ActualResult)
Why would you want two subtly different descriptions of your program that need to be kept in sync when you could have one description of your program?
I’ve written 100 line TLA+ specs that captured the behavior of 2000+ lines of Ruby. Keeping them in sync is not that hard.
I’ve written 100 line TLA+ specs that captured the behavior of 2000+ lines of Ruby. Keeping them in sync is not that hard.
Keeping code in sync with comments that literally live along side them is even more “not that hard”, and yet fails to happen on an incredibly regular basis.
In my experience, in any given system where two programmer artifacts have to be kept in sync manually, they will inevitably fall out of sync, and the resulting conflicting information, and confusion or mistaken assumption of which one is correct, will result in bugs and other programmer errors impacting users. The solution is usually to either generate one artifact from the other, or try to restructure one artifact such that it obviates the need for the other.
Keeping code in sync with comments that literally live along side them is even more “not that hard”, and yet fails to happen on an incredibly regular basis.
The difference is that if your code falls out of sync with your comments, your comments are wrong. But if your code falls out of sync with your formal spec, your code probably has a subtle bug. So there’s a lot more institutional pressure to update your spec when you update the code, just to make sure it still satisfies all of your properties.
The solution is usually to either generate one artifact from the other, or try to restructure one artifact such that it obviates the need for the other.
This has been a cultural problem with formal methods for a long time: people don’t value specifications that aren’t directly integrated into code. This has held the field back, because actually getting direct integration is really damn hard. It’s only in the past 15ish years that we’ve accepted that it’s alright to write specs that can’t generate code, and that’s why Alloy and TLA+ are becoming more popular now.
The difference is that if your code falls out of sync with your comments, your comments are wrong. But if your code falls out of sync with your formal spec, your code probably has a subtle bug
What justifies that assumption? Some junior will inevitably, in response to some executive running in with their hair on fire over some “emergency”, alter the behaviour of the code to “get it done quick” and defer updating the spec until a “later” that may or may not ever arrive. Coming along and then altering the code to meet the spec then re-introduces the emergency situation.
The fundamental problem here is that you’ve created two sources of truth about what the application should be doing, and you cannot a priori conclude that one or the other is always the correct one.
And what happens when that “emergency” fix loses your client data, or breaks your data structure, or ruins your consistency model, or violates your customer requirements, or melts your xbox, or drops completed jobs?
Yes, it’s true that sometimes the spec needs to be changed to match changing circumstances. It’s also seen again and again that specs catch serious bugs and that diverging from them can be seriously dangerous.
And what happens when that “emergency” fix loses your client data, or breaks your data structure, or ruins your consistency model, or violates your customer requirements, or melts your xbox, or drops completed jobs?
Nobody’s arguing that the spec is useless, just that the reality is that it does introduce risks that require care and attention and which cannot be handwaved away with “keeping them in sync is not that hard” because sync issues will bite organizations in the ass.
It’s only in the past 15ish years that we’ve accepted that it’s alright to write specs that can’t generate code, and that’s why Alloy and TLA+ are becoming more popular now.
It would be very helpful if you could at least generate test cases from those specs though. But then that’s why I work on a model based testing tool ;)
Proprietary of my employer (Axini). Based on symbolic transition systems, a Promela and LOTOS inspired modeling language and the ioco conformance relation. Related open source tools are TorX/JTorX/TorXakis. Our long term goal is model checking, but we believe model based testing is a good (necessary?) intermediate step to convince the industry of the added value by providing a way where formal modeling can directly help them test their software more thoroughly.
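The core loop of model-based testing can be sketched in miniature (a toy Python example of the general idea, not Axini’s actual tooling): a model of the system supplies expected outputs for generated input sequences, and the implementation conforms if its observable behaviour matches on all of them.

```python
import itertools

# Hypothetical model of a turnstile: states "locked"/"unlocked",
# inputs "coin"/"push". Returns (next_state, expected_output).
def model_step(state, inp):
    if state == "locked":
        return ("unlocked", "unlock") if inp == "coin" else ("locked", "buzz")
    return ("locked", "lock") if inp == "push" else ("unlocked", "thanks")

class Turnstile:
    """Implementation under test."""
    def __init__(self):
        self.locked = True
    def handle(self, inp):
        if self.locked and inp == "coin":
            self.locked = False
            return "unlock"
        if not self.locked and inp == "push":
            self.locked = True
            return "lock"
        return "buzz" if self.locked else "thanks"

# Generate every input sequence up to length 4 and check conformance:
# the implementation's output must match the model's at each step.
for n in range(1, 5):
    for seq in itertools.product(["coin", "push"], repeat=n):
        state, impl = "locked", Turnstile()
        for inp in seq:
            state, expected = model_step(state, inp)
            assert impl.handle(inp) == expected, (seq, inp)
```

Real tools generate sequences far more cleverly (and from much richer modeling languages), but the conformance-checking shape is the same.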
Really neat stuff. Thanks. I’ll try to keep Axini in mind if people ask about companies to check out.
This was a problem in high-assurance systems. All the write-ups indicated it takes discipline, which is no surprise given that’s what building good systems takes regardless of method. Many used tools like FrameMaker to keep it all together. That said, about every case study I can remember found errors via the formal specs. Whether it was easy or not, they all thought the specs were beneficial for software quality. It was formal proof that varied considerably in cost and utility.
In Cleanroom, they use semi-formal specs meant for human eyes that are embedded right into the code as comments. There was tooling from commercial suppliers to make its process easier. Eiffel’s Design-by-Contract kept theirs in the code as well with EiffelStudio layering benefits on top of that like test generation. Same with SPARK. The coder that doesn’t change specs with code or vice versa at that point is likely just being lazy.
Why would you want two subtly different descriptions of your program that need to be kept in sync when you could have one description of your program?
For example to increase the number and variety of reviewers and thus reducing bugs.
A good mathematical description of your algorithm is a program that implements your algorithm
Are you thinking of a specific compiler? I agree that programmers think mathematically even when they use programming languages to express their reasoning, but I still feel some “impedance” in every language I use.
For example to increase the number and variety of reviewers and thus reducing bugs.
That seems very unlikely though. Even obscure programming languages are better-known than TLA+. More generally I can’t imagine getting valuable input on this kind of subject from anyone who wasn’t capable of understanding a programming language. I find the likes of Cucumber to be worse than useless, and the theoretical rationale for those is stronger since test cases seem further away from the essence of the program than program analysis is.
Are you thinking of a specific compiler? I agree that programmers think mathematically even when they use programming languages to express their reasoning, but I still feel some “impedance” in every language I use.
I mostly work in Scala so I guess that influences my thoughts. There are certainly improvements to the language that I can imagine, but not enough to be worth using something other than the official language when communicating with other people.
Somewhere out there, there is an exchange where Matthias Felleisen says something along the lines of dynamic languages allow a programmer to fruitfully apply two mutually incompatible, or at least hard to reconcile, typing disciplines to a program at once. I thought it was on Bob Harper’s blog, but I can’t find the source.
I find the biggest cases are limitations of specific static type systems rather than inherent to the concept. In particular, constructs like { foo: "blah", bar: 1} often have a perfectly good static type, it’s just difficult to express it given the limited state of record systems. Once I switched to Scala with lightweight case classes, a lot of the things I thought I needed dynamicity for in Python disappeared. Most of the remaining cases are covered by shapeless records (e.g. “this case class plus a UUID called "id"”), but the syntax is much more clunky than I’d like.
When it comes to partiality, this is adequately solved by casts, which almost every static language offers - though again perhaps not with a syntax that’s as light as it should be. In practice most static language users tend to find themselves avoiding casts, I think rightly, but making them more available might help with onboarding dynamic language users if nothing else.
I’ve never actually used Elm, but I believe their records are designed to deal with this phenomenon: http://elm-lang.org/docs/records.
I think I first read about Icon via Laurence Tratt’s Converge, which borrowed the idea of goal directed execution: http://tratt.net/laurie/research/pubs/html/tratt__experiences_with_an_icon_like_expression_evaluation_system/.
Correctly or incorrectly, Tratt concluded that backtracking in Icon was more difficult to use than one might hope, and reduced its scope in his language.
It’s a very clearly written article, I did not know about the goal directed execution, and the description is very approachable. It’s an interesting variant on handling control flow.
You’re not a full-stack developer until you’ve designed your own instruction set architecture and written a compiler for it.
This is why I hate the meme: it turns a web developer into a “full-stack” developer, when the former made sense but the latter conflicts with the previous meaning of a software stack: the whole setup. They watered the term down, much like “crypto” now means cryptocurrencies instead of cryptography.
True full-stack developers include Chuck Moore, Niklaus Wirth, and the folks that did NAND2Tetris.
I think taking “full-stack” to mean “well-versed in all relevant technologies the engineering team will be customizing” is a lot less prone to strawman attacks.
There are plenty of ways to convey that idea without the whole “denying the existence of over half the levels of abstraction you build upon” part.
With all due respect to Wirth and Moore, who have both done amazing work, have either of them spent time in the last decade building with modern web technologies? Otherwise we can’t call them full stack developers ;)
Damnit, you got me! Lmao. Ok, pre-Web, full-stack developers. (pauses) That should be OK.
Now, we need some full-stack developers with Web. I’d start with people who have done hardware and at least RTOS projects. Then, look to see if they’ve done the web stuff.
By coincidence, I just today saw that CentOS does almost exactly what I was going to propose here: use a version that mixes a standard version number with a date:
7.1.1503 indicates major version 7, minor version 1, released in 2015-03.
https://en.wikipedia.org/wiki/CentOS#Versioning_and_releases
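A minimal sketch of parsing that scheme, assuming the format described above (the function name is made up; this also assumes all dates fall in the 2000s, which holds for CentOS releases using this scheme):

```python
def parse_centos_version(v):
    """Split "major.minor.YYMM" into (major, minor, year, month)."""
    major, minor, stamp = v.split(".")
    year, month = 2000 + int(stamp[:2]), int(stamp[2:])
    return int(major), int(minor), year, month

print(parse_centos_version("7.1.1503"))  # (7, 1, 2015, 3)
```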
Just a small note: the script could be written more concisely (and maybe more understandably) by doing:
/baz
t
s/baz/elephants
wq
Writing “.t.” is like running vi ./file instead of vi file in a shell: the explicit addresses are just the defaults spelled out. And ed allows you to write and quit in the same command, just like :wq does in vi.
While I would have personally chosen emacs for this task (using dired + keyboard macros would be quite straightforward), I do agree that ed(1) is a quite helpful and underestimated tool, especially when you embed it into a shell script with a here-doc. And despite appearances, it really isn’t that complicated, especially when you have a good man page (e.g. OpenBSD’s) or have GNU Info + the ed manual installed, in case one needs to do something more esoteric.
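The here-doc embedding looks something like this (a self-contained sketch; the file name and contents are made up for the demo):

```shell
# Create a throwaway file, then drive ed(1) non-interactively via a
# here-doc: search for "baz", substitute, write and quit in one command.
printf 'one baz here\n' > demo.txt
ed -s demo.txt <<'EOF'
/baz
s/baz/elephants/
wq
EOF
cat demo.txt   # prints "one elephants here"
```

The `-s` flag suppresses ed’s byte-count diagnostics, which is what you want in a script.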
And despite appearances, it really isn’t that complicated
UNIX V7 actually shipped interactive tutorials to learn ed(1) as part of learn. It’s unfortunate that there’s no convenient way to actually make use of those. You’d actually have to set up a PDP-11 emulator with V7 (though prebuilt images exist) and work with that, an environment where backspace doesn’t really work out of the box.
I’m a pretty mediocre emacs user, and poking around at the manual, I wasn’t quite sure how to use dired to apply a macro to multiple files. I guess if you had a dired buffer with just the files you wanted, you could write the macro to open the file, do the operation, return to the dired buffer, then go on. Is that the idea?
While I’m no expert, that’s what I was thinking of. And despite first appearances, I don’t even think there’s anything too wrong with it. I guess if you really wanted to be “safe” you could write a script that processes all buffers on a stack by applying a function or a macro within them, but I don’t see the practical advantage. Whenever I did “start a macro in dired, open a file, edit it, close, move to next line (manually or via C-s)”, I didn’t have any problems with the method.
I’m sad after reading these comments.
I understand and respect his decision, and these comments themselves are the very evidence of why he is right. How about having open source simply be about openness and source? Why do politics and ideologies have to always appear?
Maybe a new manifesto is needed, much like the Agile manifesto:
Why do politics and ideologies have to always appear?
Ideologies are always there. You only notice them when they’re different from your own.
Perhaps the point is that some people would like a safe space for focusing on technical matters rather than every single open source and free software community getting politically co-opted into a culture war.
Wanting a community focused on technical work and otherwise treating people equitably isn’t “apolitical”, you’re right, but that doesn’t make it invalid.
I choose to focus on helping people who came from a similarly disadvantaged background as myself but that’s something I do on my own time and money. I don’t impose it on the software communities I participate in.
I think we need the diversity of participants in free software represented in the communities and organizations. Were that the case, I think I would see more diversity in organizational structures, conduct standards, explicit goals, etc. What I perceive is a corporate-funded monoculture that is getting a bit creepy in the demands placed on others that don’t want to participate.
I’m also starting to notice a social intelligence / neurotypical punching-down in these threads where someone who is less adept at adopting the politically convenient argot of the day gets excoriated for trying to express their discomfort in their own words. It makes me deeply uncomfortable how the members of this community conduct themselves in these threads.
Some of my favorite communities are very engaged with the issues of access in ways that are in keeping with the zeitgeist (e.g. Rust) and do great work in part because of that. Some of my other favorite communities have a different emphasis or approach. I’d like them to co-exist peaceably and for people to do what they are most passionate about, whatever form that takes.
You may be right. But what I wanted to express is: I have my ideologies, just like anybody else does, but I believe that open source should only have one ideology, which is about software, collaboration, and not people, or other ideologies. For my taste even the GNU project is too political in many aspects, but on the other hand they have some great pieces of software and those are often governed and built in a great atmosphere. (I can recall a single notable “scandal” that reached me, but the community was generally welcoming, as it is for most software projects.)
Edit: Or to rephrase it further: ideology is a system of thought covering most aspects of (human) life. I believe everyone has a personal world view that is closer to some pre-canned ideology than to others. Yet software projects should have ideologies of the software lifecycle, not of the human lifecycle, and those can be very well separated, just as my personal life and my life at work can be separated.
The etiquette of the global human civilization should be enough to cover the human-human interaction part of the collaboration, as it is for professional interaction in my experience with colleagues from all over the world. We share our vision about software, quality, and work together, while we may disagree on plenty of things, which have no place in the discussion about a software project.
Ideologies are always there. You only notice them when they’re different from your own.
This is a really interesting claim that I’m seeing more and more! I’d love to find some sources that explain the justification for it.
I’m genuinely sorry about that. :(
Unfortunately, some topics always bring out discussion that highlights the leaky abstraction of other lobsters as purely technical beings.
It’s the strongest argument against certain forms of content here.
One of the goals of open source movements is bringing in new people. I don’t think that’s a particularly contentious goal.
Outreachy is one organization that embodies particular ideas about how best to do that. It’s true those ideas are politically charged, but they’re in service of a goal that is agreed upon. So you can’t effectively pursue the goal of getting new people into open source without taking some kind of stance on the political questions.
Some political questions (what is the optimal US tax policy) are more or less irrelevant to open source. But others are so pervasive that they can’t be ignored, except by creating a tacit consensus. Even the idea that we should respect each other creates conflicts where people have sufficiently different ideas about what respect means.
These goals promote the production of “high quality programs” as well as “working cooperatively with other similarly minded people” to improve open-source technology.
source: https://en.wikipedia.org/wiki/Open-source_software_movement
Bringing a specific political agenda to an open source project violates the “similarly minded people” part, or can have the effect of pushing away differently minded people. This is not what respect means in my opinion. I have worked a lot with differently minded people, and we got along, because we were focusing on the goals. The goals were creating software, not changing society or a community. This moving of the goalposts is what is bad for open source in my opinion.
“Apolitical” open source has turned out to be overwhelmingly white and male - significantly more than even the broader software industry. Reference.
I don’t think there’s any evidence that this demographic skew is deliberate. However, once a group is dominated by a certain demographic, it’s easy for people to get the message that this is “not for them”, even if no one says this (and especially if some “bad apples” do).
I believe that there’s nothing about being white and male that makes the best possible open source software developers, so this demographic skew is a bug not a feature. I believe that the best possible open source community is the one with the biggest group of committed (to creating open source) people involved.
With this in mind, what can be done to encourage more diversity and bring more people in? There’s no evidence that the status quo (“focus on tech”, etc) will change by itself.
pushing away differently minded people
The only people the LLVM CoC will push out is people who repeatedly violate it (and by doing so that person is pushing out other people). Outreachy is bringing people in, it doesn’t push anyone out.
Someone decided to leave because aspects of the project no longer meshed with their political world view. You see this as “pushed out”, but I don’t really see who is pushing them here (unless there are some CoC violations we don’t know about or something, but AFAIK there aren’t).
I use this feature a lot. I think the article is spot on about the problems with the feature, but some people might be imagining it as more vigorous/violent than it is.
It’s hard to communicate this, but I find it requires substantially less speed than most people would put into shaking dice in their hand before rolling them. I’m thinking of people who roll dice one-handed, not the people who go wild with two hands, like they’re shaking a cocktail.