Ah awesome link, and the web of data is very close to my heart.
I don’t think this is an endorsement of the literal semantic web. Sadly that effort was an abject failure for many well-documented reasons.
It is good to hear that people are still striving for the core ideas. An awful lot of people, including myself (shameless plug), have been chipping away at this for a while.
The perspective and comment @mattgreenrocks made about standardized protocols being important is something I share. Protocols, schemas and APIs all need to compose at very low functional levels. It is very important to take a holistic view and not go off half-baked on it.
Fingers crossed we’ll get there soon.
If you’re the sort of fella who liked C more than C++ in the past, check out https://ziglang.org. I find it has far fewer surprises and the compile-time features are simple without being weak. Also, it has async without the “what color is your function?” problem.
Bad part is that you have to be competent enough to handle cleaning up resources. I find it easier than the competence required to get past Rust’s compiler.
I find it easier than the competence required to get past Rust’s compiler.
In all likelihood, that means that you’re not cleaning up resources properly in Zig.
Cleaning up resources and memory safety are two completely different things.
You can have a completely memory safe language and forget to close a file.
Cleaning up resources and memory safety are two completely different things.
They’re different things when the resources in question are anything other than memory.
You can have a completely memory safe language and forget to close a file.
You can. Unless of course the file is an RAII object that can only go on the stack.
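For illustration, here’s a rough C++ sketch of what such a stack-only RAII file object could look like (a hypothetical class, not something from the thread):

#include <cstddef>
#include <cstdio>
#include <stdexcept>

// A file handle that is closed automatically when it goes out of scope.
// Copying and heap allocation are disabled, so it can only live on the stack.
class ScopedFile {
public:
    explicit ScopedFile(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("failed to open file");
    }
    ~ScopedFile() { std::fclose(f_); }  // forgetting to close is not possible

    ScopedFile(const ScopedFile&) = delete;
    ScopedFile& operator=(const ScopedFile&) = delete;
    static void* operator new(std::size_t) = delete;  // no heap allocation

    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};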
Why do you think so? For a bit of background, I’ve written some C before professionally, and resource handling was not really a problem for me – after clearing the initial hurdle (which took perhaps a year). I’ve also written a book on Rust, and although it was admittedly a rather mediocre one, I have studied the language a bit more than just an initial scratch.
Something that worries me about Zig is that it has powerful generics, but no RAII at all. It means that if you use a generic Vec(T) and call pop or shrink or clear or whatever on it, you also have to remember to deinit the elements that might have been removed. Generics in a low-level language that doesn’t have Drop seem like a good recipe for memory leaks to me.
I do like many of Zig’s ideas :)
Here’s an old issue where they discuss this https://github.com/ziglang/zig/issues/782 – seems like they initially thought it would be too complicated and not in line with Zig’s philosophy but also that they’re gonna need some limited form of RAII to handle some parts of async.
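To make the contrast concrete, here’s a minimal C++ sketch, with destructors standing in for Rust’s Drop and a made-up Resource type: the container runs destructors for removed elements automatically, which is exactly the step Zig code currently has to remember to do by hand.

#include <cstdio>
#include <vector>

// Hypothetical resource whose destructor releases something (closes a
// handle, frees memory, ...) -- the Drop/RAII equivalent.
struct Resource {
    int id;
    explicit Resource(int i) : id(i) { std::printf("acquire %d\n", id); }
    ~Resource() { std::printf("release %d\n", id); }
};

int main() {
    std::vector<Resource> v;
    v.reserve(2);       // avoid reallocation so the output stays simple
    v.emplace_back(1);
    v.emplace_back(2);
    v.clear();          // both destructors run here, nothing to remember
}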
I’ve written and read a lot of C, and worked with dozens of engineers on C codebases. I’ve also been to a fair share of conferences where I met world class programmers.
I have never met anyone I’d trust to write C without screwing it up. Manual resource handling never works. Valgrind and asan have helped a lot but they only work with adequate coverage, and even then…
The cynicism in this thread is pretty stunning. Described here is a plan to design and implement a fine-grained, library-level capability-based security system for a platform that’s seeing massive adoption. While this isn’t novel from a research perspective, it’s, as far as I can tell, the first time this stuff is making it down from the ivory tower into a platform that’ll be broadly available for application developers.
Lots of folks seem pretty scornful about the amount of stuff in current browsers, even though non-browser use cases are explicitly highlighted in this piece. There’s an implication that this is a reinvention of things in BSD, even though this approach is explicitly contrasted with OS process-level isolation, in this piece. There’s the proposed alternative of simply only using highly trusted, low-granularity dependencies, which is fair, but I think that ship has sailed, and additionally it seems like avoiding the problem instead of solving it.
I’m a bit disappointed that a set of tools and standards that might allow us developers to write much safer code in the near future has garnered this kind of reaction, based on what I see as a cursory and uncharitable read of this piece.
and additionally it seems like avoiding the problem instead of solving it.
Avoiding a problem is the best category of solution. Why expend resources to fix problems you can avoid in the first place?
I believe “avoiding” here was meant as “closing one’s eyes to [the problem]” rather than the more virtuous “removing the existence of [the problem]”.
In a world where even SQLite was found to have vulnerabilities, I believe any alternative solution based on some handwaved “highly trusted huge libraries” is a pipe dream. Please note that in actual high-trust systems, AFAIK limiting the permissions given to subsystems is one of the basic tools of the trade, with which you go and actually build them. A.k.a. “limiting the Trusted Computing Base”, i.e. trying to maximally reduce the amount of code that has access to anything important, isolating it as much as possible from interference, and then verifying it (which is made easier by the amount of code needing verification being reduced through the previous step).
If you’re interested in recent projects trying to break into the mainstream with the capabilities-based approach, such as suggested in the OP IIUC, see e.g.: seL4, GenodeOS, Fuchsia, Pony language.
That said, I’m not an expert. I very much wonder what’s the @nickpsecurity’s take on the OP!
Been pulling long shifts so initially no reply. Actually, I agree with some that this looks like a marketing fluff piece rather than a technical write-up. I’d have ignored it anyway. Ok, since you asked… I’d like to start with a bottom-up picture of what security means in this situation:
1. Multicore CPU with shared, internal state; RAM; firmware. Isolated programs sharing these can have leaks, esp cache-based. More attacks on the way here. If SMP, process-level separation can put untrusted processes on their own CPU and even DIMMs. The browsers won’t likely be doing that, though. We’ve seen a few attacks on RAM appear that start with malicious code running on the machine, too. These kinds of vulnerabilities are mostly found by researchers, though.
2. OS kernel. Attacked indirectly via browser functionality. No different than current Javascript risk. Most attacks aren’t of this nature.
3. Browser attack. High risk. Hard to say how WebAssembly or capability-based security would make it different given the payload is still there hitting the browser.
4. Isolation mechanisms within the Javascript engine trying to control access to browser features and/or interactions between Javascript components. Bytecode Alliance sounds like that for WebAssembly. Note this has been tried before: ADsafe and Caja. I think one got bypassed. Bigger problem was wide adoption by actual users.
So, the real benefits seem to be all in 4, leaving 1-3 open. Most of the attacks on systems have been in 3. Protecting in 4 might stop some attacks on users of web pages that bring in 3rd-party content. Having run NoScript and uBlock Origin, I’d say they do show that a lot of the mess could be isolated at the JS/WA level. Stuff does slip through often due to being required to run the sites.
You see, most admins running those sites don’t seem to care that much. In many cases, they could avoid the 3rd-party risks but bring them in anyway. The attack opening could come in the form of stuff from them that the user allowed which interacts with malicious stuff. Would capability-secure, WA bytecode help? Idk.
Honestly, all I can do is illustrate where and how the attacks come in. We can’t know what this can actually do in practice until we see the design, the implementation, a list of all the attack classes, and cross-reference them against each other. It’s how you evaluate all of these things. I’d like to see a demo blocking example attacks with sploit developers getting bounties and fame for bypasses. That would inspire more confidence.
Agreed, this is no silver bullet.
WASM is nice and has lots of potential. A lot of people when first seeing the spec also thought, “cool that’ll run on the server too over time”. There are use cases where it is interesting. I use it in my projects here and there.
But come on: grand plans of secure micro-services and nano-services made from woven carbon-fiber nanotubes, and a grandiose name, Bytecode Alliance. It’s like a conversation from the basement of numerous stoner parties. I’m pretty sure a lot of people here have been writing p-code ever since, well, p-code.
I don’t want to undermine the effort of course, and all things that improve tooling or foster creativity are greatly appreciated.
They should tone down the rhetoric and maybe work with the WASMR folk. Leaping out with a .org after your name comes with a great deal of responsibility.
There is a lot more that needs doing with WASM than simply slapping it into a server binary and claiming it is a gift from heaven, when the actual gift from heaven was the Turing Machine.
All that said, please keep it up. The excitement of new generations for abstract machines is required, meaningful and part of the process.
Almost every programming language and set of tools uses third-party dependencies to some extent these days. There are clear benefits and increasingly clear costs to this practice, with both contingent on tooling and ecosystem. I agree that the node.js model of a sprawling number of tiny dependencies is a quite poor equilibrium; the Rust world seems to be taking this to heart by rolling things back into the standard library and avoiding the same kind of sprawl.
Regardless, third-party dependencies from heterogeneous sources will continue to be used indefinitely, and having tools to make them safer and more predictable to use will probably bring significant benefits. This kind of tool could even be useful for structuring first-party code to mitigate the effects of unintentional security vulnerabilities or insider threats.
Saying “just avoid doing it” is a lot like abstinence-only sex ed: people are almost certainly still going to do it, so instead we should focus on harm reduction and a broad understanding of risks.
I’ve given this some more thought, and I think the reason this raises my eyebrows is a combination of things. It’s a long article on a PR-heavy site that tries to give the impression that this very young technology (WASM) is the only way of solving this problem, it doesn’t mention prior art or previous working attempts (try this on the JVM), and it doesn’t acknowledge what this doesn’t solve and the implications of this model (now you can’t trust any data that comes back from your dependencies, even if they’re well-behaved in the eyes of the Alliance).
Every new thing in security necessarily gets a certain amount of cynicism and scrutiny.
So, drawing a comparison to the JVM is fair.
IMO, the key differences are:
True, true. I was thinking something like an experiment to demonstrate how nanoprocess-ifying known vulnerable or malicious dependencies (within the JVM runtime) solves a security issue.
We are at the mocking level of acceptance! This means that it will be a thing, a real thing in just a matter of months!
Web is gonna Web, so we all know how this will work out.
“Web standards experts” continuously add stuff, and then the following decades are spent plugging the holes they opened, because once it’s added “we can’t break the web”.
If they were supposed to design a toothbrush, people would die.
Don’t we have the same problem on lobste.rs? A rating of -20 is not really common around here, but it’s technically certainly possible. Maybe it would make sense to cap the negative score somewhere?
On lobsters at least, I think a better idea would be to ask the downvoter to explain their reasoning in a comment private to the post. On the other hand, the current system of simply choosing a reason to downvote seems to work well. I rarely see downvotes here.
The fact that new comments don’t show their score until they’ve aged out a bit helps a lot. I’d say the effect is still present but greatly reduced in impact.
I always thought the down-vote concept a bit flawed as it is too easy to abuse.
Better not to have a down-vote button imho.
Not having one raises the bar, so that the person wishing to down-vote and the original comment writer engage in a discussion instead, which seems healthy.
TBH the few times I used downvote I didn’t really check if it was already at some low number.
/runs off to go look up if own downvotes are shown somewhere
I’m kind of speechless. This looks truly genuine, and it makes me hopeful for the Linux kernel community (and all the other open-source communities it influences!) in a way I hadn’t predicted would ever happen.
I feel quite the opposite. I think it’s very sad that the reddit/twitter bandwagon of people that never actually contribute anything to open source but love to rip those that do to shreds has finally gotten to him.
This argument is a classic to be found in all of those discussions, but doesn’t hold any water.
The no-contribution Twitter crowd, right?
The list could go on and on. Find another angle, this one insults the intelligence of everyone at the discussion table. It only works if you don’t name names; if you do, you suddenly find that these people do contribute.
Finally, as someone managing a huge FOSS project with > 100 maintainers, I think this gatekeeping isn’t part of open standards. If your project is open and contribution is free to everyone, the barrier for criticising your project’s methods and practices should be as low as the barrier for contributing anything else: as close to zero as possible. This is also important for practices to travel between projects.
And very recently, Alexander Popov, no lightweight by any measure. https://lwn.net/SubscriberLink/764325/09702eb949176f55/.
I’m sympathetic to Torvalds’ critique, if not his wording. It seems bizarre to just live with kernel code that uses uninitialized data structures and doesn’t cross-check pointers and hope that some complex mechanism will ameliorate the problem.
Sure, his technical arguments were probably sound, as usual, but his abuse of Popov left the latter “emotionally dead for weeks”. Popov might’ve gotten the fixes made and thus the patch committed much sooner had Linus not abused him, so the project also loses.
I am not convinced the patch ever became worthwhile - but I agree that Linus’s argument style was counterproductive and abusive.
I think you’ve got a selection bias in which criticism you’re seeing. From my perspective, the people who I hear take the most issue with Linus’s conduct are largely people who’ve quit kernel development as a result of it, or people with many years of OSS experience (such as myself).
I’m not an advocate of the absurdly excessive personal attacks for which Linus is known but at the same time I think quitting kernel development because of those personal attacks shows a lack of awareness of how OSS and specifically Linux operates. The reality is that Linux is Linus’s project and he’s incentivized to take your patches to make his project better, not to build a cooperative community. The community, if one could call it that, is incidental to Linus’s incentives.
If a person quits because of Linus’s behavior, it signals to me that their motivation had something to do with the approval of others and unfortunately those motivations are incompatible with Linux’s development process. Linus’s insults are just the tip of the iceberg when it comes to all the other problems that will arise due to the mismatched expectations. A famous example was when Ingo Molnar rewrote Con Kolivas’s CFS, or the multiple times grsecurity’s patches were rewritten by others.
Linus basically doesn’t owe anyone anything, and it’s not because he’s a jerk (though maybe he is), it’s because of the emergent social phenomena around OSS. Similarly, no one owes Linus anything. Many actors out there are using Linux to facilitate their billions of revenue and not paying Linus anything. If you write code and put it out there, there is no obligation that what you want to happen with it will happen, and it’s not unlikely that what happens with it will hurt your ego. If someone quits kernel development because of Linus’s behavior, they really should reexamine why they want to write OSS code in the first place and whether or not OSS development is the best way to reach their goals.
All that said I don’t necessarily disagree with Linus’s recent decision. It shows a conscious effort on his part to change the strategy used to sustain the project. I’m only criticizing those who may have mismatched expectations of the potential outcomes in OSS work.
The reality is that Linux is Linus’s project and he’s incentivized to take your patches to make his project better, not to build a cooperative community.
Linus is an employee of the Linux Foundation, a nonprofit corporation with stakeholders like Red Hat, Google, and Intel, and he owes his employers their money’s worth as much as anybody else who works for hire.
I would agree with you if this was still the Linux that wasn’t going to become a big thing like Hurd. But Linus chose to remain the project lead even as the job became less technical and more political, and when they decided to pay him to work on it full-time, he accepted. There’s money, there’s a trademark, and there’s inertia undermining any claim that the situation is totally voluntary and that nobody owes anybody anything.
And that’s before we even consider the fact that there is a huge and informal web of soft obligations because human beings don’t work the way you say they do.
Linus owns the trademark and even if he didn’t work for the Linux Foundation he would still be the maintainer of Linux. The entire development structure is centered on him. No company could successfully and sustainably fork Linux if Linus decided to operate against their goals.
I made no claim as to how human beings work. My claim is simply that OSS is essentially a free-for-all and those that aren’t acutely aware of that and incorrectly treat OSS like a traditional organization that has inbuilt obligations to their well-being will be inevitably burned. Linux is not a cathedral, it’s a bazaar. http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/
No I’m talking about the much larger crowd of people applauding him for ‘moderating himself’ and other such nonsense. I’m talking about the huge crowd of people that act like every message he sends is a scathing personal attack on someone for indenting something incorrectly.
Well, perhaps it’s just that our perception of the (ultimately utterly pointless) social media reactions is colored by our preconceptions. I’ve mostly seen people praise and defend him.
I’m not sure what the resistance is about. It seems to me that all these CoCs are just a way of codifying “don’t be an asshole”, and it’s perplexing that people get so angry about it. But it cannot be that, right? Surely you and others are not against “don’t be an asshole” as a work ethic?
If not that, then what? I’ve listened to Sam Harris quite a lot recently, so I have some feeling about the problem of identity politics et al, especially in the US. I’m just still not exactly convinced, because I don’t see it happening. Perhaps it’s not that big a problem in Europe?
I’m not sure what the resistance is about. It seems to me that all these CoCs are just a way of codifying “don’t be an asshole”, and it’s perplexing that people get so angry about it. But it cannot be that, right? Surely you and others are not against “don’t be an asshole” as a work ethic?
I think a lot of this is related to the “hacker identity”, which is strongly tied up with counterculture, stepping outside/dismissing/rebelling against social conventions. For example, in the genre of cyberpunk (which I’d consider a hacker’s dream world, even if it’s a dystopia) there is almost no law and order, if not outright anarchy; everyone does their own thing and your skill is the only thing that counts.
So I think a lot of the reaction is “who are you to come in and police the way we’ve always been doing things?”. I suppose a lot of the people making these claims are seen as outside intruders enforcing their “outside” morals on the hacker “community” at large (if there is even such a thing). For this reason I think it’s important that people like Linus, who are truly regarded as being “from” the community, are signaling that change needs to come. We’re all human, not machines.
I think there are two big issues. One is that “hacker culture” has historically attracted people with social issues. I know that it appealed to me as an unpopular, nerdy, shy kid: I didn’t have a lot of outlets, so computers and the Internet helped me form my personality. That’s great; I don’t know where I’d be without it. That leads into the second issue, though, which is that it’s utterly dismissive of all the traditions we call the “humanities.” I am lucky, I think, in that I’ve always been “into” literature, philosophy, theology, and so on, and could balance my computer-nerddom with those fields. (In fact, my only college degree is a BA in English.) Without that tempering influence, it’s very easy to get caught up in an aspiration-to-Spock sort of behavioral cycle.
Surely you and others are not against “don’t be an asshole” as a work ethic?
Who defines what an ‘asshole’ is?
My problem is that Codes of Conduct explicitly and implicitly privilege some groups but not others for protection, and that even when de jure they protect some groups, de facto they do not.
Moreover, I find the idea that we should generally value social etiquette more than technical excellence to be troublesome. Are there people who are so socially rude that they should be shunned? Sure. But should shunning be our go-to? I don’t think so.
I find the idea that we should generally value social etiquette more than technical excellence to be troublesome.
Is that what’s actually happening? I thought this was about valuing both.
It seems to me that all these CoCs are just a way of codifying “don’t be an asshole”, and it’s perplexing that people get so angry about it.
I can’t speak for all opponents, but for me at least I disagree with it being “codified”, or rather with formalizing what essentially isn’t formal. People contributing to software won’t just suddenly become good people because there is a CoC. It’s like wanting to prevent a husband from abusing his wife by requiring him to hold up his hands whenever they are in the same room.
What I usually fear from these kinds of things is that they on the one hand subvert genuine communities, customs and practices, while possibly encouraging the harmful parts of these communities to discreetly and dishonestly live on, much harder to fight or criticize. Essentially it’s taking a passive stance towards real issues people should actively and collectively oppose – say harassment or insulting people.
Turning issues of civility and decency into rules, especially if these are too vague, always bears the danger of being on the one hand abused by those trying to evade them (“oh, that’s not what I meant”) and on the other hand by those enforcing them (“rules are rules”)…
But then again, I’m not a Linux contributor (although I would be honored if I managed to get there one day), and I can just hope it turns out well for them, and that the issue doesn’t get instrumentalised.
People contributing to software won’t just suddenly become good people because there is a CoC. It’s like wanting to prevent a husband from abusing his wife by requiring him to hold up his hands whenever they are in the same room.
I find that analogy deeply flawed (and somewhat bizarre). The CoC doesn’t require anyone to do anything as ridiculous as hold their hands in the air while in the same room as their wife.
Essentially it’s taking a passive stance towards real issues people should actively and collectively oppose – say harassment or insulting people.
So you’re saying that rather than having a CoC it would be better if, every time Linus or some other kernel developer was offensive, other developers stepped in and told them off? How do you make that happen? Do you not think the CoC is a step towards making that happen?
The CoC doesn’t require anyone to do anything as ridiculous as hold their hands in the air while in the same room as their wife
Of course not literally, but many people have to adjust their own behavior in unusual (and often enough unknown) ways. I’ve experienced communities on the Internet which banned their users for using any phrase that has to do with eyesight disabilities (e.g. “I can’t see what’s wrong”), and most people simply just didn’t know about this.
And the point of my analogy still remains: the issue with the husband beating his wife isn’t that he can but that he wants to, consciously or unconsciously. Just saying “Don’t” won’t help solve the problems in the long term; it just suppresses them.
So you’re saying that rather than having a CoC it would be better if, every time Linus or some other kernel developer was offensive, other developers stepped in and told them off?
The way I see it, this would obviously be better. This means that the community has a strong sense of internal solidarity and openness that they manage to enforce by their own means. Essentially this means that the goals of the CoC come naturally and authentically to the members.
How do you make that happen? Do you not think the CoC is a step towards making that happen?
I really can’t say, nor do I know. Nothing I’m saying is authoritative or really substantial, I’m just trying to give a more reasonable criticism of codes of conducts than certain other people in this thread.
Just saying “Don’t” won’t help solve the problems in the long term, just suppresses them.
Suppressing the problem does help, though. I don’t want to continue the husband/wife analogy as I find it distasteful, but once you establish norms of good (or at least better) behaviour, people do adjust. And by having the CoC, even though it doesn’t cover every case, it sets up some basic guidelines about what will and won’t be accepted - so you remove the excuse of “no this is fine, everyone talks this way, deal with it” from the outset. This alone can make people who otherwise feel vulnerable, and/or belong to marginalised groups etc, to feel more comfortable.
I’d prefer we didn’t need CoCs, but clearly we need something to make development groups less unpleasant to participate in. And even if you don’t think they’re effective, I can’t see how they hurt.
I guess we just have different views on the question of whether issues are to be addressed or suppressed (in my eyes willfully ignored). But that’s fine. There’s more I could say, but I won’t for the sake of brevity, except that a CoC should (imo) always be the last resort when everything else has failed. A kind of martial law, since they aren’t just guidelines or tips, but can justify very drastic behavior.
I guess we just have different views on the question of whether issues are to be addressed or suppressed
I think that’s a mis-characterization. We both seem to think that offensive behaviour should be addressed by other people stepping in as appropriate, but I see the CoC as prompting this to happen, whereas you are saying that you don’t know how to make it happen and that the existence of a CoC will make people suppress their bad behaviour and that this is bad (for some reason which I’m not clear on).
I would say that the existence of a CoC may make people suppress an urge to spout off an offensive rant against another developer, and that’s a good thing. I also think that it lends a stronger position to anyone who does step in when offensive behaviour does occur (despite the existence of the CoC). I think it’s more likely that, rather than completely suppressing offensive behaviour, the CoC causes more people to respond and challenge such behaviour, which is the outcome that we both seem to think is ideal (and which leads to less of the behaviour occurring in future). Now if you disagree that the CoC will lead to that happening, that’s fine, but:
A kind of martial law, since they aren’t just guidelines or tips, but can justify very drastic behavior.
That’s just ridiculous. A CoC is nothing like martial law. The only behaviour it justifies is that of stepping in to control other, offensive, behaviour:
… to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
Maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.
These are the only behaviours that are actually “justified”, to use your word, rather than expressly prohibited, by the CoC. I think saying these are “drastic” and comparing to martial law is clearly an immense level of exaggeration.
but I see the CoC as prompting this to happen, whereas you are saying that you don’t know how to make it happen and that the existence of a CoC will make people suppress their bad behaviour and that this is bad (for some reason which I’m not clear on).
I don’t want this to go on for too long, so I’ll just quickly clarify my two main points:
So it’s not that it’s irrelevant, but that it may go wrong, specifically when applied too quickly or without introduction. But again, maybe not.
A CoC is nothing like martial law.
You’re right, I should have put “martial law” in quotes. My point is that it shouldn’t be a permanent solution, but as you said try to push a community in a better direction, “stabilize” a situation so to speak. Even here between us we see how different backgrounds invoke different images and connotations, with examples as simple as metaphors.
You’re right, I should have put “martial law” in quotes. My point is that it shouldn’t be a permanent solution
Ok, I understand now what you meant.
banning obvious misbehavior won’t change people
I am not sure that I agree with this. For one thing, “obvious misbehaviour” may be generally understood but is not obvious to everyone. You will see many people arguing that Linus’ rants are perfectly acceptable, for various reasons. By making a clear statement that “behaviour X is wrong” you are removing the doubt.
at worst invite them to a passive aggressive game of trying to evade the rules while still trying to be mean or hurtful
I believe that the Contributors’ Covenant deliberately avoids trying to produce an exhaustive list of disallowed behaviour, precisely so that the rules can’t be avoided in this way. Yes, there will always be some problematic individuals who push the limits regardless. But is it better that they are at least constrained in this way, rather than being able to be openly offensive? I think so. And I think this anyway is somewhat irrelevant to the issue of a CoC; even if you generally enforce good behaviour without a CoC, there can always be trouble-makers who test the limits.
a CoC is a principally passive stance, where active action is necessary trying to address and resolve issues. Suppressing discussion where necessary may (again) lead to an overall harmful atmosphere
A CoC is just a document, so it is passive in that sense, yes. But it doesn’t prevent any affirmative action - it encourages it.
What this seems to boil down to, if I’m reading you correctly, is that you’re saying that it’s better to allow offensive behaviour to occur - and then to have the perpetrator reprimanded - than it is to document what is considered offensive behaviour so that it will be deliberately avoided. I cannot see how that is better. If someone’s response to a rule is to try to find underhanded ways to work around that rule, what difference does it make whether the rule is written down or enforced only informally?
For one thing, “obvious misbehaviour” may be generally understood but is not obvious to everyone. You will see many people arguing that Linus’ rants are perfectly acceptable, for various reasons.
Ok, but these people would say these rants are good because they are brutal or otherwise “not nice”. Nobody, or at least nobody I’ve seen, claims that Linus is always “kind” and “civil” and people are just misunderstanding him.
Yes, there will always be some problematic individuals who push the limits regardless. But is it better that they are at least constrained in this way, rather than being able to be openly offensive? I think so. And I think this anyway is somewhat irrelevant to the issue of a CoC; even if you generally enforce good behaviour without a CoC, there can always be trouble-makers who test the limits.
I get your point. I still believe there to be a difference between the two cases – maybe not immediately visible, but on a more symbolic level. In the first case the trouble-maker stands in conflict with the (official, formal) document and will try to defend his or her behavior on semantic issues and misreadings, while in the second case the conflict is more direct with the “community”. This is not to say that no rules should be made or no behavior should be sanctioned – just that in the long term this should be an internal and organic process (e.g. self-made (maybe even unofficial) “community guidelines” that serve to introduce new members), not ordained from above.
you’re saying that it’s better to allow offensive behaviour to occur - and then to have the perpetrator reprimanded - than it is to document what is considered offensive behaviour so that it will be deliberately avoided
I wouldn’t phrase it that way, since to me many of these terms are too vague. Anyway, in my eyes this seems to be unrelated to the CoC: from my experience most people encounter a CoC not by reading it before they do anything, but by people using it as “legislation” – they make a mistake and are then banned and excluded – often enough permanently, because it’s just the easiest thing for moderators to do. Either way, the “offensive act” has taken place – with a quick and formal process leading to confusion on the one side, and an honest attempt to point out what a person has done (on a case-by-case basis) on the other.
For one thing, “obvious misbehaviour” may be generally understood but is not obvious to everyone. You will see many people arguing that Linus’ rants are perfectly acceptable, for various reasons.
Ok, but these people would say these rants are good because they are brutal or otherwise “not nice”. Nobody, or at least nobody I’ve seen, claims that Linus is always “kind” and “civil” and people are just misunderstanding him.
That’s my point. The CoC makes it clear that we are expected to be civil. Therefore if anyone goes on an uncivil rant, you can’t claim that it’s ok because [whatever reason], as it’s been explicitly stated that it’s not acceptable. You’re making the community’s rules about certain unacceptable behaviour explicit, and removing the inevitable and fruitless debates over whether it’s ok to swear at someone for submitting a bad patch etc.
Whereas now, people don’t understand that it’s not ok to be uncivil.
Either way, the “offensive act” has taken place – with a quick and formal process leading to confusion on the one side and a honest attempt to point out what a person has done (on a case-to-case basis) in the other.
Other than anecdotal examples where it may have been the case with some unknown number of other projects (links to relevant mailing list posts would be interesting to see), I don’t see any evidence that a CoC will necessarily lead to confusion nor to people being wantonly banned from participating for one-off infractions; I certainly don’t think it’s designed or intended for that.
I’ve experienced communities on the Internet which banned their users for using any phrase that has to do with eyesight disabilities (e.g “I can’t see what’s wrong”), and most people simply just didn’t know about this.
As a visually impaired person with several friends who are totally blind, this strikes me as ridiculous. I have no problem with such everyday use of “see” and related words. I think my blind friends would agree.
What’s so bad about people in positions of power to stop being abusive? Isn’t it something to applaud?
It’s possible the CoC, the heart-warming public statement and Linus taking a time-out is a double-blind.
Well he could actually be a little burnt out as well, which is also totally fine.
I totally support him whichever way the pendulum falls.
I like parts of the transaction model. It’s how I would/do do it - the commit collection, batching and asynchronous reply model.
That said, why are we still using a decades-old domain-specific language from 1979 (SQL) to interact with databases?
A language that has an extremely low impedance match with “data”, when we have a perfectly good language from two decades earlier… 1958 (LISP) that does a better job, and doesn’t require an ad-hoc query planner that tries (and fails) to outsmart the person planning the query.
Not only that but clearly someone didn’t get the memo that relational database models are so “1995”.
I applaud the efforts, and cynicism aside it really looks like they are doing their best here, and I appreciate that there is still some time to go before SQL falls away.
Unfortunately I can’t really like the fact that the authors are working this hard on what really is a dead paradigm.
Very well made paper however.
Obviously this comment comes off as authoritative, snarky and denigrating to the people who did the work.
That’s really not my intention however; it’s more like
“just sayin…”
Probably because it’s the least dead paradigm in the data world. Tools are useful to the extent that they can be employed to solve problems, and the first part of that is minimizing the number of new concepts the user must learn before they can solve problems.
Just to go a bit further in agreement with your comment.
Select * from Customers where Balance < 0
I just made that up so it probably isn’t valid SQL. It’s years since I wrote a lot of SQL.
Sure, that is easy and as you said, it helps people get going and solve problems.
But look what they just did. They learned a DSL that wasn’t needed. Transducers like Map/Filter/Reduce (or better) are much clearer for them and the machine.
Furthermore those translate to many other languages and compose much more easily.
I’m not convinced it is easier to learn SQL than just learning operators based on basic set theory.
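For instance, the query above can be written as a plain filter; here’s a quick C++ sketch with a made-up Customer type (illustrative only):

#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// Hypothetical record type standing in for the Customers table.
struct Customer {
    std::string name;
    double balance;
};

// Roughly `select * from Customers where Balance < 0`, expressed as a
// filter that composes with any other algorithm in the same way.
std::vector<Customer> overdrawn(const std::vector<Customer>& customers) {
    std::vector<Customer> out;
    std::copy_if(customers.begin(), customers.end(), std::back_inserter(out),
                 [](const Customer& c) { return c.balance < 0; });
    return out;
}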
Not only that, but consider extending SQL to graph databases, geospatial data, etc.
Sure, it can be done, and has been, but only at the cost of vendor-specific lock-in or extremely obtuse syntax.
I used SQL a great deal back in the day. Before I “saw the light”. It’s just there are things that are equally expressive but compose much better and honestly not that much more difficult.
I think that the problem is a matter of how principles are introduced.
Oh you need to do data? You need to use SQL.
It doesn’t work like that.
Yes I get that and you are right of course.
When SQL started it really seemed quite good and worked awesomely.
Then it got extended - a lot.
It got extended to such a great extent that when I look at SQL now, I feel like the thousands of people who have spent so much time learning it properly have been painted into a corner, and I really feel sorry for them.
I have good friends who are top-notch SQL DBAs, but they can’t transfer their skills easily. They are upset and have been going through the five stages of grief for some time.
Data is not flat relational tables anymore (you could argue it never was), and I really feel bad that they really did do “Computer Science” to a high level on a very obscure technology that is destined for the same fate as COBOL.
Obviously they get paid a lot. So there is that.
I think we should remember that the whole point is less about a particular language but rather about homoiconicity.
In systems where the means of abstraction is the basis of the system itself, abstraction becomes a no-op.
My point is there is no need to stop with this at a mere computer language when we have every opportunity to go deeper
This seems like a hot topic; only the other day we were discussing AST editors, Intentional Programming and AOP.
I’d never heard the term “Design by Introspection” before, and that certainly may be a term that Andrei coined, but he certainly didn’t invent this, and he certainly didn’t invent static if, although he is owed much gratitude for taking it from its initial inception to the beautiful implementation in DLang. Nice to see C++ finally catching up btw, which is ironic because AFAIK Visual C++ 6.1 was the first compiler from a mainstream vendor to feature both static if and Design by Introspection, (well that and AspectJ).
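For anyone who hasn’t run into the idea, here’s a rough C++17 sketch of introspection-driven code (a detection trait plus if constexpr; the names are made up): the generic code inspects the capabilities of the type it’s given and adapts, rather than demanding a fixed interface.

#include <cstddef>
#include <type_traits>
#include <utility>

// Detect at compile time whether T has a reserve(size_t) member.
template <typename T, typename = void>
struct has_reserve : std::false_type {};

template <typename T>
struct has_reserve<T, std::void_t<decltype(std::declval<T&>().reserve(std::size_t{}))>>
    : std::true_type {};

// The generic code introspects the type and picks a code path accordingly,
// in the spirit of `static if` / Design by Introspection.
template <typename Container>
void prepare(Container& c, std::size_t expected) {
    if constexpr (has_reserve<Container>::value) {
        c.reserve(expected);  // only instantiated for types that support it
    }
    // otherwise this branch simply doesn't exist for that type
}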
As much as I love the meta-programming facilities in DLang, there is a difference between how Andrei approached it and a less tightly bound approach. In DLang, it’s hard for Design by Introspection to be done by someone other than the programmer who wrote the regular code. That is to say, there is a cross-cutting concern.
Another approach is that the allocator is chosen by external forces - enzymes that do deep introspection of the local or global AST.
What you do is open up the API to the compiler at the point just after the AST has been built and instead of hard-coding optimizer algorithms, you provide an open plug-in model where your “Intentional Programming” enzymes go and get busy on the code. These enzymes can be totally hidden and autonomous, or they can be exposed as optionally parameterized meta-tags that allow the application programmer to tweak the settings to guide the intentions.
In this way, things like optimizers are just a small class of things that can be written. You can just as easily have a rule that looks at your code in the middle of the compilation and sends a mail to your boss if you forgot to use Hungarian notation.
Once you have that, selecting allocators is left to people who know about selecting allocators, and you get to write:
std::vector<SomeType> myBigFatVector;
or if you want to be explicit
[ fix_my_allocator ]
std::vector<SomeType> myBigFatVector;
or
[ meta::allocator::hint(meta::allocator::type::slab) ]
std::vector<SomeType> myBigFatVector;
… knowing that someone else, who knows a lot more about allocation strategies and the big picture, has built something smart that does the right thing. (These are examples, not real code.)
More importantly, the policy that goes with the enzyme is not embedded in the code that rides with the implementation of either vector or a specific allocator, and that policy can be changed at the drop of a hat. Bye-bye cross-cutting concerns.
Anyway, there is a lot more to it than this, and I’ve barely touched on it here, but yes, “Design by Introspection” is extremely powerful and opens many doors where you only need provide your intention.
We had it to the point where I could mark up a class like so:
[ Window ]
class MyWindow
{
    onMouseDown(auto mouseEvent) { ... }
};
That thing would add your window creation code, message handler code, wire up the message handlers, tell the linker to add additional resources and a whole lot more, working not only inside the language, but across the entire tool-chain.
We even had Bjarne and a bunch of the committee bought in, but politics and time constraints led to a less-than-adequate solution. That’s not to say it still isn’t worth pushing for, and honestly this technique has nothing to do with C++ or any particular language, of course.
Now, back to the title “Abstracting is NOT about names”: well, abstracting is about elevating the communication and implementation of concepts (I’m not talking about C++ concepts here). The names of the abstractions are important, and so is the implementation of how the concepts get manifested, and to me they kind of go hand in hand.
That said, concepts are generally not their own implementors, but for implementations to come a-running, it’s necessary to fully convey the idea, and that means the aspects of their identity… to which you can give a name.
Sorry for the long post - just my 2c
I like the article by the way, and I’m very happy to see this stuff being explored.
This is so wonderful.
It got me thinking though: CGL can be time-stepped with a pretty simple GPU program. But now I’m thinking, why stop at the rules of CGL? Go the whole hog and represent actual transistors, gates and other higher-level things while you’re at it, using different texel colors and even multiple layers with through-hole interconnects. Maybe people do this already, I have no idea.
In any case, what a tour de force - wow!
Far as matching features to objectives, those I know that were designed for sure are Ada, Wirth’s stuff (esp Oberon’s language & system), and Noether. Ada was done by systematically analyzing the needs of programmers plus how they screwed up. The language’s features were there to solve problems from that analysis. The Wirth languages aimed at the minimal amount of language features that could express a program and compile fast. Cardelli et al did a nice balancing job in Modula-3 for a language easy to analyze, easier to compile than C++, handles small stuff, and large stuff. Noether addresses design constraints more than any I’ve seen so far by listing many of them then balancing among them.
https://cr.yp.to/bib/1995/wirth.pdf
https://en.wikipedia.org/wiki/Modula-3
https://tahoe-lafs.org/~davidsarah/noether-friam4.pdf
I don’t know about Smalltalk. It had a nice design from what I’ve seen but I don’t know about the process that went into it. It could’ve been cleverly hacked together for all I know. Scheme’s and Standard ML’s languages seem to lean toward clean designs that try to balance underlying principles/foundations against practicality with some arbitrary stuff thrown in there. There’s also variants of imperative and functional languages designed specifically for easy verification in a theorem prover. They have to subset and/or superset them to achieve that.
Smalltalk was (and is!) very much a designed language, with strong vision and principles. Alan Kay has plenty to say about this, but the best source is Dan Ingalls: http://www.cs.virginia.edu/~evans/cs655/readings/smalltalk.html
It also dates from an era before the language/system divide, so is unfortunately misunderstood by most contemporary “language” people. Richard Gabriel has a good essay about this: http://www.dreamsongs.com/Files/Incommensurability.pdf
I love this part:
A way to find out if a language is working well is to see if programs look like they are doing what they are doing. If they are sprinkled with statements that relate to the management of storage, then their internal model is not well matched to that of humans.
Couldn’t agree more.
More generally: Programming languages are supposed to translate human concepts to machine concepts, in the most efficient way possible and without hand-holding.
Of course all current programming languages completely fail in this regard at the moment, but I believe we should still keep our eyes on this as the ultimate goal.
It’s very hard for programming languages to do this at present because we don’t have a clean way to express human concepts to machines. Current language syntax and grammar is a very poor channel to communicate these things, since we’re using machine level formalisms, not human formalisms as the foundation for design.
I believe if we do more research into how to express and channel human concepts, then future programming languages will have a much better chance at succeeding in this endeavor.
Also it’s very interesting how Alan Kay and Dan Ingalls thought back then. As @minimax pointed out, the essays were written “before the language/system divide”.
People really did think in much higher level ways regarding Human Computer Interaction back then. Somewhere along the line we forgot about philosophy and the human component. It would be nice to get back to that at some point.
We’ve been making great strides with NLP over the decades but NLP still doesn’t help us with “the bit in the middle”.
Nothing against Rust, but for example, I really don’t give a damn about the borrow checker, and nor should anyone; we shouldn’t have to.
Also Go. See https://talks.golang.org/2012/splash.article
We are excited to continue experimenting with this new editing paradigm.
That’s fine, but this is not new.
Structured editors (also known as syntax-directed editors) have been around since at least the early 80s. I remember thinking in undergrad (nearly 20 years ago now) that structured editing would be awesome. When I got to grad school I started to poke around in the literature and there is a wealth of it. It didn’t catch on. So much so that by 1986 there were papers reviewing why they didn’t: On the Usefulness of Syntax Directed Editors (Lang, 1986).
By the 90s they were all but dead, except maybe in niche areas.
I have no problem with someone trying their hand at making such an editor. By all means, go ahead. Maybe it was a case of poor hardware or cultural issues. Who knows. But don’t tell me it’s new because it isn’t. And do yourself a favour and study why it failed before, lest you make the same mistakes.
Addendum: here’s something from 1971 describing such a system. User engineering principles for interactive systems (Hansen, 1971). I didn’t know about this one until today!
Our apologies, we were in no way claiming that syntax-directed editing is new. It obviously has a long and storied history. We only intended to describe as new our particular implementation of it. That article was intended for broad consumption. The vast majority of the users with whom we engage have no familiarity with the concepts of structured editing, so we wanted to lay them out plainly. We certainly have studied and drawn inspiration from many of the past and current attempts in this field, but thanks for those links. Looking forward to checking them out. We are heartened by the generally positive reception and feedback – the cloud era offers a lot of new avenues of exploration for syntax-directed editing.
This is an interesting relevant video: https://www.youtube.com/watch?v=tSnnfUj1XCQ
The major complaint about structured editing has always been a lack of flexibility in editing incomplete/invalid programs, creating an uncomfortable point-and-click experience that is not as fluid and freestyle as text.
However that is not at all a case against structured editing. That is a case for making better structured editors.
That is not an insurmountable challenge and not a big enough problem to justify throwing away all the other benefits of structured editing.
Thanks for the link to the video. That’s stuff from Intentional Software, something spearheaded by Charles Simonyi(*). It’s been in development for years and was recently acquired by Microsoft. I don’t think they’ve ever released anything.
To be clear, I am not against structured editing. What I don’t like is calling it new, when it clearly isn’t. And the lack of acknowledgement of why things didn’t work before is also disheartening.
As for structured editing itself, I like it and I’ve tried it, and the only place I keep using it is with Lisp. I think it’s going to be one of those “worse is better” things: although it may be more “pure”, it won’t offer enough benefit over its cheaper – though more sloppy – counterpart.
(*) The video was made when he was still working on that stuff within Microsoft. It became a separate company shortly after, in 2002.
I mentioned this in the previous discussion about isomorf.
Here is what I consider an AST editor done about as right as can be done, in terms of “getting out of my way”: a friend of mine, Rik Arends, demoing his real-time WebGL system MakePad at AmsterdamJS this year.
Right, so I’ve taken multiple stabs at research on this stuff in various forms over the years, everything from AST editors, to visual programming systems and AOP. I had a bit of an exchange with @akent about it offline.
I worked with Charles a bit at Microsoft and later at Intentional. I became interested in it since there is a hope for it to increase programmer productivity and correctness without sacrificing performance.
You are totally right though Geoff, the editor experience can be a bugger, and if you don’t get it right, your customers are going to feel frustrated, claustrophobic and walk away. That’s the way the Intentional Programming system felt way back when - very tedious. Hopefully they improved it a lot.
I attacked it from a different direction to Charles using markup in regular code. You would drop in meta-tags which were your “intentions” (using Charles’ terminology). The meta-tags were parameterized functions that ran on the AST in-place. They could reflect on the code around them or even globally, taking into account the normal programmer typed code, and then “insert magic here”.
Turned out I basically reinvented a lot of the Aspect Oriented Programming work that Gregor Kiczales had done a few years earlier, although I had no idea at the time. Interestingly, Gregor was the co-founder of Intentional Software along with Charles.
Charles was more into the “one-representation-to-rule-them-all” thing though and for that the editor was of supreme importance. He basically wanted to do “Object Linking and Embedding”… but for code. That’s cool too.
There were many demos of the fact that you could view the source in different ways, but to be honest, I think that although this demoed really well, it wasn’t as useful (at least at the time) as everyone had hoped.
My stuff had its own challenges too. The programs were ultra powerful, but they were a bit of a black-box in the original system. They were capable of adding huge gobs of code that you literally couldn’t see in the editor. That made people feel queasy because unless you knew what these enzymes did, it was a bit too much voodoo. We did solve the debugging story if I remember correctly, but there were other problems with them - like the compositional aspects of them (which had no formalism).
I’m still very much into a lot of these ideas, and things can be done better now, so I’m not giving up on the field just yet.
Oh yeah, take a look at the Wolfram Language as well - another inspirational and somewhat related thing.
But yes, it’s sage advice to look at why a lot of the attempts have failed, at least to know what not to do again. And I also agree that’s not a reason not to try.
From the first article, fourth page:
The case of Lisp is interesting though because though this language has a well defined syntax with parenthesis (ignoring the problem of macro-characters), this syntax is too trivial to be more useful than the structuring of a text as a string of characters, and it does not reflect the semantics of the language. Lisp does have a better structured syntax, but it is hidden under the parenthesis.
KILL THE INFIDEL!!!
Jetbrains’ MPS is using a projectional editor. I am not sure if this is only really used in academia or if it is also used in industry. The mbeddr project is built on top of it. I remember using it and being very frustrated by the learning curve of the projectional editor.
Ummm, I’m old and crotchety, and one of my cats is sick, so take this with pinch of salt.
“SRE, Go Programmer, Mathematician.”
I beg of you, Software Engineers, Computer Scientists or anyone in the field, pleeeease don’t self-identify with one language. It breaks my heart. There are vast tracts of land to explore!
As for the post though, it’s weird: although I enjoy programming in loosely typed or dynamically typed languages occasionally, I’ve generally had just as much, if not more, productivity in statically typed environments. Especially in these days of editors that fully understand the AST and type flow.
So, as far as increasing compile times… I don’t even need to compile very often these days - well not until I can already see that the thing compiles - the tools are that good.
Even without the fancy editors though, seeing really strong concrete types explicitly in the code is a wonderful form of documentation that is really beneficial to others who have to maintain things years down the line. Personally I only begrudgingly accepted auto and var in the last five years, although I do have to concede they make refactoring more convenient and save key presses.
All in all, it seems like he is really torn up and like he’s trying to convince himself or his boss one way or the other. I kind of feel for the guy actually :/
I beg of you, Software Engineers, Computer Scientists or anyone in the field, pleeeease don’t self-identify with one language. It breaks my heart. There are vast tracts of land to explore!
I’m still trying to convince our hiring people that advertising for “Ruby programmer” or “JavaScript programmer” is like advertising for “hammer user” instead of “carpenter”.
So, as far as increasing compile times… I don’t even need to compile very often these days - well not until I can already see that the thing compiles - the tools are that good.
I think this is a fascinating point. Strongly typed languages, especially those that separate side effects, don’t really need to be fully compiled to reason about, at least in byte-sized pieces.
This is an excellent writeup on the benefits you see from having the application language be the application.
I’d also like to add a little to this. For some time, I have been working on a multi-user version of LISP scalable to tens of thousands of concurrent sessions on a single machine.
In this system, given sufficient privilege, any user or agent in the system may inspect, reflect or inject definitions in one, any or all of the other environments, whether they are connected and running or not.
As you can imagine, being able to do this from a simple REPL gives you, the application developer, amazing power, but it certainly comes with huge responsibility. When you push a definition out, it really is live in that moment.
The nice thing is, as this author says, you can play in your sandbox and keep micro-testing until you feel comfortable, then try giving the new definition to one user agent to see how it works before finally committing it to everyone.
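To make that concrete, here is a minimal sketch of what that workflow can look like at the REPL. The push-definition!/commit-definition! helpers and the session id are hypothetical stand-ins, not the system’s actual API.

```clojure
;; Hypothetical stand-ins for the system's real distribution API:
(defn push-definition!   [target v] (println "pushing" v "to" target))
(defn commit-definition! [target v] (println "committing" v "to" target))

(defn greeting [user]
  (str "Welcome back, " (:name user) "!"))

;; 1. Micro-test in the local sandbox until it feels right:
(greeting {:name "Ada"})                        ;=> "Welcome back, Ada!"

;; 2. Push the new definition to a single connected agent and watch it:
(push-definition! :session/test-agent #'greeting)

;; 3. Only then commit it to every environment - live, in that moment:
(commit-definition! :all #'greeting)
```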
That is the beauty of the REPL.
It still scares the pants off me though!
Immutability. (…) This means you’ll tend to program with values, not side-effects. As such, programming languages which make it practical to program with immutable data structures are more REPL-friendly.
Top-level definitions. Working at the REPL consists of (re-)defining data and behaviour globally.
These two points are in furious contradiction, since redefining top-level definitions is pretty much the ultimate side effect. Every other top-level function can see this “action at a distance”.
Let me be perfectly clear: ML and Haskell allow you to program using Lisp’s “every definition is subject to revision” style. Just stuff all your top-level definitions into mutable cells. The reason why it’s not done as frequently as in Lisp-land is because, perhaps, perhaps, this is actually a bad idea.
redefining top-level definitions is pretty much the ultimate side effect.
Funny, I’d consider the alternative, restarting a process, to be “the ultimate side effect”.
The reason why it’s not done as frequently as in Lisp-land is because, […]
Because defaults matter. Wrapping every definition with unsafePerformIO/readMVar would be prohibitive ceremony.
perhaps, perhaps, this is actually a bad idea.
Why? My experience is that it’s a really great idea.
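For what it’s worth, Clojure makes this the default: every defn already lives in a mutable cell (a var), and callers resolve the var at call time, so redefinition needs no extra ceremony. A tiny sketch of my own, not the parent’s:

```clojure
(defn tax-rate [] 0.20)

(defn price-with-tax [p]
  (* p (+ 1 (tax-rate))))      ; looks up #'tax-rate on every call

(price-with-tax 100)           ;=> 120.0

;; Later, at the REPL, without touching price-with-tax:
(defn tax-rate [] 0.25)

(price-with-tax 100)           ;=> 125.0
```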
Funny, I’d consider the alternative, restarting a process, to be “the ultimate side effect”.
How so? There’s no local reasoning about the old process being defeated, precisely because you have discarded the old process.
Wrapping every definition with unsafePerformIO/readMVar would be prohibitive ceremony.
It pales in comparison to the ceremony of re-proving a large chunk of your program correct, because a local syntactic change had global consequences on the program’s meaning.
Why? My experience is that it’s a really great idea.
Because… how do you guarantee anything useful about something that doesn’t have a stable meaning?
There’s no local reasoning about the old process being defeated
Does your program exist in isolation? Useful programs need to deal with stateful services, such as a database. Why shouldn’t your compiler/language/runtime offer tools for dealing with the same set of problems?
It pales in comparison to the ceremony of re-proving a large chunk of your program correct
I don’t understand this point at all. If your “proof” is a type checker, then just run the type checker on the new code…
how do you guarantee anything useful about something that doesn’t have a stable meaning?
You don’t. You stop changing the meaning when you want to make guarantees about it. You don’t need all guarantees at all times, especially during development.
Again, consider stateful services. Do you run tests against the production database? Or do you use a fresh/empty database? If you need something to stand still, you can hold it still.
Useful programs need to deal with stateful services, such as a database.
When they’re running, not when I’m writing them.
Why shouldn’t your compiler/language/runtime offer tools for dealing with the same set of problems?
It isn’t immediately clear to me what kind of problems you’re thinking of, that are best solved by arbitrarily tinkering with the state of a running process.
If your “proof” is a type checker, then just run the type checker on the new code…
It is not. Some things I prefer to prove by hand, since it takes less time.
You don’t need all guarantees at all times, especially during development.
It’s during development that I need those guarantees the most, since, after a program has been deployed, it’s too late to fix anything.
When they’re running, not when I’m writing them.
Your production system is always running. Is it not?
what kind of problems you’re thinking of, that are best solved by arbitrarily tinkering with the state of a running process
Who says it is “arbitrary”? I very thoughtfully decide when to change the state of my running processes. When you change one service out of 100 in a production environment, is that not “rebinding” a definition? Why should programming in the small be so different than programming in the large?
Have you ever worked on a UI with hot-loading? It’s so nice to re-render the view without having to re-navigate to where I was, or to write custom code to snapshot and recover my state between program runs.
What about a game? What if I want to tweak a monster’s AI routine without having to fight through a dozen other monsters to get to the exact situation I want to test? I should be able to change the monster’s behavior at runtime. Great idea to me.
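As a made-up sketch of that monster example (the names and data shapes are mine, not the parent’s): the tick function calls goblin-ai through its var, so re-evaluating goblin-ai at the REPL changes behaviour on the very next tick while the game keeps running.

```clojure
(defn goblin-ai [goblin player]
  (if (< (Math/abs (- (:x goblin) (:x player))) 5)
    (assoc goblin :action :attack)
    (assoc goblin :action :wander)))

(def world (atom {:player {:x 0} :monsters [{:x 3} {:x 40}]}))

(defn tick! []
  (swap! world
         (fn [{:keys [player] :as w}]
           (update w :monsters #(mapv (fn [g] (goblin-ai g player)) %)))))

(tick!)   ; the goblin at x=3 attacks, the one at x=40 wanders

;; Tweak the behaviour live - no restart, no replaying the level:
(defn goblin-ai [goblin _player]
  (assoc goblin :action :flee))

(tick!)   ; both goblins now flee
```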
Proof is useful, but it’s not everything, and it’s not even clear that it’s meaningfully harmed by having added dynamism. Instead of proving X, you can prove X if static(Y).
This thread is a direct illustration of the incommensurability between the systems paradigm and the programming language paradigm, as described in Richard Gabriel’s “The Structure of a Programming Language Revolution” https://www.dreamsongs.com/Files/Incommensurability.pdf
I’d just like to chip in here with one word:
“Facebook”
The Facebook “Process” is a great example of something that never needs restarting: features and services are changed or added to the application while it is running in front of our eyes - no page refresh required, in the Web version at least.
I’d say that from a customer’s perspective this is a really good thing, and continuous end-user improvement is the long tail of continuous integration where nobody need do a reinstall ever again.
I realize that this is largely philosophical at this point, but we are starting to have the tools to make this possible in a more general setting.
For me as a user it’s a good thing I think, since I don’t need to lose context in huge point releases. Oh look, edit mesh just appeared on my toolbar. I wonder what that does?
So I guess I’m with Brandon on this one.
Immutability here refers to data and how state is managed in the application. Since Clojure uses immutable data structures, the majority of functions are pure and don’t rely on outside state. This makes it easy to reload any function without worrying about your app getting into a bad state.
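A small sketch of my own of what that looks like in practice, assuming a typical app layout: the only stateful thing is a single atom, guarded by defonce so reloading the namespace won’t reset it, while everything else is a pure function over immutable values and can be redefined at any time without corrupting the state.

```clojure
(defonce app-state (atom {:todos []}))          ; survives namespace reloads

(defn add-todo [state text]                     ; pure: value in, value out
  (update state :todos conj {:text text :done? false}))

(defn remaining [state]                         ; pure: safe to reload freely
  (count (remove :done? (:todos state))))

(swap! app-state add-todo "write the docs")
(remaining @app-state)                          ;=> 1
```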
It’s a very worthy goal, and my hat is off to you and really anyone who gets into this area of research; personally, I do think it is the future.
I’ve run similar experiments using the AST as the primary editing medium and there is so much to like about it.
We tried it once back at Microsoft on the C++ AST; it was pretty neat… zero-time compiles, at least for debug builds, since we could simply execute the AST itself. Pretty fun.
Charles Simonyi even left Microsoft to work on just this and created Intentional Software. He had a nifty editor that could render in a number of languages, but it certainly wasn’t cloud-based or modern - then again, it goes back a long time.
Another friend of mine is making a project called MakePad. It is an absolutely gorgeous editor that works at the AST level, aimed at teaching kids WebGL. Definitely worth checking out.
Personally I’m doing a lot of research in the area of extensible meta-circular semantic protocols and object models. This work ties in very nicely as well with these kinds of ideas.
I’m very interested and impressed with your project and look forward to seeing your progress.
Very nice work!
Thanks for the encouragement!
We see so much promise in this concept of editing, and the separation of concerns between storage/view/edit. The more time you spend on it, the more you realize that so much of what is baked into a language can just be considered sugar. It would seem we should be rigorous about what is defining logic and what is making the logic easier to read or write.
We welcome the comparison to Simonyi — his work has been suggested to us before. We definitely consider it auspicious to be standing on the shoulders of such giants. I think you are right that since his efforts, cloud infrastructure has really opened up many possibilities on this front.
Where we think we can really add to the picture is through the incorporation of more functional/purity/stateless concepts. Once you start paring down the AST because you have relegated much to sugar, you end up with a structure that is very amenable to analysis. Adding functional purity and referential transparency to this equation enables not just structural analysis but also behavioral analysis. We are excited at the idea that in real-time we could alert a user that he has written a function that is identical to however many more across the world, either through structural analysis or through empirical behavioral analysis. We think this could dramatically increase reuse and reduce redundant work.
The teaching angle is an interesting one. There definitely seems to be a lot of momentum toward visual coding aimed at teaching kids early. We’ve considered the possibility that our project could be applied as an instructional environment, letting students learn the concepts of coding without struggling with the command line and other integration/configuration headaches.
We are definitely looking for users and advisors at this stage, so we’d love to chat further and get more of your thoughts/feedback! We’d also be interested to learn more about your research, which sounds very applicable.
I think this will not succeed for the same reason that RSS feeds have not (or REST). The problem with “just providing the data” is that businesses don’t want to just be data services.
They want to advertise to you, watch what you’re doing, funnel you through their sales paths, etc. This is why banks have never ever (until recently; the UK is slowly developing this) provided open APIs for viewing your bank statement.
This is why businesses LOVE apps and hate web sites, always bothering you to install their app. It’s like being in their office. When I click a link from the reddit app, it opens a temporary view of the link. When I’m done reading, it takes me back to the app. I remain engaged in their experience. On the web your business page is one click away from being forgotten. The desire to couple display mechanism with model is strong.
The UK government is an exception: they don’t gain monetary value from your visits. As a UK citizen and resident, I can say that their web site is a fantastically lucid and refreshing experience. That’s because their goal is, above all, to inform you. They don’t need to “funnel” me to pay my taxes, because I have to do that by law anyway. It’s like reading Wikipedia.
I would love web services to all provide a semantic interface with automatically understandable schemas. (And also terminal applications, for that matter). But I can’t see it happening until a radical new business model is developed.
This has happened in all EU/EEA countries after the Payment Services Directive was updated in 2016 (PSD2). It went into effect in September 2019, as far as I remember. It’s been great to see how this open banking has made it possible for new companies to create apps that can e.g. gather your account details across different banks instead of having to rely on the banks’ own (often terrible) apps.
The problem with PSD2, to my knowledge, is that it forces banks to create an API and open access for Account Information Service Providers and Payment Initiation Service Providers, but not an API for you, the customer. So this seems to be a regulation that opens up your bank account to other companies (if you want), but not to the one person who should get API access. Registration as such a provider costs quite a lot of money (I think five digits of euros), so it’s not really an option to register yourself as a provider.
In Germany, we already seem to have lots of apps for managing multiple bank accounts, because a protocol called HBCI seems to be common for accessing your own account. But now people who use this are afraid that banks could drop that service once they implement PSD2 APIs. Then multi-account banking would only be possible through third-party services - which probably make their money by collecting and selling your data.
Sorry if any of this is wrong. I do not use HBCI myself; that’s just what I’ve heard from other people.
I work on Open Banking APIs for a UK credit card provider.
A large reason I see for the data not being made directly available to the customer is that, if the customer were to accidentally leak or lose their own data, the provider (HSBC, Barclays, etc.) would be liable, not the customer. That means lots of hefty fines.
You’d also likely be touching some PCI data, so you’d need to be cleared and set up to handle that safely (or have some way of filtering it out before you received it).
Also, it requires a fair bit of extra setup, and the use of certificate-based authentication (mTLS plus signed request objects) means that, as things currently stand, you’d need one of those certificates, which aren’t cheap as they’re all EV certs.
It’s a shame, because the customer should get their data. But you may be able to work with intermediaries who provide an interface to that data and can do the hard work for you, e.g. https://www.openwrks.com/
(originally posted at https://www.jvt.me/mf2/2019/12/7o91a/)
Yes, this does seem like a naive view of why the web is what it is. It’s not always about content and data. For a government, this makes sense. They don’t need to track you or view your other browsing habits in order to offer you something else they’re selling. Other entities do not have the incentive to make their data easier to access or more widely available.
That’s a very business-centric view of the web; there’s a lot more to the internet than businesses peddling things to you. As an example, take a look at the ecosystem around ActivityPub. There are millions of users on services like Mastodon, Pleroma, Pixelfed, PeerTube, and so on. All of them rely on being able to share data with one another to create a federation. All these projects directly benefit from exposing the data because the overall community grows, and it’s a cooperative effort as opposed to a competitive one.
It’s a realistic view of the web. Sure, people who are generating things like blogs or tweets may want to share their content without monetizing you, but it’s not going to fundamentally change a business like a bank. What incentive is there for a bank to make their APIs open to you? Or an advertiser? Or a magazine? Or literally any business?
There’s nothing stopping these other avenues (like the peer-based services you are referring to) from trying to be as open as possible, but it doesn’t mean the mainstream businesses are ever going to follow suit.
I think it’s also noteworthy that there is very little interesting content on any of those distributed systems, which is why so many people end up going back to Twitter, Instagram, etc.
My point is that I don’t see business as the primary value of the internet. I think there’s far more value in the internet providing a communication platform for regular people to connect, and that doesn’t need to be commercialized in any way. Businesses are just one niche, and it gets disproportionate focus in my opinion.
Aye, currently there is little motivation for companies to share data outside their silos.
That mindset isn’t really sustainable in the long term, though, as it limits opportunity. Data likes to date, and there are huge opportunities once that becomes possible.
The business models to make that worth pursuing are being worked on at high levels.
Ruben Verborgh, one of the folks behind the Solid initiative, has a pretty good essay that details a world in which storage providers compete to provide storage, and application providers compete on offering different views of data that you already own.
Without getting into Solid any more in this post, I will say that there are a ton of websites run by governments, non-profits, personal blogs, and others where semantically available data would be a huge boon. I was looking through a page of NSF-funded research groups at McMurdo Station the other day, and finding out what each professor researched took several mouse clicks per professor. If this data were available semantically, a simple query would be enough to list the areas of research of every group and every professor.
One can think of a world where brick-and-mortar businesses serve their data semantically on their websites, and aggregators (such as Google Maps, Yelp, and TripAdvisor) can pick that data up and let others use it without writing their own scrapers or asking each business to create its own API. Think about a world where government agencies and bureaucracies publish data and documents in an easy-to-query manner. Yes, the world of web applications is hard to bring the semantic web to, given the existing incentives for keeping data siloed, but there are many sites today that could be tagged semantically yet aren’t.
The web has always been used mostly for fluff since day 1, and WebAssembly is going to make it more bloated, like the old browser-side Java.
The world needs user-centric alternatives once again.