1. 3

    So, please forgive my ignorance but reading all the negative responses here - isn’t the fact that we now have a protocol standard for distributed social media an all around good thing?

    1. 9

      The lack of standards has never been an issue – the lack of deployments, independent implementations, momentum and actual interoperability has always been an issue.

      I remember implementing OStatus back in 2012 or so at Flattr, only to find that no client actually implemented the spec well enough to be interoperable with us, and that people, rather than spending time trying to fix that, instead wanted to convert all the standards from XML to JSON, where some, like Pubsubhubbub/WebSub, took longer to convert than others, leaving the entire emergent ecosystem in limbo. And later ActivityStreams converted yet again, from JSON to JSON-LD, but by then I had moved on to the IndieWeb.

      I find the IndieWeb's approach much more appealing: document patterns, find common solutions, standardize those common solutions as small, focused, composable standards, and reuse existing web technology as far as possible.

      One highlight of that is that one can compose such services so that one's main site can even be a static site (mine is a Jekyll site, for example) while still using interactive components like WebMention and Micropub.

      Another highlight is that a developer can focus their time on building a really good service for one of those standards and use the rest of them from the community. That way I have, for example, provided a hosted WebMention endpoint for users over the last four years without having to keep up with every other spec outside of that space, and I'm now doing the same with a Micropub endpoint.

      Composability and building on existing web technologies also somewhat mitigates the entire "let's convert from XML to JSON" trend: HTML is HTML and will stay HTML, so we can focus on building stuff and gaining momentum and critical mass, not just converting our implementations from one standard to the next while fragmenting the entire ecosystem in the process. That also means that standards can evolve progressively, and that one can approach decentralized social networks as a layer that progressively enhances one's blog/personal site one service at a time. Maybe first WebMention receiving? Then sending? Then perhaps some Micropub, WebSub or some Microformats markup? Your choice; it doesn't all have to happen in a day, it can happen over a year, and that's just okay. That fits well into an open source scene that wants to promote plurality of participants as well as implementations, while also wanting to promote a good work/life balance.

      1. 1

        Unfortunately every time an ActivityPub thread makes it to a news aggregator like this, it always seems like there are some negative comments in the feed from some folks from the indieweb community. It kind of bums me out… part of the goal of the Social Working Group was to try to bridge the historical divide between linked data communities and the indieweb community. While I think we had some success at that within the Social Working Group, clearly divisions remain outside it. Bummer. :(

        1. 1

          Sorry for the negativity – it would help if posts like these presented the larger context, so that people don't interpret them as saying "ActivityPub has won", which, as you say, isn't at all the case, but which this thread has shown it can certainly be read as, and which the title of this submission actually implies.

          This gets even more important with the huge popularity of Mastodon, since that's a name many have heard and which they might take for the entirety of the work of that working group. It isn't, and everyone has a responsibility to portray that adequately.

          So sorry for the negativity, but it’s great that we both feel that it’s important to portray the entirety of the work of that group!

      1. 6

        A lot of DRM arguments have been around slippery slope arguments like this, but I don’t feel like it gives the current context enough weight.

        DRM on the web exists already, people install plugins to watch streaming services. This is giving a way to avoid having to install arbitrary plugins, instead boxing things into a bit of a safer environment.

        Meanwhile, no one seems to be clamoring to hide their CSS stylesheets, mainly because no tech company is under contractual obligations from Hollywood to do so.

        1. 12

          A lot of DRM arguments have been around slippery slope arguments like this, but I don’t feel like it gives the current context enough weight.

          We’ve been sliding down that slippery slope for about two decades now. And it has been getting considerably worse. In the early 2000s we saw people outraged about the first appearances of DRM’ed music and movies. Nowadays people seem to accept it, and are even moving towards accepting even more dramatically absurd forms of DRM from coffee makers to cars.

          And every time someone complains about DRM someone says “well yeah but people are okay with this one step that’s not as bad, what’s so bad about this one step worse”, which is exactly how the slippery slope works.

          1. 4

            My recollection of things is that the DRM situation has gotten better, not worse, over the past 15 years. Not for everything, but for a lot of things.

            It used to be that when you bought music from an online store, you had a DRM mess. Streaming video had to go through weird Windows Media Player DRM, which ended up being a whole virus vector.

            Nowadays I don't ever see DRM'd MP3s, and video tends to work relatively sanely. Lots of games still have anti-piracy stuff, but most companies just opt for some Steam DRM. I remember rootkit DRM.

            Granted, it's not always forward progress. But it has felt that way in my personal situation.

            1. 3

              My recollection is that DRM itself has gotten better, and more pervasive. It’s still there, it just happens to work without getting in your face or breaking your system, so people accept it.

              I’ll reserve judgement on whether that is a good thing.

              1. 3

                It was really cool that we won the DRM'd MP3 battle, but I think businesses don't even care, because now that everyone uses streaming services they can DRM up their music easily.

                1. 1

                  Other than, say, the iTunes store (which sells high-quality recordings without DRM).

                  No major providers though, right?

                  1. 1

                    From what I understand, Google Play Music gives you the option to download music you buy DRM-free a total of 3 times. However, you don’t have any rights to the music which you save for offline play, but don’t buy, in the Play Music app.

                2. 1

                  I agree. Things improved a lot on content availability. Whereas UEFI and app stores were a step back on the technical side.

              2. 3

                A lot of providers don't do that, though. Whereas default DRM in the browser would probably make that number go up, since the worst part is already there.

                1. 1

                  This is giving a way to avoid having to install arbitrary plugins, instead boxing things into a bit of a safer environment.

                  Do you have to install any plugins for Netflix? EME already works without being in the standard, it’s supported by all major browsers.

                  1. 1

                    Just because malware already exists and will continue to exist doesn’t mean we should make things easier for malware authors.

                    The correct response to DRM plugins is exactly the opposite of what you said: browser vendors should constantly break DRM plugins by changing the unofficial APIs those plugins use.

                  1. 4

                    This stuff again.

                    I fully expect with EME that we will see application authors begin to lock down HTML, CSS, Javascript, and every other bit of their web applications with DRM.

                    It’s literally called “Encrypted Media Extensions”. It’s directly tied into HTMLMediaElement, and the whole point is that encrypted video frames get passed through HDCP and decrypted on your display.

                    It WILL be contained to movies. Because it’s IMPOSSIBLE to use for anything else.

                    Look at the prevalence of DRM in proprietary applications elsewhere.

                    What prevalence? Professional applications like, say, AutoCAD still use a simple serial number, and every new release gets cracked on day one. Some games use DRM, but often just drop it after it’s been cracked.

                    Anyway, here’s a real actual threat to the free open web where you can view the source of everything. It’s called… proprietary code on the server side, and it’s been there since forever.

                    1. 13

                      So first off, I was talking more about how the endorsement of DRM for images/video/audio will open the floodgates for DRM'ing of other technology. Whether or not it uses EME isn't the point so much as the W3C's OK of DRM for the web.

                      Second, while EME provides interfaces directly into HTMLMediaElement, the payload delivery mechanism seems like a reasonably generic DRM'ed message bus, and it isn't hard to see how it could be used as a foundation for delivering other DRM'ed content. Am I wrong that interfaces could be exposed to use EME for other things as well?

                      1. 12

                        The strategy is called Fate Accompli, where they break a larger goal into smaller ones that seem individually justifiable. Companies such as Microsoft have used devious techniques like that many times. The Trusted Computing Group was a good example, where they told the masses TPM-like stuff was about security when it was mostly about DRM. So there's plenty of precedent for anything aiding DRM being a stepping stone to much worse things.

                        1. 9

                          It’s fait accompli. <3

                          1. 1

                            Funny thing is I originally wrote that, but thought I'd misremembered the spelling. It seems I did misremember, but only when I "fixed" it. Haha.

                          2. 4

                            TPMs are about security. And some of the most amazing TPM usage comes from Free Software. Check out tpmtotp and its usage in Heads.

                            Modern movie DRM uses HDCP — passing encrypted video frames to your monitor to be decrypted there.

                            1. 5

                              TPMs were a product of the Trusted Computing Group, which involved a number of monopolists and defense contractors pursuing their own goals. The security claim, made for NSA's IAD, was that the device could supplement a security-enhanced endpoint such as General Dynamics TVE or Dell Secure Consolidated Solution by protecting the boot process or any pre-OS software such as disk encryption. It was also pushed by the entertainment industry, which asked Microsoft et al to make it technically impossible for users to view content without authorization. In other words, copyright monopolists would partly dictate what runs on our computers to suck more money out of us. They had already bribed politicians for the DMCA for the legal part. Now they just needed the technical part.

                              Let me illustrate what it was conceived to do to let you decide if it was more about security or companies’ profits (esp DRM & lockin):

                              1. The TPM ensures secure boot of BIOSes made by (two?) companies that kept their products insecure on purpose for extra profit. These companies are an oligopoly with OEM deals that try to shut out competition. Initially, only their products would be signed as "trusted."

                              2. The next, major part is an OS designed by a monopolist that kept its product insecure on purpose for extra profits. This company was battling free software everywhere it could. Initially, only its OS would be signed for x86 systems as the "trusted OS."

                              3. The OS then loads apps from various companies, esp Microsoft, that are deliberately left insecure to keep profits high. If it’s an app for movies or music, peripheral projects will force it to use a “protected media path” to ensure nobody can record it. Proposals of the time also included using virtualization or separation kernels to run media player outside the OS so no user software could touch it at all. Microsoft begins implementing whatever was cheapest/easiest.

                              So, it looks like a board-level, whitelisting solution designed by monopolistic and oligopolistic companies to force users to either use their DRM-laden, expensive software or switch to "insecure boot" modes with no protection at all. Your example of HDCP is one of many forms they had planned, mostly in closed-door discussions that slipped out to the public in various ways. Those slips led to a big backlash plus campaigns against them on the DRM and user-control side. We succeeded in forcing them to back way down from their original goals.

                              The resulting chip barely does anything, since it was designed to be dirt cheap above all else, per what a member of the Steering Committee told me. He said limiting it to a weak form of trusted boot in a special-purpose ASIC was the only way to get Intel and the desktop vendors to go along. Nonetheless, lots of CompSci and FOSS work built interesting stuff on it, with the commercial sector moving first on that. Most of the better teams doing R&D have switched focus to TrustZone now, given how mobile is still laying the groundwork for how it does security. Lots of prestige, maybe profit, to be had if Apple or Samsung picks up a team's solution. TPM-related schemes continue to get investment, though.

                              Far as the projects you bring up, they’re both really cool. I’ve bookmarked them for future evaluation or use. :)

                        2. 0

                          WebAssembly, on the other hand, is a legitimate threat to View Source. I think the OP is paying attention to the wrong W3C working group…

                          1. 7

                            No, it’s not. It doesn’t do anything new. It was always possible to compile native code to JS. (Or manually write “low level” JS that used one TypedArray of integers for all of its memory, LOL.) Wasm is a performance optimization, like asm.js was, but now with an efficient binary representation instead of messy JS code annotated with | 0 (or whatever it was) everywhere. Devtools could show the decoded source tree – that’s better view source than asm.js code.

                            1. 4

                              WebAssembly is just faster asm.js which is just faster compiled JS. That problem has existed long before WebAssembly has.

                              1. 1

                                But asm.js was not the topic of a W3C working group…WebAssembly is. We’re talking about being outraged because the W3C endorses an idea.

                              2. 2

                                WASM is just a way to encode JS into bytecode, in a form that is more handy in terms of encoding, decoding and compilation. It may even translate verbatim into JS.

                            1. 3

                              I did a bit of work on the OStatus stack that Mastodon currently uses. There's definitely room for improvement, but I think it's better to get there through incremental changes to functionality and composing protocols. Having one all-encompassing spec locks you into a single set of use-cases, which hinders growth and adoption long-term.

                              1. 3

                                ActivityPub's main design was, as I think you know, done by Evan Prodromou, who did most of the design of OStatus. ActivityPub was written, with the initial design also by Evan, to try to overcome some of those limitations.

                                Meanwhile Mastodon did try to incrementally improve OStatus by adding extensions, but that upset people as well, because the extensions were deemed incompatible with the rest of the fediverse (privacy isn't easy to add on after the fact in OStatus, for one). Now that Mastodon is moving from OStatus to ActivityPub there are complaints from much of that same group (not saying that encompasses you)… catch-22…

                                BTW, heya Brett! Remember a very naive young programmer helping with a command line frontend in Python briefly to one of your projects at the Goog back in the day with bgoudie and friends for like… a month? That was me. :) I’ve meant to catch up with you for unrelated reasons, mainly because of some exploration of actor model stuff since then… watch the video on: https://www.gnu.org/software/8sync/

                                1. 3

                                  Meanwhile Mastodon did try to incrementally improve OStatus by adding extensions, but that upset people as well, because the extensions were deemed incompatible with the rest of the fediverse (privacy isn't easy to add on after the fact in OStatus, for one). Now that Mastodon is moving from OStatus to ActivityPub there are complaints from much of that same group (not saying that encompasses you)… catch-22…

                                  This is not true. Privacy on the level of AP would have been very easy to add, just by using a different Salmon endpoint for private messages. This was discussed at length back then, but Mastodon still chose to implement the leaky-by-default changes. The complaints about the move to AP are because Mastodon breaks old OStatus functionality while doing it, but that's a whole different topic.

                                  1. 1

                                    Maybe this is true, though I never saw a concrete proposal of how to do it, or implementation efforts showing how it could be done, so it still seems theoretical to me. Do you have a link to where the proposed approach was laid out?

                                  2. 3

                                    Oh additionally, if you want a more minimal system that isn’t as “all in one” as ActivityPub is, Linked Data Notifications uses the same inbox endpoint and basic delivery mechanism that ActivityPub does, with a lot less of the social networking structure.

                                    1. 2

                                      Hey good to hear from you! Do you have a link to the part about “upset people as well because they were deemed as incompatible with the rest of the fediverse”? I’ve been out of the loop for a while but I’d be curious to see that.

                                      1. 1

                                        It's kind of hard to find a good summary, but this blogpost talks about it. Basically, since there was no nice way to add privacy features to the existing distribution mechanisms, Mastodon kind of tacked it on and would advise the next server as to a post's privacy level. This led to complaints that Mastodon was implementing "advisory privacy", since you'd send what was theoretically a private post from a Mastodon server, but everyone on a GNU Social (that's the new name for StatusNet) server would see it. It could be that there was a way to do it in OStatus, but it wasn't really worked out.

                                        One major thing that ActivityPub added is email-style addressing: every post is delivered to an individual's inbox. Of course, as with email, you're trusting the receiving server to actually do the right thing (and thus you could accuse this of being "advisory privacy" as well, but anything that isn't end-to-end encryption can be accused of that). Still, I don't get other people's emails in my inbox, because the addressing is baked into the standard, so it's expected that all servers implement it.

                                    2. 0

                                      Yeah. OStatus is very well done, a nice unity of existing technologies that have been proven to actually work.

                                    1. 3

                                      Here’s the documentation on the new (ice-9 sandbox) module. It includes a pretty great quote:

                                      Sometimes you would like to evaluate code that comes from an untrusted party. The safest way to do this is to buy a new computer, evaluate the code on that computer, then throw the machine away. However if you are unwilling to take this simple approach, Guile does include a limited “sandbox” facility that can allow untrusted code to be evaluated with some confidence.

                                      1. 1

                                        Physical separation as default was in a comment I just posted:

                                        https://lobste.rs/s/8fdigq/computer_security_safe_sex/comments/qbv0mi#c_qbv0mi

                                        Unpopular but safest option. Wise of them to say that, albeit it looks like a joke, too. A modification of that idea that goes way back is to use ROMs for all firmware, with removable storage. Then the most they can do is damage the hardware (a DoS attack). Their changes go away when you reboot the machine. You might have to do a custom job for that these days unless you're fine with embedded boards. Some of them still have ROM in combination with flash that can store a signed image.

                                      1. 12

                                        https://lists.gnu.org/archive/html/emacs-devel/2016-12/msg00387.html

                                        “the byte stack implementation relies on using pointers to freed storage”

                                        Wow.

                                        1. 9

                                          Wait, so they were relying on undefined behavior in the C standard that just happened to work on their target platforms? Geez. This is exactly the sort of stuff one shouldn’t be doing in C.

                                          1. 7

                                            It seems worse than that. It sounds like this byte stack thingy was removed because of this dangling-pointer issue, but then re-added for some reason to get concurrency working.

                                            I don’t know the details so I’ll refrain from judging the matter. But code using pointers like this usually ends up with a CVE number assigned to it. Big red flag.
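                                            For illustration, here's a hypothetical C sketch of the general pattern being criticized (this is not Emacs's actual byte stack code): a stack of heap-allocated frames, where the safe version copies the data out before freeing, while a buggy variant would return a pointer into the freed frame and rely on the storage "happening to work".

                                            ```c
                                            #include <assert.h>
                                            #include <stdlib.h>

                                            /* Hypothetical frame stack, for illustration only. The buggy
                                               variant alluded to above would return &f->data after free(f),
                                               a pointer to freed storage, which is undefined behavior even
                                               when it appears to work. Here we copy the value out first. */
                                            struct frame {
                                                int data;
                                                struct frame *next;
                                            };

                                            static void push(struct frame **top, int data) {
                                                struct frame *f = malloc(sizeof *f);
                                                f->data = data;
                                                f->next = *top;
                                                *top = f;
                                            }

                                            static int pop(struct frame **top) {
                                                struct frame *f = *top;
                                                int data = f->data;   /* copy out before freeing */
                                                *top = f->next;
                                                free(f);              /* f must not be touched after this */
                                                return data;
                                            }

                                            int main(void) {
                                                struct frame *stack = NULL;
                                                push(&stack, 1);
                                                push(&stack, 2);
                                                assert(pop(&stack) == 2);
                                                assert(pop(&stack) == 1);
                                                return 0;
                                            }
                                            ```

                                            The point is that the correct version needs only one extra copy; keeping a pointer into the freed node saves nothing and is exactly the kind of code that ends up with a CVE.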

                                            1. 10

                                              I don’t know the details so I’ll refrain from judging the matter. But code using pointers like this usually ends up with a CVE number assigned to it. Big red flag.

                                              Using dangling pointers is Not Good, without question, but I don't think it's likely much of a security issue in this case, simply because Emacs makes no attempt to sandbox elisp code – any exploit you could write using this pointer could almost certainly be written just as easily in straight Emacs Lisp, which can touch anything on the host it wants with the editor's privileges.

                                              1. 1

                                                Pardon my ignorance, but is code the only thing that’s at risk here? I sift through tons of data in Emacs. Could data be used in some way to create an exploit? A nastily crafted email perchance? Because if that’s the case, that seems like a concern.

                                                1. 2

                                                  No, to exploit this would require running elisp.

                                                  1. 1

                                                    Ah okay, in that case no worries! :)

                                            2. 4

                                              Keep in mind that there are a bunch of perfectly reasonable implementation techniques for interpreters that are undefined behavior when written in C. Things like "I'm going to use the bottom four bits of pointers as a tag. If it's 0, it's actually a 60-bit integer, 1 is a heap pointer, etc." *(val*)((uint8_t*)pointer-1) is, I'm fairly sure, undefined, but no C compiler is going to break it, because it's the job of a C compiler to be practical, not just a strict interpretation of the C standard.

                                              So while in this example it sounds like they’re doing something silly that should be fixed, in general strict C standard conformance is a non-goal of something like Emacs.
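                                              A minimal sketch of that tagging technique in C (a hypothetical illustration, not any real interpreter's code): it assumes malloc returns pointers aligned to at least 4 bytes, which every standard allocator guarantees, and uses a 2-bit tag in the low bits of a uintptr_t.

                                              ```c
                                              #include <assert.h>
                                              #include <stdint.h>
                                              #include <stdlib.h>

                                              /* A tagged value: low 2 bits are the tag, the rest is payload. */
                                              typedef uintptr_t value;

                                              enum { TAG_INT = 0, TAG_PTR = 1, TAG_MASK = 3 };

                                              static value box_int(intptr_t i)   { return ((uintptr_t)i << 2) | TAG_INT; }
                                              static value box_ptr(void *p)      { return (uintptr_t)p | TAG_PTR; }
                                              /* Right-shifting a negative signed value is implementation-defined,
                                                 but an arithmetic shift on every mainstream compiler. */
                                              static intptr_t unbox_int(value v) { return (intptr_t)v >> 2; }
                                              static void *unbox_ptr(value v)    { return (void *)(v & ~(uintptr_t)TAG_MASK); }
                                              static int tag_of(value v)         { return (int)(v & TAG_MASK); }

                                              int main(void) {
                                                  value a = box_int(42);
                                                  assert(tag_of(a) == TAG_INT && unbox_int(a) == 42);

                                                  int *heap = malloc(sizeof *heap);
                                                  *heap = 7;
                                                  value b = box_ptr(heap);
                                                  assert(tag_of(b) == TAG_PTR && *(int *)unbox_ptr(b) == 7);
                                                  free(heap);
                                                  return 0;
                                              }
                                              ```

                                              Casting the tagged integer back through intptr_t rather than dereferencing anything keeps this particular sketch well-defined in practice, which is the distinction being drawn: the technique relies on implementation guarantees, not on the letter of the standard.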

                                              1. 2

                                                Alignment isn't really undefined (though you could probably argue that it's architecture-dependent - I've only fiddled with alignment on x86). If you control how an initial chunk of malloc'ed memory is aligned, you can guarantee alignment throughout a program. A pointer is just a value then - no undefinedness there - it's just pointing to the wrong part of the data if the tag isn't removed.

                                                1. 1

                                                  I’m not an expert on the C standard, but I think the issue is to do with aliasing and misaligned conversions; see e.g. http://stackoverflow.com/a/28895321/499609