I may have to go to court to force a company to give me access back, but it is possible.
See? You live in a proper state with a functioning legal system. There are valid reasons why you sometimes do not want “code is law”. When sued, even giants like Google all of a sudden start listening to you. And that’s why it’s good that the legal system is run by humans working with old-fashioned paper letters.
Re GDPR, Matrix shouldn’t be compared directly to IRC or XMPP, but to email. A Matrix home server is kind of like an IMAP server. Once a message has been sent out to the recipients, they have their own copies.
Some of these notes are directed at Matrix the protocol, some at Synapse the implementation. Many solutions are on the roadmap but not being worked on yet.
Here’s an evaluation of Matrix vs. the GDPR by an actual lawyer (German): https://www.cr-online.de/blog/2022/06/02/ein-fehler-in-der-matrix/ – I was unsure if this is on-topic on lobste.rs, so I refrained from posting it as a story, but it does fit in here. Feel free to submit it as a story if you want. The article specifically addresses the e-mail comparison point.
tl;dr: It’s not compliant.
I still think the comparison is valid in some senses, though — it’s reasonable to want your instant messages to not live forever in the same way that emails do. (Of course, from a legal standpoint, you might have to use the email comparison to get around GDPR, which is a different thing.)
Eh, well, it’s also reasonable to be able to search your history to find that thing from 4 years ago that you suddenly remembered…
Isn’t this something that LuaLaTeX was meant to help with? Being able to write in a more concise language than TeX, i.e. Lua, sounds as if it would make such tasks simpler.
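For the unfamiliar, the escape hatch looks roughly like this (a minimal sketch; the computation is just a stand-in):

\documentclass{article}
\begin{document}
% \directlua drops into Lua; tex.print feeds Lua's output back into TeX.
\directlua{tex.print("Twice pi is " .. 2 * math.pi)}
\end{document}

Run it through lualatex and the Lua result ends up in the typeset document.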
The article makes it appear as if this were an official requirement of the Commission for all EU institutions. This is not the case. Take a look at the actual press release by the responsible EDPS: it’s 1) in a pilot phase and 2) use by EU institutions is entirely voluntary. An unspecified number of institutions is participating in the pilot phase, which, as far as a quick look at the Mastodon instance reveals, does include at least two prominent members, the Commission itself and the CJEU. The EDPS itself seems not to participate.
I appreciate the news, but it looks shortened to me.
You should almost never disable SELinux fully: when disabled, proper labels will not be applied to any changes to the filesystem, resulting in massive pain.
If you’re running into problems, set it to permissive mode instead, which keeps it active but not enforcing the policies.
That’s actually what the article recommends:
You can try to run the magical commands to create a local policy patch, but then you might run into problems again further down the line. The best option in this situation is to, at least temporarily, put SELinux into permissive mode. Permissive mode logs denials but doesn’t enforce them the way the regular enforcing mode does. […] The thing I’ve started to realize over time is that it’s probably best to just leave SELinux in permissive mode. The enforcing mode might prevent malicious actors from taking advantage of a weak point in my system’s security. However, it’s definitely preventing me from getting my work done.
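For reference, the usual way to do that (a minimal sketch; the defaults and paths are from Fedora/RHEL-style systems):

$ sudo setenforce 0   # permissive until the next reboot
$ getenforce
Permissive

To make it persistent across reboots, set SELINUX=permissive in /etc/selinux/config.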
There was this entry a few years ago about using π as a storage device, since its digits supposedly contain all data that could ever exist. Can’t find the link right now.
Here’s a fun idea for a weekend project: a JavaScript library implementing a tag that displays a fixed-size image based on a π offset passed as a parameter. A script like the one in this post could be used to find the image. Perhaps even allowing for a certain error level, for performance reasons.
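To get the ball rolling, here is a toy sketch of the lookup half of that in Python rather than JavaScript (it assumes the mpmath package; the naive three-digits-per-byte encoding is made up here and hopeless for anything but tiny payloads):

from mpmath import mp

def pi_digits(n: int) -> str:
    # First n decimal digits of pi, as the string "314159...".
    mp.dps = n + 2  # a little headroom for rounding
    return str(mp.pi).replace(".", "")[:n]

def find_offset(data: bytes, n_digits: int = 100_000) -> int:
    # Encode each byte as three decimal digits and search for the result.
    needle = "".join(f"{b:03d}" for b in data)
    return pi_digits(n_digits).find(needle)  # -1 if not found

print(find_offset(b"*"))  # single bytes are usually found; longer data is not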
Can’t find the link right now.
Is it https://github.com/philipl/pifs by any chance?
This is pure gold. Don’t forget to look at the issue tracker. There are valuable gems like this one:
Thinking out loud. We may be able to get to GDPR compliance by rounding Pi down to 3. Yes, we’ll lose some data but we’re really down to the wire here.
LOL, this is absolutely what my company did when GDPR hit.
How long till the authorities find child pornography (or Critical Race Theory) in pifs and get π shut down, or possibly censored to an innocuous value like 22/7? That could break everything that depends on π, like for example wheels…
I’ve never known anyone who uses hotkeys for styling in Word. I’ve tried to learn them and they’re too inconsistent. None of these programs are designed with hotkeys in mind.
Cmd-I for italics or Cmd-B for bold should work in every program dealing with text. Please don’t tell me they don’t work in Word (which I’ve rarely used).
That is true if, and only if, your Office is in English.
In Portuguese, I am sure Ctrl+N is the keystroke for making it bold (negrito in Portuguese).
Therefore, @theblacklounge has a point: the keystrokes aren’t consistent in Word, and not even consistent across the same version in different languages.
In Portuguese, I am sure Ctrl+N is the keystroke for making it bold (negrito in Portuguese).
I would be surprised as it would surely conflict with the new document shortcut.
You would think that, but it is solved in a very elegant manner! I can’t speak for Portuguese, but in Norwegian Ctrl+F is not search, it is bold, because the Norwegian word for bold is “fet”. So how do you search? Ctrl+B, of course. That key isn’t used for anything.
It wouldn’t surprise me that Ctrl + F is new file in Portuguese Office…
I think you’re abusing the phrase “if and only if”. Pretty sure last time I used Office in Hebrew, the keyboard shortcuts were all the same.
That’s not an iff with two f’s — I have never used Office in English.
You are correct that Ctrl+N makes bold text in Microsoft Office on Windows in Portuguese, but that is obviously something you managed to learn, with a simple mnemonic. Compared to learning vim, it seems fairly manageable.
That is true if, and only if, your Office is in English.
Yes. I need to use Word at work. In German Word, it’s CTRL+F for making text bold (as the German word for “bold” is “fett”). Since at home I use LibreOffice for the office documents sent to me (and it uses the familiar CTRL+B regardless of locale), I am eternally suffering from Word’s special way of handling this.
You’re not supposed to just bold text anyway, so that’s a minor annoyance. They’ve put so much work into the semantic styles, but only the first three styles have their own hotkeys, and my European keyboard layout breaks one of them.
If your output is HTML+CSS, you can manually add styling information to Markdown.
To me, the ability to seamlessly “escape” into HTML is a big draw of Markdown. I despise its table syntax so I generally just use HTML tables in the rare cases I need them.
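For instance, mixing the two looks like this (contents invented for illustration); the prose stays Markdown and the table is a plain HTML island:

Some *emphasised* Markdown prose, then:

<table>
  <tr><th>Name</th><th>Role</th></tr>
  <tr><td>Ada</td><td>Engineer</td></tr>
</table>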
The HTML “escape” makes Markdown internally inconsistent (as if it wasn’t already due to lack of standardisation) and one of the most complex formats invented, right next to MS Word.
There are two kinds of text markups, just like programming languages: the ones everyone complains about, and the ones no one uses.
Very well. But in case anyone’s curious, here’s the text-markup-no-one-uses that I’ve got my eyes on: DocBook.
(Content warning: good stuff, but also written by Eric Raymond, whom you may find disagreeable. Shoot the proverbial messenger, though, not the message.)
DocBook looks like XML to me. If I were to accept XML, I’d rather use a tool to generate conformant XHTML and use CSS to get different outputs.
The point of MD (and similar, like reStructured Text, Textile, Asciidoc etc) is to avoid the tagsoup of {SG,H,X}ML when actually writing content.
I actually think (La)TeX is the sweet spot here. It has markup, sure, but it’s much less intrusive when writing than tag-based markup, i.e. you write \emph{text} instead of <em>text</em>.
According to Wikipedia, DocBook is still being supported by the OASIS group.
The point of MD (and similar, like reStructured Text, Textile, Asciidoc etc) is to avoid the tagsoup of {SG,H,X}ML when actually writing content.
Asciidoc is semantically equivalent to DocBook. It is a plaintext representation of DocBook without the tag soup. With Asciidoc driving a DocBook toolchain, you can have all of the output formats supported by DocBook for free. As someone who prefers reading documentation in info mode under emacs, I really appreciate that.
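In practice that can be as simple as the following (a sketch; the file names are placeholders, and other DocBook consumers work just as well as pandoc):

$ asciidoctor -b docbook5 -o manual.xml manual.adoc
$ pandoc -f docbook -t html5 -o manual.html manual.xml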
I’ve advocated for it in a situation where markdown wasn’t capable enough, and I’d do so again.
LaTeX is a nice nesting markup syntax, for sure. However, at its core, it’s still an imperative language—with pervasive use of global variables, to boot.
True, but I’ve always had an issue with the fact that Markdown embeds HTML in a way that doesn’t nest. Once you’re inside a single HTML element, no more nice link and emphasis syntax for you. On top of that, this can also date an individual document to the trends in play when it was written, making it a poor document format in the long term.
This connection also limits Markdown to a format that must be converted to HTML before anything else. But I guess that matches its most common use case, even if it also makes it significantly less than ideal for a document authoring format.
Making a title is easier than bolding in MD. And you can’t fake a title with mere styling, because you can’t set the font size etc.
I just fired up Word (365, Windows 10).
I could select a word, navigate to the “subtle emphasis” section, and apply it, all using the keyboard. When you hit Alt, a bunch of letters appear in the ribbon that let you access the different parts of the ribbon and select them.
Is it as easy as “Cmd-I”? No, but it’s more discoverable than Emacs…
Word came up with semantic styles way before the semantic web was established. It’s just a very good invention. Bolding a word to highlight it is fine, but way too often people use it for titles, and then it’s not just bolding but bigger size and different font too. This is not uncommon in offices. Decades of human labor have been lost to “oh I want all h2 headers one size smaller”, and mistakes are easy so it ends up ugly.
Semantic headings are also important for accessibility. Using screen reader commands to jump to elements matching particular criteria, such as headings or even headings at a specific level, is the closest that a blind person can get to skimming.
This is defeatist thinking about screen reader technology.
A sighted person does not have semantic headings to help them skim. They just have text size and weight.
Today, we have to provide screen readers with semantic hinting in order to enable skimming (and this helps the content creator as well, through styling etc., so I’m not really arguing against it). But there’s no reason a BETTER screen reader can’t deduce these semantics from rendered font sizes.
You can’t hope to fix a problem by expecting a billion content creators to do the right thing. We have to fix it by enabling tools to do the right thing with the wrong input.
I get your point. In fact, my company provides a tool that tries to automatically make documents more accessible, using the best available heuristics for things like headings and tables, as well as OCR. For the many existing documents out there, especially PDF documents, this is the best we can do. And of course, there will continue to be new inaccessible documents as well. But heuristics are fundamentally unreliable, so we should encourage document authors to directly convey document structure, and make it as easy as possible for them to do so. We should take both approaches at once.
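As a toy illustration of the kind of heuristic I mean (the 1.2x threshold and the data are invented; real tools combine many more signals, such as weight, spacing and position):

from collections import Counter

def guess_headings(runs):
    # runs: list of (text, font_size_pt) in document order.
    # Take the most common font size as the body size and call anything
    # noticeably larger a heading.
    body_size = Counter(size for _, size in runs).most_common(1)[0][0]
    return [text for text, size in runs if size >= 1.2 * body_size]

doc = [("Annual Report", 24), ("Introduction", 16), ("We begin with...", 11),
       ("Some more body text...", 11), ("Results", 16), ("Sales grew...", 11)]
print(guess_headings(doc))  # ['Annual Report', 'Introduction', 'Results']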
As mentioned in my infrastructure blog post, I have multiple networks (VLAN) at home. Because I didn’t want to do some unholy things, I needed to have a /64 per network, meaning multiple /64s for my home.
I don’t understand this part. Why can’t he split the network into multiple segments? What use case does anyone have for multiple /64 in their home? That’s 18446744073709551616 addresses per subnet.
But that’s… a giant amount of addresses. Why is this? Not allowing smaller sizes looks as if we’re repeating the IPv4 mistakes?
Because that’s the only functioning way we’ve been able to come up with for devices to automatically configure themselves with a predictable, persistent address without any conflicts: SLAAC puts the interface identifier in the low 64 bits of the address, so each network needs a full /64.
The issue is that people seem to have a hard time comprehending just how big of a number 2^128 is. With that address space we could for example assign 2^32 /64’s to each IPv4 address (of which there are 2^32). We can give the entire IPv4 address space to each IPv4 address.
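Spelled out, since the numbers are hard to grasp:

total_slash64 = 2 ** (128 - 64)      # number of /64 networks in IPv6
per_ipv4 = total_slash64 // 2 ** 32  # shared out over all IPv4 addresses

print(per_ipv4 == 2 ** 32)  # True: a full IPv4-space worth of /64s each
print(2 ** 64)              # 18446744073709551616 addresses in one /64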
Additionally, RIPE strongly discourages assigning prefixes longer than /56, and in general recommends assigning end-customers a /48 or /56, and that assigning a /48 to all customers is the most practical address plan.
Thanks for the explanation. Indeed these numbers are just too large to properly imagine them…
Additionally, RIPE strongly discourages assigning prefixes longer than /56, and in general recommends assigning end-customers a /48 or /56, and that assigning a /48 to all customers is the most practical address plan.
Well, at least in Germany consumer ISPs seem to hand out a /64 by default. I suppose one can ask to get a /48 or /56, though.
Vodafone (previously Unitymedia) gives a /56 by default for IPv6-only cable, so it’s not uncommon.
Well, at least in Germany consumer ISPs seem to hand out a /64 by default. I suppose one can ask to get a /48 or /56, though.
When have consumer ISPs ever been known to follow guidelines. ;)
Are you sure it’s the ISP specifically only giving a /64 and not the DHCP-PD client only taking a /64 out of the available /56?
Just checked it: from Telekom I get a /56 without any interaction. As far as I know, as a consumer you can ask for a /48, and as a commercial customer you simply get a /48. A few years ago there was news about Telekom asking for a bigger prefix than an ISP gets by default, because they wanted to follow the RIPE guidelines; as the biggest ISP in Germany they could argue for this. As far as I know, most ISPs in Germany do something similar.
I’m not sure how it is handled for mobile access. As far as I know you get SLAAC in a provider-managed /64 by default and can request prefixes via DHCPv6. I can’t check this, because I don’t have mobile Internet.
Thanks for the comment, I guess I should update my post to be more precise about what the problem is. As kyrias explained, it’s not about the number of addresses, but about being able to use SLAAC.
The X clipboard is a hairy beast. First, note that there is not a single clipboard. X traditionally has two selections, PRIMARY and SECONDARY. The first is accessed by selecting something and pressing the middle mouse button to paste the selected text while it’s still selected. It’s quite a nice feature, but widely unknown; I have come to like it quite a bit. I never found a use for SECONDARY.

The CTRL+C/CTRL+V clipboard is not formally included in X11, but is the de-facto third selection, fittingly named CLIPBOARD in X atom terms. For notifications on clipboard access, you will need to monitor all three of them.

PRIMARY and SECONDARY are managed by X.org itself. For CLIPBOARD it is typical to employ a helper programme called a “clipboard manager”, but it is by no means necessary. If you don’t, however, closing the source application will make the clipboard content vanish, because there is no clipboard owner anymore in X’s terms.
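To make the mechanics concrete, here is a rough sketch of how a client asks for the current PRIMARY selection, using python-xlib instead of C (untested by me, error handling omitted):

from Xlib import X, display

d = display.Display()
# An invisible helper window that will receive the converted selection.
win = d.screen().root.create_window(0, 0, 1, 1, 0, X.CopyFromParent)

sel = d.intern_atom("PRIMARY")         # or SECONDARY / CLIPBOARD
target = d.intern_atom("UTF8_STRING")  # ask the owner for UTF-8 text
prop = d.intern_atom("MY_BUFFER")      # an arbitrary property of ours

# Ask the current selection owner to write its contents into our property,
# then wait for the SelectionNotify event that signals completion.
win.convert_selection(sel, target, prop, X.CurrentTime)
while True:
    ev = d.next_event()
    if ev.type == X.SelectionNotify:
        break

reply = win.get_full_property(prop, X.AnyPropertyType)
print(reply.value.decode() if reply else "<nothing to paste>")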
I had my fair share of fun with X’s clipboard when I wrote tinyclipboard many years ago, a minimal C library for interacting with the X and Win32 clipboards that abstracts over the platform differences (I don’t work on this anymore, use at your own risk). The source code might serve as inspiration for you, but below are the most useful resources I used when writing it (the first one is particularly insightful):
Take a look at tinyclipboard’s source code if you are interested in how things work on Win32 or how I implemented the above for X11.
I am certainly not qualified to judge whether this is indeed a case of plagiarism, but if someone like Mr. Bernstein finds it to be, I as a non-mathematician will accept that judgement. However, one thing strikes me as flawed reasoning. The post wants to make the point that an undiscovered patent was the reason for Google to stop the post-quantum cryptography experiment once they were made aware of it, and even goes as far as to call the patent a “land mine”.

Google is not a random someone, but one of the world’s largest companies. Why would they not try to do what a patent is actually meant to achieve, namely simply license it? Licensing a patent is certainly not beyond Google’s financial resources. The post makes it sound as if there is no alternative to getting sued for infringing the patent, while it is certainly possible to negotiate and license it. Nobody forces you to work around it. I can’t comment on US patent law, but at least in European patent law you can even force the patent holder to license the patent under “fair conditions” if he is on a personal crusade against you and does not want to license it.
You don’t want to create a standard that’s based on a patent, because a standard would require an ocean of people (besides Google) to license the patent. This is what hurt GIF back in the day, and why standards committees now instinctively avoid anything that smells like it might be patented.
Every once in a while, I take another look at D for my personal projects. It always comes down to three problems, though:
- I deal with quite a bit of XML. The state of std.xml has not changed in years: deprecated for poor quality, but with no alternative in the standard library.
- I want to use a library for something, and it turns out to be a C++ library. Using C++ libraries is not really doable from within D.
- No compatibility promise like the one Go has, or like C++ provides with the option to force a given version of the ISO standard. My personal projects tend to grow slowly, and I do not want to risk that they suddenly fall victim to language incompatibilities.
So, I continue using C++.
For XML, my recommendation is to use arsd.dom: https://p0nce.github.io/d-idioms/#DIID-#1---Parse-XML-file
It bears repeating that the EU is working on legislation to force services to inter-operate. I am quite eager to see what will be the result of these efforts.
An often forgotten fact is that when AOL and Time Warner merged back in 2000, one condition was that they would have to open up AIM if they ever expanded into video chat. Of course, the Bush administration decided to drop that requirement, and so we still have a fragmented chat market to this day. Thanks, Bush administration!
Discussion on the org-mode mailing list: https://list.orgmode.org/2021-11-28T20-44-37@devnull.Karl-Voit.at/
Is there a reliable way to bridge Matrix and XMPP so that a Matrix user can talk to an XMPP user and vice versa? I can see there would be downsides, in that only the common subset of both protocols would be supported, but it would still be nice.
There’s this: https://github.com/matrix-org/matrix-bifrost
I got it running, but never got it to connect correctly to my Matrix server and kind of gave up. I’m not sure if it works.
It has bugs, and for obvious reasons New Vector isn’t putting much into fixing them, but for some use cases it works. The instance on aria-net is the most stable and has many bug fixes not in upstream.
No reason you need to run anything Matrix-related. Just connect to addresses that go via the aria-net or matrix.org instance. Running it yourself gains you nothing, since you’ll just be using it to send messages to matrix.org users anyway.
You mean, as in “it just works”? Looking it up in DNS I am entirely surprised indeed:
$ dig _xmpp-server._tcp.matrix.org SRV
; <<>> DiG 9.16.15-Debian <<>> _xmpp-server._tcp.matrix.org SRV
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6254
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;_xmpp-server._tcp.matrix.org. IN SRV
;; ANSWER SECTION:
_xmpp-server._tcp.matrix.org. 300 IN SRV 0 5 5269 lethe.matrix.org.
;; Query time: 48 msec
;; SERVER: 192.168.1.1#53(192.168.1.1)
;; WHEN: Fr Nov 19 17:55:31 CET 2021
;; MSG SIZE rcvd: 93
I have not tested whether there is actually something listening on port 5269 on lethe.matrix.org, but if there is, it indeed just ought to work. That’s… interesting. So I can assume matrix.org Matrix users will just be reachable via XMPP without further setup? That’s kind of cool. I have no Matrix users in my XMPP roster yet, so I was unaware of this and just assumed the two universes are entirely incompatible with one another. Thanks for rectifying this. I will remember it the next time I talk to someone about XMPP vs. Matrix.
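By the way, a plain TCP connect is enough to check that part (a minimal sketch):

import socket

# Is anything accepting connections on the XMPP server-to-server port
# advertised by the SRV record above?
try:
    with socket.create_connection(("lethe.matrix.org", 5269), timeout=5):
        print("something is listening on 5269; federation looks possible")
except OSError as exc:
    print(f"nothing reachable: {exc}")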
https://github.com/matrix-org/matrix-bifrost/wiki/Address-syntax for the syntax
You are likely to have a better experience with this one https://aria-net.org/SitePages/Portal/Bridges.aspx though
There’s a much longer sordid history here. Back in the early 2000s instant messaging was a big thing, the social media of its time, and there were several competing networks. AOL Instant Messenger was big, if not the biggest, and through tricks like this it was locking others out, creating a monopoly.
When AOL merged with Time Warner (lol) one of the requirements from the anti-trust judge was that AOL agree to allow interoperation between its IM product and others, notably MSN. This made perfect sense to us in the industry; it’s how email worked, for example. It was a condition of allowing the AOL/TW deal to go through.
Did AOL ever fulfill that legal obligation? No. They dragged their feet and kept inventing legal excuses, and thanks to the Bush Administration they were never really held to implementing the agreement. In the meantime IM sort of spiraled into oblivion as a product category (thanks in part to the lack of interoperability) and AOL/TW went to shit for many other reasons.
Unfortunately this debacle set a big precedent that tech companies could simply ignore US government requirements meant to protect competitive markets.
When AOL merged with Time Warner (lol) one of the requirements from the anti-trust judge was that AOL agree to allow interoperation between its IM product and others, notably MSN. This made perfect sense to us in the industry; it’s how email worked, for example. It was a condition of allowing the AOL/TW deal to go through.
Fascinating. It seems like an early predecessor of the interoperability debate currently going on in the EU. Do you have a source where I can continue reading about this?
I’m sure a lot was written about it, particularly during the merger, but I don’t have it at hand. Here’s one article though: https://www.nytimes.com/2001/01/12/business/fcc-approves-aol-time-warner-deal-with-conditions.html
I remember the situation when _why suddenly disappeared. It’s not mentioned in the article, but his strictly kept pseudonymity ignited an effort to uncover his real name. Eventually, the effort succeeded, or at least claimed to. _why’s reaction was the “infocide” the article speaks about, in a way an act of art in its own right. Would _why still be around if those efforts had led to nothing? It seems difficult to tell. With his abrupt exit from the programming community, however, he has ensured he will long be remembered. And he deserves to be remembered, as he provided an entirely different view on what programming is.
In this context, I once saw someone write about _why that he was “not a programmer, but an artist whose medium was code”. Does somebody know the source for this quote?
I wondered if this was Peter Cooper, as I recall he posted some nice summary in Ruby Inside when _why vanished. Seems not.
I can’t find an original source for it, but https://priceonomics.com/why-the-lucky-stiff/ attributes it to Steve Klabnik:
“_why was not a ‘programmer’, he was ‘an artist whose medium was code’.”
§ 2 German Copyright Act (UrhG): Protected works.
(1) Protected works in the literary, scientific and artistic domain include, in particular:
- Literary works, such as written works, speeches and computer programs; […]
Bold emphasis mine.
I’ve used mySMTP, mostly because it appears to be the only genuinely European SMTP relay operator (they are based in Denmark). However, I found they were listed on some blacklist, which caused my e-mails to be rejected on rare occasions (one of which was rather important and was the reason why I stopped using it).
This is one of those occurrences where a technical solution is sought for a non-technical problem. I think Mozilla should rather complain to the EU Commission, especially given that Microsoft has already had its fair share of trouble with the Commission over browser choice. Otherwise Microsoft will just change the mechanisms involved and Mozilla will have to reverse engineer them all over again.
Wouldn’t be surprised if Mozilla also did this. Having this workaround in place (and then disarmed by Microsoft) helps build the case.
It reminds me of Epic’s case with Apple. Mozilla may be doing this to force Microsoft’s hand into a scenario they can more easily challenge legally.
These days, I am unsure what Matrix is heading for. This post explains that they want to have VoIP video conferencing and decentralised virtual reality. Then I open the lobste.rs comments and the first thing I see is a comparison to IRC.
It seems as if Matrix’s mission statement today goes far beyond the goal of opening up walled text-messaging gardens. From this post it looks as if they want to make Matrix a decentralised platform for everything. The post talks explicitly about the success of the open web and how Matrix strives to copy it, and that makes me think: don’t we already have the open web? It’s built on a protocol called HTTP. Does this mean Matrix wants to replace HTTP?
If Matrix is indeed inferior even to IRC (I cannot judge as I do not use Matrix) in the domain IRC occupies (text messaging), such a wide approach seems doomed.
We’ve always tried to be clear that Matrix is a general purpose comms protocol for realtime data - not just chat. For instance, right from the original launch in Sept 2014 we had VoIP signalling in there too, and did a very basic demo of 3D over Matrix on day 1 too: https://techcrunch.com/video/animatrix-presents-disrupt-sf-2014-hackathon/
Obviously we’re not trying to replace HTTP. Matrix is an API layered on top of HTTP (or other transports) to provide a communication layer for realtime data. If anything it competes with ActivityStreams as a way to link streams of activity over the open web - except with a completely different architecture. The reason for invoking the open web is that we simply want to be the comms layer for the open web: a global realtime virtual space where folks can chat, talk, interact, and publish/subscribe to realtime data of any kind.
W3C simply doesn’t provide an API for that, yet - and if they did, hopefully it might be Matrix.
The open web is not a federated, eventually consistent database; that is what Matrix provides. See https://matrix.org/ for more info. This post is an update for people already following the blog, so it doesn’t cover the introduction.
Text chat is the first application, but Matrix can be used for much more.