1. 2

    It’s presented as “new stuff on PDF”, but reading it again gives me a different impression: it feels like “new features in our PDF reader”, not necessarily any changes to the underlying PDF format. It might also involve some tweaks to the latter, but the text is ambiguous on that point.

      1. 6

        Ah, the fun question :) I have several, but here are my notes on the Todo/Email/Calendar of my dreams:

        “Thought experiment 1: What if I dumped all my inbox into my current todo software (Todoist)? This would suck, because Todoist is a crappy email client. Thought experiment 2: What if I dumped all my todos into Inbox? This would also kinda suck, because it’s missing a few features that a good todo app should have…

        That second one told me what I need: Inbox has “do not show me this thing until a date”, i.e. the first date something should be on your radar, and Todoist has “item is due on this date”. For a good todo app I need both.

        Imagine a tool whose goal is “what am I doing in the future?”, with each item having a block of text associated with it, and a model that lets you send items to other people. Emails become new items in your “today” list, with a start date of right now and an unspecified end date. You can do the Inbox-style “put off this item until its actually usable date”. Most traditional todo items are either “do this by this date” or “do this at some point”. The former have an end date but no start date, and the latter have neither date, and so need some other interface for deciding exactly where they get put.

        Other notes:

        • You possibly have some local rules for tagging, prioritisation and other info added to incoming email.
        • All calendar items can be a todo with a defined start/end date
        • Being able to relate todo items in a DAG way (many separate graphs allowed, but no cycles within a single todo item graph) to allow for all forms of project sequencing is also awesome. “
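
        A rough sketch of that item model, in Python for concreteness (all names here are mine and purely illustrative, not from any existing tool):

            from __future__ import annotations

            from dataclasses import dataclass, field
            from datetime import date

            @dataclass
            class Item:
                """One “thing I’m doing in the future”."""
                title: str
                body: str = ""             # the block of text attached to the item
                start: date | None = None  # first date this should be on my radar (Inbox-style snooze)
                due: date | None = None    # hard end date (Todoist-style); None = “at some point”
                blocked_by: list[Item] = field(default_factory=list)  # DAG edges; no cycles allowed

            def visible_today(item: Item, today: date) -> bool:
                # Hide anything snoozed into the future and anything still blocked by another item.
                return (item.start is None or item.start <= today) and not item.blocked_by
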
        1. 2

          Try Emacs! org-mode can manage a todo list, including showing all of your future tasks/events in a calendar. It also has some great searching and tagging features. You can set up a mail client like mu4e to file incoming mails into an org file. I’m not sure about sending items to other people, but org can export to HTML or plaintext, so I’m sure you could set something up.

          The only downside is that it’s Emacs, and Emacs has a learning spiral instead of a learning curve.

        1. 4

          So, I’ve been using VSCode for the last, uh, couple of years probably. I’ve heard of both of those extensions, but I would in no way describe them as the best parts. The large collection of other extensions and the sheer quality of the editor (I originally thought of it as “like Atom, except it doesn’t crap itself every couple of days”) are much more what keeps me using it.

          The [expletive] CLA, however, is much more concerning from my PoV and would stop me contributing anything to it, but I’m OK with it being slightly proprietary (I’m typing this on a Mac, so I’m not exactly at the ideological-purity end of things).

          1. 1

            Is it the use of a CLA in general or Microsoft’s CLA in particular that you find concerning?

            1. 2

              In my opinion, as far as CLAs go, Microsoft’s is fine (I even signed it). But I am categorically against CLAs, because they are asymmetric, i.e. not inbound=outbound: contributions aren’t accepted under the same terms the project is licensed out under.

          1. 2

            And that’s why I’ve been using GMail as my email client for many years. Not because I’m overly fond of anything else about their mail infrastructure (Fastmail handles the actual “host my domain’s email” problem), but because it’s a better email client. I’ve yet to see anything (paid or free) that beats it, which is annoying, as I’d like to move to something self-hosted, but not at the cost of usability.

            1. 8

              So many problems here. The really big one though: data and schema integrity. As an app developer, I can reasonably assume that I’m the only one messing with my database. The schema will be the one I added, the data will conform to the implicit assumptions of my app because my app is the one that wrote the data.

              This proposal trashes all that. The database goes from trusted store to “under user control and should be assumed malicious”. I’d have to keep checking that the schema was still the schema, and put lots of checks on data coming back from it. Or I could just have my own database and not do any of those things.
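
              Concretely, every app would end up carrying defensive code along these lines (a sketch against SQLite; the expected hash is a placeholder you would record at release time):

                  import hashlib
                  import sqlite3

                  EXPECTED_SCHEMA_HASH = "<hash recorded when the app shipped>"  # placeholder value

                  def schema_untampered(conn: sqlite3.Connection) -> bool:
                      # Hash the full schema DDL; any externally added column, index or
                      # trigger changes the hash and flags the DB as "not what I wrote".
                      ddl = "\n".join(sql for (sql,) in conn.execute(
                          "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL ORDER BY sql"))
                      return hashlib.sha256(ddl.encode()).hexdigest() == EXPECTED_SCHEMA_HASH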

              1. 3

                An easy way to do this would be to let the user perform read-only actions against your database. This is already doable for most mobile apps that use SQLite: you take the db file and query it with whatever tool you want.
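
                For example, with Python’s built-in sqlite3 module (the path and table here are made up):

                    import sqlite3

                    # mode=ro opens the copied-out database read-only, so nothing here can corrupt it.
                    conn = sqlite3.connect("file:app-backup/app.db?mode=ro", uri=True)

                    # Query it with whatever SQL you like, independently of the app's own UI.
                    for name, created in conn.execute(
                            "SELECT name, created_at FROM notes ORDER BY created_at DESC LIMIT 5"):
                        print(name, created)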

                Admittedly, the OP wants both read and write privileges. If you think about it, this is no different from how the file system works today. Your program has no control over other programs running on your computer, and can’t prevent others from poking and screwing around with your files.

                I think shared databases could work, with the caveat that there would need to be some sandboxing mechanisms. For example, other apps can’t write to the tables you’re writing to, only to isolated snapshots.

                If you squint, this could be no different than, say, a git repo, where multiple users can modify the same source by creating branches in the document tree. We already use this model for the source code of a program, so it’s not much of a leap to consider that we could also use this for the application data.

                1. 3

                  If I (the app developer) am the admin for the DB and can make sure the user only has read-only access, sure. This, however, is much less useful, as the user loses the benefits of centralisation (e.g. a single system to have to talk to). Also, I’ve then got to manage database connections as a fully secured service and be very careful about what users do and do not have access to at that level, as opposed to the more usual “DB access is wide open to anyone authorised and privileges are managed at the app level” option.

                  Alternatively, if we have MyData services managing the “user is read-only” thing, I then have to make sure I trust all of them, including the ones where the user themselves is the admin of the service, and then we’re back to square one.

                  1. 2

                    Access control in a DB is still easier than building out and supporting a custom REST API for 3rd parties to use. Plus, with a bit of work, an extension could be added to a Postgres or MySQL or whatever DB to allow untrusted connections that are sandboxed and granted minimal privilege, minimizing the chance of misconfiguration. And it’s not like this is a totally unheard-of idea – see, e.g., datasette, which exposes read-only query access to a SQLite database and prevents DoSing by limiting the amount of time a query can run.
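
                    Roughly the same trick datasette uses can be sketched in a few lines of Python (this is not datasette’s actual code): open the database read-only and abort any statement that blows its time budget via SQLite’s progress handler.

                        import sqlite3
                        import time

                        def run_untrusted_query(db_path: str, sql: str, limit_seconds: float = 1.0):
                            # Read-only open: even a hostile statement can't modify the database.
                            conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
                            deadline = time.monotonic() + limit_seconds
                            # The handler fires every N SQLite VM instructions; returning a truthy
                            # value aborts the running statement with an "interrupted" error.
                            conn.set_progress_handler(lambda: time.monotonic() > deadline, 10_000)
                            try:
                                return conn.execute(sql).fetchall()
                            finally:
                                conn.close()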

                2. 2

                  Yes, this sounds like way more work for every single app developer than it would be to add import/export features or even a sync API to your app. A better solution (IMO) would be, instead of hosting the data, to host a hub for syncing data and schemas, and provide a metric fuckton of client libraries. But you’re still left with the problem of convincing app devs to add support for your thing, and on top of that convincing somebody to pay for it.

                  1. 1

                    I don’t buy this. First off, read only access on its own would be massively helpful. If apps did that, we wouldn’t have to deal with broken, crappy export tasks and could query the data however we want (without having to pull it down through the thin, brittle straw of a REST API). Massively helpful, with minimal support required by the original developers. I’m a huge fan of this – unfortunately it’ll never happen because most apps have a vested interest in user lock-in.

                    It’s also possible to allow writes by putting the model code and integrity constraints in the database itself. If the model code were written in stored procedures, raw inserts/updates were disabled, and more stringent query time/access constraints were put in place than what is normally used for an app user, this could work. 99% of apps work off of one database anyway, so it’s not like it would put significantly more load on it. If there’s concern about bad actors DOSing the system with bad queries or data writes to fill up the disk, just sandbox each query, limiting the amount of time it can run or disk it can use before killing it.
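
                    A sketch of that write path against Postgres (the role, table and function names are all made up; psycopg2 is just a convenient way to run the DDL):

                        import psycopg2

                        SETUP_SQL = """
                        -- The external role gets no raw DML on the table, only SELECT...
                        REVOKE ALL ON notes FROM external_user;
                        GRANT SELECT ON notes TO external_user;

                        -- ...and EXECUTE on a function that enforces the app's invariants.
                        CREATE OR REPLACE FUNCTION add_note(p_body text) RETURNS void
                        LANGUAGE plpgsql SECURITY DEFINER AS $$
                        BEGIN
                            IF p_body IS NULL OR length(p_body) NOT BETWEEN 1 AND 10000 THEN
                                RAISE EXCEPTION 'invalid note body';
                            END IF;
                            INSERT INTO notes (body, created_at) VALUES (p_body, now());
                        END;
                        $$;
                        GRANT EXECUTE ON FUNCTION add_note(text) TO external_user;

                        -- Keep runaway queries from that role in check.
                        ALTER ROLE external_user SET statement_timeout = '1s';
                        """

                        with psycopg2.connect("dbname=app") as conn:
                            conn.cursor().execute(SETUP_SQL)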

                    Technically, this approach makes a lot of sense, but once again, the reason it’ll never happen is because nobody will give up the keys to the kingdom. There’s no technical magic that makes Twitter/Facebook/YouTube/pick your social media platform worth sticking with – they just own your data. The business model is your data, so they’ll never let you have direct access to it.

                    1. 3

                      “If there’s concern about bad actors DOSing the system with bad queries or data writes to fill up the disk, just sandbox each query, limiting the amount of time it can run or disk it can use before killing it.”

                      Hmm. I could learn enough C to patch Postgres and hope my changes don’t break its data integrity guarantees (and set up database-level views/permissions so that users can’t read each other’s private data), or I could focus on building something useful enough that someone feels motivated to help pay my bills.

                      Hell, I don’t even trust my datastores being publicly routable, much less accessible - for all I know the connection handshake is a vector for abuse.

                      “the reason it’ll never happen is because nobody will give up the keys to the kingdom”

                      People build using the technologies and techniques they already know. As an industry we can’t even adopt exotic correctness techniques like “Get enough sleep and limit working hours”. Inertia is enough to explain this.

                  1. 1

                    Except the example literally didn’t work (Android, Firefox), which makes me distrust this a lot. Except now it does. Hmm. I think it’s just that the Obama one didn’t loop and I didn’t scroll down fast enough.

                    1. 4

                      It’s tagged as “Clojure” but it’s not Clojure. Janet is a Lisp, but it isn’t connected to Clojure beyond that.

                      1. 5

                        There isn’t a Janet tag, and Janet is heavily inspired by Clojure in terms of both syntax and semantics.

                        1. 1

                          Inspired, but not source compatible.

                      1. 1

                        https://tevps.net/blog I post about a lot of things, mostly technically related and things I’ve built.

                          1. 1

                            Link is broken. The correct link is https://tyler.io/so-uh-i-think-catalina-10154-broke-ssh/ – or apparently not. It’s all broken, but the one here is the one from the site.

                            [edited] Ah, the Slashdot effect: https://mobile.twitter.com/tylerhall/status/1245029223804395521

                            1. 2

                              Yeah, I think I’m really glad I decided not to go down that route. I would not have gotten anywhere near as far (even as someone who practically breathes code and has been doing this for quite some time…), and trying to get myself to a place where I might have been able to would have just been frustrating quite frankly.

                              1. 5

                                If you want to do lower-level work on the micro:bit, you can use PlatformIO for C, or Rust (see the MicroRust book).

                                1. 2

                                  There are a lot of missing items there, e.g.:

                                  • How does it deal with a clash where two clients have written to the same state?
                                  • How does it deal with dependent state (e.g. taking money from account A to account B) in the face of other actors potentially making changes to only a subset of the dependencies?

                                  There’s a bunch of other fairly standard distributed systems problems there as well, and just dealing with offline caching of reads/writes solves very few of them. OTOH, maybe they’ve got a good solution to these, but I’ll be happy with it when I can see the code.
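
                                  For the first bullet, the standard baseline answer is some form of optimistic concurrency – reject any write made against a stale version – as in this toy Python sketch:

                                      class VersionedStore:
                                          """Toy key/value store: a write only succeeds if the writer saw the latest version."""

                                          def __init__(self):
                                              self._data = {}  # key -> (version, value)

                                          def read(self, key):
                                              return self._data.get(key, (0, None))

                                          def write(self, key, value, expected_version):
                                              current_version, _ = self._data.get(key, (0, None))
                                              if current_version != expected_version:
                                                  # Two clients raced: the loser must re-read, merge or
                                                  # surface the conflict, and then retry.
                                                  return False
                                              self._data[key] = (current_version + 1, value)
                                              return True

                                      store = VersionedStore()
                                      version, _ = store.read("account:A")
                                      assert store.write("account:A", 100, expected_version=version)
                                      # A second writer still holding the old version is rejected instead
                                      # of silently clobbering the first write.
                                      assert not store.write("account:A", 50, expected_version=version)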

                                  1. 1

                                    Doing 37 (simply rm -fr && git checkout) is tempting, but then you never learn. I’ve probably wasted a bunch of time doing so, but I can count the number of times I’ve done that on one hand, and it’s a good exercise getting yourself out of that sort of hole (and improving your ability to identify exactly what sort of hole you’re in).

                                    1. 3

                                      Yeah, nobody’s ever gonna learn Git that way. People think they can get away with it forever, and then somebody’s check-in bites them in the ass and they’re like “Welp, guess I better read the documentation and understand how this abstracts everything.”

                                      1. 1

                                        I agree, git doesn’t fuck up. Build systems do.

                                        Knowing git is inescapable (as the standard SCM), whereas build systems are diverse and wrongly configured. git clean -dfx ftw.

                                        Sure, git’s nomenclature is alienating. Case in point, that command should be git clone, as that’s what git calls checkout, because checkout means reset, and reset means something else… But as compulsory knowledge, the learning curve makes no difference in the end.

                                      1. 2

                                        Home (random server box):

                                        Remote (Scaleway):

                                        I used to host my own email but got fed up with DKIM et al, and now use Fastmail. DNS is on Route53, mostly because of laziness.

                                        1. 1

                                          FYI – macOS supports Time Machine via SMB these days (since 10.14.x, I think? Certainly 10.15.x), and Samba has the proper bits in more recent releases (since 4.8?). No need to run netatalk any longer if you are only using it for that.

                                          I run samba on FreeBSD with zfs:

                                          [global]
                                              # fruit + streams_xattr provide the macOS (AAPL) extensions Time Machine
                                              # needs; catia handles awkward filename characters, zfsacl maps ZFS ACLs.
                                              ea support = yes
                                              vfs objects = catia fruit streams_xattr zfsacl
                                          
                                          [TimeCapsule]
                                              comment="TimeMachine"
                                              path=/tmx/timemachine/%u
                                              valid users=user1,user2
                                              write list=user1,user2
                                              read list=
                                              writeable=yes
                                              # Advertise this share as a Time Machine target to macOS clients.
                                              fruit:time machine = yes
                                          
                                        1. 3

                                          Still lacking: the actual steps used to get the syscall counts, so we can reproduce/fix his numbers. If you’re going to complain about a metric, at least let people know how you gathered it so we can maybe do something about it.

                                          1. 3

                                            I’ve been using AWS’s Route 53 recently for my personal hosting. For low-volume DNS (<1 million queries per month) it’ll be less than $1/month per zone. Notably, I don’t use any of their other services, as they’re not exactly competitively priced, but Route 53 is pretty good.

                                            1. 30

                                              I assumed that title meant “IPv6 breaks 30% of networking setups”

                                              1. 4

                                                Same here. I suggested a new title of “IPv6 Adoption Breaks 30%” to help clear up any confusion.

                                                1. 4

                                                  Breaks what? Took me a minute to realize what you really meant was “30% of Google users now use IPv6”

                                                2. 1

                                                  I also assumed this, as I just had an issue on Friday with an AWS VPN and OpenVPN on Ubuntu 18.04 where I had to disable IPv6 to get it to actually resolve anything.