1. 16

  2. 5

    I dunno about this: machines that are easy to change are hard to use, and vice versa. Adaptable machines tend to end up with lots of options that all have to be configured correctly before they can be used at all, and so have a higher barrier to entry. Building a highly flexible system well can multiply the effort involved by orders of magnitude; the most effective way to combat this is to limit scope. If you don’t, you often just end up with lots of mediocre solutions instead of a few highly polished ones.

    Malleable machines that you can dig into and get your hands dirty with are great, in the hands of someone who wants and needs to get dirty with them. However, all I ever really want from, say, my filesystem driver, is “it does what it says on the tin, quickly and with a low error rate”. I really neither need nor want to recombine workflows and experiences in my cellphone’s dialer software, and–

    –crap, I just thought of some cool things I could do with my cellphone’s dialer software if it had an API with a low enough barrier to entry.

    1. 1

      What cool things? I want to know!

    2. 3

      I love the big idea.

      For points 1, 2, and 3, the examples I would call out are Unix shell pipelines, decades-old AppleScript, and to some extent SQL queries. But they’re not so easy to change, and they don’t mix and match with each other. I’d say a huge obstacle is that it’s easy to work with data inside a particular language/framework, but hard to move data between languages.

      I have a very personal use case I call the “Cary Grant” problem. I want to watch all the Cary Grant movies I haven’t seen, ordered by top IMDB score, for the least money possible.

      That means I’d need to do an inner join on 1. the movie collection of my nearby libraries and 2. what Netflix has right now, while filtering out the movies on my personal already-seen list (a text file), and then order by IMDB score. I tried this for a few weeks; nobody wants to share their data.
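      Just to make the shape of that join concrete, here’s roughly what I’d want to be able to write, assuming each source could be coaxed into plain Python data. The catalogue variables and the seen-list file are hypothetical stand-ins; that’s exactly the data nobody will hand over.

      ```python
      # Rough sketch of the "Cary Grant" join. All inputs are plain Python data
      # I'd have to scrape or export myself; library_titles, netflix_prices,
      # imdb_scores and the seen-list file are hypothetical stand-ins.
      def cary_grant_watchlist(library_titles, netflix_prices, imdb_scores, seen_file):
          with open(seen_file) as f:                        # one title per line
              seen = {line.strip() for line in f if line.strip()}

          cheapest = {}                                     # title -> (source, cost)
          for title, price in netflix_prices.items():       # e.g. {"Charade": 0.0}
              cheapest[title] = ("netflix", price)
          for title in library_titles:                      # library copies are free
              cheapest[title] = ("library", 0.0)

          rows = [
              (imdb_scores[title], title, source, cost)
              for title, (source, cost) in cheapest.items()
              if title not in seen and title in imdb_scores
          ]
          return sorted(rows, key=lambda r: (-r[0], r[3]))  # best score first, then cheapest
      ```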

      I have plenty more ideas along these lines. I recently downloaded several years of my Fitbit data, and it’s a huge mess of undocumented JSON with differing formats for older years.
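      To give a flavour of the cleanup involved, this is the kind of normaliser I end up writing per source; the field names here are invented purely to illustrate the schema drift, they are not Fitbit’s actual format.

      ```python
      import json
      from datetime import date

      # Hypothetical normaliser for per-year export files whose schema drifted.
      # "minutesAsleep" vs the older "sleep_minutes" are made-up field names,
      # standing in for whatever the real export renamed between years.
      def normalise_sleep_record(raw):
          minutes = raw.get("minutesAsleep", raw.get("sleep_minutes"))
          day = raw.get("dateOfSleep", raw.get("date"))
          return {"date": date.fromisoformat(day), "minutes_asleep": int(minutes)}

      def load_sleep(paths):
          for path in paths:                  # one export file per year
              with open(path) as f:
                  for raw in json.load(f):
                      yield normalise_sleep_record(raw)
      ```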

      For point 4, I want a Fitbit-style watch where the data never leaves my own servers; the same goes for my location data and for which cell towers I use.

      Do you have interesting use cases that would require joining on disparate data sources?

      1. 2

        Yes! I think about this a lot. I’m kind of obsessed with collecting my data and using it, and it’s insane how hard it is to get my hands on it in order to actually do something with it.

        > differing formats for older years

        Yep, it’s one of the big issues I have. I’m consolidating my efforts in the HPI package, which is responsible for normalising and potentially arbitrating data.

        > Do you have interesting use cases that would require joining on disparate data sources?

        Yep! For me, one common use case for joining data is various well-being stats. For example, I’ve got dashboards for sleep, exercise, food, etc. I’ve posted some screenshots here.
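        The join itself is usually simple once each source is normalised to per-day records; roughly this kind of sketch (all names hypothetical, not the actual dashboard code):

        ```python
        from collections import defaultdict

        # Merge per-day records from several already-normalised sources on the date,
        # so every dashboard row has sleep, exercise and food side by side.
        # The three inputs are hypothetical iterables of {"date": ..., ...} dicts.
        def daily_rows(sleep, exercise, food):
            by_day = defaultdict(dict)
            for name, records in (("sleep", sleep), ("exercise", exercise), ("food", food)):
                for r in records:
                    by_day[r["date"]][name] = r
            return [{"date": d, **sources} for d, sources in sorted(by_day.items())]
        ```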

        Another tool along these lines I’m working on is promnesia, a browser extension which unifies annotations and bookmarks from different sources.