Threads for mt

  1. 4

    I’m wondering what Joel’s interest is in this. He has always been a great marketer of his own products, but this is a standard, not a product. Is this his “retirement” gig after making a gigaton of money from selling Trello (and FogBugz)?

    1. 4

      It seems to be created by hash.ai, which he’s affiliated with. I’d assume they are interested in getting more functionality (i.e. widgets) for document authors/users for free.

    1. 1

      All those engineering struggles to fit in 100 MB, a limit Apple later increased - not worth it =)

      1. 2

        As far as I understood it, it allowed most teams to keep iterating and working as usual while the limit was still in place; otherwise they would have hit it pretty soon.

      1. 2

        The rent-to-own model is interesting.

        1. 4

          I’m still not buying that argument.

          Just to start with this, to make it very clear: Federation does not necessarily mean multiple implementations. There is no reason federation would require that.

          Now, I am totally on-board with the fact that XMPP is terrible, but…

          • Signal is basically not changing anyway so they don’t even really need that property.
          • Matrix is achieving a decent amount of ecosystem change by providing canonical clients. (Not that Matrix is particularly great but at least that aspect seems to be working well.)

          So it really seems like, if you want to make a federated system (with alternative implementations):

          1. Make an implementation for every platform, so you can point general users to one which gives a good user experience. It will also be a standard for alternative implementations to be measured against.
          2. Don’t introduce a lot of changes. This actually seems plausible for messengers because they are very well known by now. It’s also to some degree required, because if your system is too much of a moving target, alternative implementations may not be able to keep up.
          3. Make it easy for people to figure out how to create an implementation and to be notified of changes they need to make.
          4. Make it easy for people to figure out when an implementation does not support something. To make this more manageable, go for a simple system with a single increasing version; avoid add-ons and optional features. (Or, you know, just rename things to Email 2.0 or whatever.)
          1. 3

            As soon as you let other people host server software, you get several implementations, at least “the old version” and “the new version”, because you cannot force people to update (nor can you expect that everyone would update in any reasonable amount of time).

          1. 3

            It befuddles me to speak about IndieWeb as a single technology, as it is more of a movement that builds several technologies: microformats, Webmention, Micropub, IndieAuth, PuSH, et cetera. I always thought of it as à la carte: you take the building blocks you want and forget about the rest. In this light, it’s obvious how one can arrive at tens of plugins needed to do the ‘IndieWeb thing’, but do you actually want the whole shebang?

            Edit: and the main point of the IndieWeb movement is to own your content, i.e. post on the website that you control. So posting on a standalone WordPress instance (or whatever) is still doing IndieWeb in my eyes.

            1. 2

              Yes, using your own domain is already IndieWeb, so you can’t remove IndieWeb support from your site as long as it’s still on your own domain, just remove support for a few building blocks. 😊

            1. 1

              As an example of the type of answer I would enjoy the most: graphite (http://graphiteapp.org/), because you’d probably want to collect a lot of different metrics, from host CPU/MEM to app metrics (e.g., database query time).

              1. 2

                To rehash a classic,

                find . -iname '*.txt' -print |   # list every .txt file under the current directory
                xargs cat |                      # concatenate their contents
                tr -cs A-Za-z '\n' |             # turn each run of non-letters into a newline (one word per line)
                tr A-Z a-z |                     # lowercase everything
                sort |                           # group identical words together
                uniq -c |                        # count each distinct word
                sort -rn |                       # order by count, descending
                head -n 10                       # keep the ten most frequent
                
                1. 5

                  You’re making the same mistake Drang did. The point of the Programming Pearls column wasn’t to compare Literate Programming to shell scripting. Knuth was just giving an example of what literate programming looked like. The actual example Bentley gave him wasn’t the point, so talking about how “Unix Way is better” for that problem misses the point.

                  Similarly, the point of this article isn’t that this problem is easy in C++. It’s that it’s clearer and safer in modern C++ than in traditional C++. That you can do it more cleanly in bash is beside the point.

                1. 3

                  For those who want to learn more about estimation: MIT Press published ‘The Art of Insight in Science and Engineering: Mastering Complexity’ (available freely online), which is all about the skill of approximation.

                  1. 16

                    Today I wrote a prototype of a project in TS/deno (about 500 lines): an HTTP server that reads/writes files on the FS and renders HTML.

                    My current feelings are positive. I especially like the ‘single binary’ idea taken from Go, and the ability to run deno install my_binary_name https://my-website.example/script.ts without publishing anything anywhere, aside from rsyncing the script to my web server.

                    Other things of note:

                    • Promises everywhere by default are nice, ditto await;
                    • you don’t need to import tons of core modules to start coding (e.g., fetch is already available, and Deno.writeTextFile(pathStr, contentStr) doesn’t require a require), so there’s less boilerplate for trivial things (see the sketch below);
                    • self-contained imports make it easier to write one-off scripts (no need to create a separate package.json and run/rerun npm install every time you add or update deps);
                    • deno run myfile.ts will compile it before running, which is great for one-off scripts (less tinkering with the build setup, more tinkering with the code), but for deployment you’d probably want to transpile to JS to avoid the slow start;
                    • it seems the TS version is pinned to the deno version, which makes things easier for me (fewer versions of software to think about);
                    • I suppose there’d be diamond-dependency problems on larger projects, where you’d need to coordinate versions somehow (the documentation suggests a deps.ts that imports and re-exports external dependencies).

                    In summary, it feels fresh and suitable for small tasks right away.
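
                    A minimal sketch of those “less boilerplate” points (filename and URL hypothetical): fetch is a global and Deno.writeTextFile needs no import, so a one-off script is a single self-contained file.

                    cat > save_page.ts <<'EOF'
                    // no package.json, no npm install: fetch and Deno.writeTextFile
                    // are available out of the box
                    const res = await fetch("https://example.com/");
                    await Deno.writeTextFile("page.html", await res.text());
                    EOF
                    deno run --allow-net --allow-write save_page.ts   # type-checks, compiles, then runs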

                    1. 7

                      Mine is not profound at all and quite possibly wrong, but nonetheless: write in JavaScript, rewrite in Rust.

                      I’ve found that every time I want to write X in Rust, I mess up somewhere along the way; writing in JS lets me prototype quickly and converge on a design that I can then rewrite in Rust (or throw out entirely, saving borrowck time).

                      1. 1

                        If only JS had Rusty enums. I’m porting my old JS library to Rust right now, and I’ve realized that instead of temp objects with boolean flags, optional fields and extra methods, I can just model it neatly with a single enum.

                      1. 3

                        One of the problems with Schema.org, in my opinion, is that it makes information invisible. If you already have the ISBN for a review, then show it to me, the reader, too! Microformats are better in that way: you mark up what’s visible on the page.

                        1. 9

                          If you don’t want to bother with .PHONY targets, tabs, weird syntax, and arcane built-in assumptions about build processes, check out Just, which is Make-inspired, but firmly focused on running commands, not producing files.

                          1. 10

                            I have a really hard time imagining what problems this solves that aren’t already solved by “a bin/ directory”?

                            1. 1

                              For example, just recursively walks up searching for justfile, so you can be deep in the project and still run just build, without clobbering PATH.

                              Consider the case where there are multiple projects: you’d have to use relative paths, insist on unique script names across all projects, or constantly reset PATH to each project’s bin.
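
                              A sketch of that upward search (recipe contents hypothetical):

                              cat > justfile <<'EOF'
                              build:
                                  cargo build --release
                              EOF
                              mkdir -p src/deep/nested && cd src/deep/nested
                              just build   # finds ../../../justfile and runs the recipe from the project root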

                              1. 2

                                I imagine you could use direnv for this, too—you could configure it so that whenever you enter a directory within ~/Projects/someproject it adds ~/Projects/someproject/bin to your PATH, and it would undo the change if you entered some other hierarchy. If you’re collaborating with others then I imagine that getting them to install Just would be easier than getting them to install and configure direnv, though.

                                1. 1

                                  I solve this problem a simpler way; I always have a shell open in the root of any project I’m ever working on, so bin scripts are very easy to use.

                            2. 7

                              tabs

                              yeah because that’s the problem with Make, it doesn’t use spaces.

                              1. 1

                                I have seen a few people mention Just. It looks like, while it does have a concept of dependencies, it doesn’t track whether a dependency is already satisfied; it just always runs all dependencies. Does it have a way to detect that? In a Makefile, this is where file timestamps play a role (and, as I showed, we can use this even for tasks that don’t produce files).
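
                                The timestamp trick for a task that doesn’t produce a file is to have it touch a stamp file; a sketch of the check Make performs, in shell terms (file names hypothetical):

                                # rerun `npm install` only when package.json is newer than the stamp
                                [ .deps-stamp -nt package.json ] || { npm install && touch .deps-stamp; }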

                                1. 1

                                  AFAIR, Just always runs the dependencies; it’s a simpler mental model. This issue recommends using make in tandem with just when you want incremental runs.

                              1. 26

                                Use Algo, a set of Ansible scripts that properly sets up a new virtual machine with WireGuard on various cloud providers and generates profiles for mobile devices as well.

                                1. 1

                                  Why would you want a column with derived data mixed in storage with primary, non-derived data (apart from FTS indexing)?

                                  1. 2

                                    If you have a need for pre-computed / “cached” data, especially with a workload of few writes and lots of reads, generated columns should help simplify your application / server-side code a lot.

                                    1. 1

                                      That’s a quote that doesn’t illuminate the potential use-cases enough for me.

                                      For example, could I use it to run another query at write-time using the data being inserted?

                                      1. 5

                                        For example, could I use it to run another query at write-time using the data being inserted?

                                        No, I do not think so. You have access to these columns at select, update, and delete time.

                                        For more use cases, one could also look at Oracle virtual columns; those have been around since 11g, and there are probably more blog posts/examples of them.

                                        Imagine that in PostgreSQL you can always use select col1, col2, my_transform_function(col1, col2, col3) from some_table_1.

                                        The value of the third column is whatever comes out of your transformation function. Now, that’s pretty useful on its own, and most people in the PG world would use it (for example, to expose one field of a complex JSON stored in column 3, transformed based on data in the same row sitting in columns 1 and 2).

                                        For example, in the world of complex derivatives trade processing, trade lifecycle JSON objects can have 500 to 1,000 fields nested 10 levels deep; similar complexity exists in clinical or scientific data/experiment management. Clearly, if you want to filter or join on fields inside those complex objects, you will at some point have to extract those values.

                                        So, if I want to offer filtering criteria by one of those nested fields, what are my options?

                                        • I can parse that field out at insert time and store it as a separate column. But then I have to understand the details of those objects at ingest time, and every time they change something I have to re-ingest the whole thing (terabytes and terabytes of data).

                                        • I can apply my_transform_function at select time. But then I cannot use the database engine to efficiently filter out rows that do not satisfy a criterion on the transform results, and the engine has to repeat this work for every select sent to the database. Inefficient.

                                        • A much better option is to pre-compute the value of the virtual column on first insert, and again on any update to that row (letting the DB figure out when to re-compute, since it knows when inserts/updates happen).

                                          Then ask the DB engine to create an index (sometimes called a function index) on that column. From there on, selects, joins (and updates/deletes) that filter on the precomputed column will be more efficient. That’s the value of generated columns (see the sketch after this list).

                                        • You could also create a form of materialized view (sort of a projection of the table, or multiple tables, with this data extracted), but that has its own complexities, as materialized views are ‘snapshots’ in time and are pretty demanding. (They can also be maintained by hand-coded triggers, probably the least efficient option.)

                                        The generated columns in PostgreSQL 12 (and in other DB engines) have a number of limitations, so they are not like true columns: for example, you cannot partition based on them, and they probably cannot participate in all types of indexing strategies or in materialized views (although I have to read more about it).

                                        But overall this is a very useful feature for data hubs that cannot anticipate at ingest time all the possible query/filtering/join criteria that will be needed to serve selects (or deletes during archiving, as another example) over their data sets.
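
                                        A concrete sketch of that pre-compute option in PostgreSQL 12 syntax (table, column, and JSON path hypothetical):

                                        psql tradedb <<'SQL'
                                        CREATE TABLE trades (
                                          id      bigserial PRIMARY KEY,
                                          payload jsonb NOT NULL,
                                          -- recomputed by the engine on every insert/update of the row;
                                          -- jsonb path extraction is immutable, as generated columns require
                                          counterparty text GENERATED ALWAYS AS (payload #>> '{parties,counterparty,name}') STORED
                                        );
                                        -- selects/joins that filter on counterparty can now use a plain index
                                        -- instead of re-parsing the JSON for every row
                                        CREATE INDEX trades_counterparty_idx ON trades (counterparty);
                                        SQL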

                                    2. 1

                                      Perhaps you want to store a JSON payload exactly as you received it from an external API, but you’d also like easy access to the data within… or maybe you want access to a computed result of that JSON.

                                      1. 1

                                        I think complex indexing expressions (like the FTS indexing you mentioned) are a pretty good use case. For example, I need to sort a lot of stuff depending on the day (but regardless of the time of day), so I have to index on this ugly big expression “date_trunc('day'::text, timezone('utc'::text, post_time))”. It would be nicer to have a generated column for that, with an index on it.
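
                                        With PostgreSQL 12 that could look something like this (table and column names hypothetical; the expression qualifies because date_trunc on a plain timestamp and timezone with an explicit zone are both immutable):

                                        psql blog <<'SQL'
                                        ALTER TABLE posts
                                          ADD COLUMN post_day timestamp
                                            GENERATED ALWAYS AS (date_trunc('day', timezone('utc', post_time))) STORED;
                                        CREATE INDEX posts_post_day_idx ON posts (post_day);
                                        SQL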

                                      1. 7

                                        Stop recommending GPG.

                                        1. 7

                                          Stop recommending to stop recommending GPG. GPG is difficult and absolutely has its sharp edges, but it is also “standard”. I use it every single day in a corporate environment, for personal use, and in a ton of places in between. The article you link fundamentally misses one of the main reasons almost everyone uses GPG: encrypted email. I do a ton of vulnerability disclosures, and mailing security@COMPANY.WEBSITE with a GPG key and a vulnerability notification is the only consistent way to safely get my communications across. I’ve dealt with S/MIME, home-brewed crap, third-party web portals, and a ton of other things. GPG is the only usable thing in the space that I have, and I’ve never seen a successful migration away.

                                          1. 4

                                            Stop recommending to stop recommending to stop recommending GPG.

                                            For one thing, it’s not as simple as whether the tool is “a good thing” or not. If your goal is to use an existing email address with cryptography, there’s probably no better way to go about authenticating a message than what GPG does. If you really do need it, then obviously you should use it. If you’re able to employ it with enough success that getting error messages is actually a sign of intrusion, rather than being seen as a sign that you messed something up, then it’s doing its job.

                                            The question, of course, is whether running cryptographic secure communications over existing email infrastructure, or something very much like it, is actually a requirement that most people have. It is for you, because you’re constantly sending unsolicited messages to people you have no preexisting connection with. So the value of using “standard” communication channels is greatly heightened, compared to people who mostly communicate with friends, family, and coworkers, and probably prefer using communication channels where both sides have to open a gateway to contact each other (look ma! no spam!). If you’re using a communication channel that requires such an explicit opt-in, then that opt-in stage is the perfect place to perform key exchange while you’re at it.

                                            Also, a lot of use cases where PGP is currently employed would be better served with other tools. For example, if I was God King of Debian and had the chance to redesign their package management system, I’d probably build their package signing on top of libsodium instead. It’s actually intended to be embedded in other applications: it has a far better API, a far simpler design, and there’s really no point in using a “swiss army knife” CLI when it’s being invoked through Debian-developed wrapper tools approximately 100% of the time anyhow.

                                            1. 6

                                              There is a difference between categorically saying “stop recommending GPG” and “check to make sure GPG is what you need and that there isn’t an alternative”. I stand by my first negation: GPG has its place.

                                              Whether you or I like it or not, the vast majority of the corporate world in the US (and outside it) uses e-mail as its primary form of communication, and because of that I have to deliver reports, exploit PoCs, breach notifications, etc. that are absolutely sensitive. If my only form of contact with those organizations is e-mail, then what exactly are my options? GPG is “standard” for all of those use cases. Any mature organization I work with has had at least one security point of contact with a GPG key that can be used for further confidential conversations. I’d love to get rid of email, but let me tell you, if you try to force your preferences onto another organization you are going to have a bad time.

                                              I’m in total agreement about package management signatures being a not-so-great place for GPG, but that’s why I mention sharp edges. It’s not a swiss army knife, but I think much of that is as much the fault of apt/dpkg as it is GPG’s.

                                          2. 4

                                            What do you recommend in place of it?

                                            1. 7

                                              Check this list out, it seems pretty good https://blog.gtank.cc/modern-alternatives-to-pgp/

                                              1. 1

                                                I hope saltpack gets more attention. It seems like the perfect drop-in replacement.

                                                1. 1

                                                  I don’t like that keybase seems to be the only thing developing/pushing it. Also:

                                                  What state is the project in?

                                                  It’s a draft and being tested in the keybase alpha app.

                                            2. 2

                                              It’s one of only a few tools the NSA said they couldn’t break. They were breaking many other things people are using. Using a subset of it to just encrypt and decrypt files containing messages is easy enough even for lay people. It can be scripted, too.

                                              Given NSA > most other threats, using GPG will probably handle them, too. So I prefer it for proven effectiveness. The attackers will probably get me via Firefox before it.

                                              1. 1

                                                Using a subset of it to just encrypt and decrypt files containing messages is easy enough even for lay people

                                                This is misleading; gpg’s interface is notorious for being easy to misuse.

                                                Given NSA > most other threats

                                                If this is your threat model, then it’s more about opsec than specific tools. Check out the grugq’s guide on operational PGP for email, for example; it’s quite tricky to get right every time.

                                                1. 1

                                                  “gpg’s interface is notorious for being easy to misuse.”

                                                  People have been repeating that for years instead of mitigating it. I wonder why, given how easy it is: you create a cheat sheet with just a few items on it, add the good options for the key-generation phase, and do something about the painful encrypt command (a shell script or something so they type less). Then you’re good.
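
                                                  A sketch of what such a cheat sheet might boil down to (addresses and filenames hypothetical):

                                                  gpg --full-generate-key                                # one-time: generate a keypair
                                                  gpg --encrypt --armor -r alice@example.com report.txt  # writes report.txt.asc for alice
                                                  gpg --decrypt report.txt.asc > report.txt              # decrypt a file sent to you
                                                  gpg --symmetric notes.txt                              # passphrase-only variant, no keys involved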

                                                  “If this is your threat model”

                                                  My threat model is people breaking crypto, and I prefer vetted solutions. The Snowden leaks showed the NSA struggling with this one; most others they broke. If it causes them problems, it should work well against the lesser attackers most people are concerned with.

                                            1. 2

                                              When the dimensions and settings of the medium for your visual design are indeterminate, even something simple like putting things next to other things is a quandary. Will there be enough horizontal space? And, even if there is, will the layout make the most of the vertical space?

                                              This raises the question: should web pages in 2019 still be putting things next to each other?

                                              Every site I can think of that becomes unusable on my smartphone or in my half-width tiled browser windows is one that attempts a 2- or 3-column layout. When I reorganized my own personal site from multiple columns to just one, I was able to delete half of my CSS.

                                              1. 3

                                                Isn’t this what media queries are for? Not trolling, genuinely curious.

                                                1. 3

                                                  Check out the other articles on the linked site for the case against media queries. TL;DR: kinda yes, but media queries make it hard to build reusable components, because they’re inherently global and thus hardly compose (whereas you can nest the linked pattern and it works seamlessly).

                                                  1. 1

                                                    I believe so, but it seems many sites don’t implement them flawlessly.

                                                1. 7

                                                  IndieWeb has an interesting and somewhat pragmatic approach to this issue: make everyone a Brand (i.e., a personal domain) and then spray lightweight federation on top (Webmentions, microformats, etc.). For auth they propose IndieAuth, which is almost OAuth + personal domains. TL;DR: each correspondent sets up their website once and can effortlessly communicate with anyone after that.

                                                  1. 6

                                                    I like the idea and the ideals of IndieWeb, but I think the bar of “set up a personal domain, and make a webpage on it with these microformats” is still a little beyond people outside of tech.

                                                    1. 4

                                                      I think there are actually three problems folded into one: a) the technical difficulty of registering and setting up a domain (which is potentially solvable by nice UX), b) the difficulty of putting something on the Web (servers, maintenance, etc.), and c) the costs. Domains cost money, as do servers. And paying $6.25/month ($5/mo for a VPS; $15/year for a domain) is a lot more than the $0 it costs to use Twitbook.

                                                      So I think IndieWeb is right to focus on people who can do b) and are sufficiently put off by the Big Brands to bear c). OTOH, I agree the Web should be simpler and more accessible for non-tech people.

                                                      1. 4

                                                        As someone who can afford a $5/mo VPS, I’d love to have a decent framework for easily adding users as part of the Fediverse. I’m talking about something more personal than being a node in something like FreeNet, and more “secure” than letting randos get a shell account. Something like a GeoCities page with defined storage limits and an easy-to-use interface for web publishing, Mastodon, etc. would be great.

                                                        Maybe it already exists?

                                                        1. 7

                                                          I’m kinda working on something along these lines but haven’t officially announced anything yet. The basic idea is to let people upload HTML files and to enhance the experience by providing HTTPS, a posting interface, various feeds generated from user files, etc.

                                                          Edit: allow me to expand a little more: this project is about developing a hub of sorts, so a more techy user can deploy it and provide an indie experience for friends and family.

                                                          1. 2

                                                            That sounds great! Please update us when you feel it’s ready for the world.

                                                            1. 1

                                                              This sounds very cool! +1 on the “please update us when you feel ready to”

                                                      2. 2

                                                        Re: why I think it’s pragmatic: any identity system relies on some authority. In the case of DNS+TLS it is spread between domain registrars and CAs (letsencrypt is good). A new system would necessitate a new authority you’d have to trust. Meanwhile, the Big Brands have a vested interest in keeping the old system working and trusted.

                                                        The other benefit of the IndieWeb approach is that it keeps information visible and accessible, as opposed to some “standard” JSON. That leads to easier adoption (you can send a webmention by hand) and keeps the focus on making things work, not on committees debating the ideal JSON schema.
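
                                                        Sending a webmention by hand really is just two HTTP requests; a sketch with curl (URLs hypothetical):

                                                        # 1. discover the endpoint (advertised via a Link header or <link rel="webmention">)
                                                        curl -si https://their.site/post | grep -i webmention
                                                        # 2. POST the form-encoded source and target to it
                                                        curl -si https://their.site/webmention \
                                                          -d source=https://my.site/reply \
                                                          -d target=https://their.site/post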

                                                      1. 3

                                                        In my current job, we have TFS CI create and publish a package to Octopus, which depending on the environment is automatically or manually deployed. The Octopus jobs include DB migrations etc. It’s very much a “pet” setup, and provisioning, DNS, load balancing etc is all done manually.

                                                        In my previous role, we used TFS CI pipeline to build and publish a package to Artifactory, then used TFS CD with a combination of powershell scripting and VRealize to spin up AWS machines and load balancers, deploy, configure DNS etc. Very much “cattle”, though we’d configured the pipeline to allow us to treat the servers in “pet” mode which was useful for minor changes. Because we could configure DNS it also let us do blue/green deployments.

                                                        Although VRealize was, how can I put this… a little rough, in the end we ended up with (almost) single-button deployment for our Ops team - the only thing not done when I left was certificate installation. It was technically possible but the org’s security team had to be convinced that it was secure enough.

                                                        1. 1

                                                          What is TFS?

                                                          1. 2

                                                              I guess Microsoft’s Team Foundation Server.

                                                            1. 1

                                                              What is TFS?

                                                              Microsoft’s source control/CI/CD suite of products. Though I think they’ve rebranded it somewhat recently.

                                                          1. 1

                                                            We have identified (more than a decade ago, in fact) a disciplined programming style that uses existing type systems in practical, mature languages (such as OCaml, Scala, Haskell, etc., and to an extent, Java and C++) to statically assure a wide range of safety properties:

                                                            • never dereferencing a null pointer or taking the head of an empty list;
                                                            • always sanitizing user input;
                                                            • using only in-bounds indices to access (dynamically allocated) arrays of statically unknown size.