1. 4

    There are days when I just want to ON TRUNCATE CASCADE and watch the world burn.

    1. 2

      :P

      Are you Little Bobby Tables? https://xkcd.com/327/

    1. 1

      Can you do a comparison of litmus with ecto? Ecto does not need to be used with a database, and it has some validations built in, and it is very easy to create your own.

      It also looks like litmus only returns a single error at a time? Would it be possible to give all errors, so that we know how to fix everything in one go instead of continually doing validation just to uncover a single step at a time?

      1. 1

        I know we experimented early on with Ecto validation, but I can’t quite remember why we decided it wasn’t ideal for us. I know at the time Ecto was not yet at v3, and perhaps the validation facilities it has are better now? I know that we for sure wanted clear error messages, and validation as soon as possible in the request lifecycle, and those were motivators for building our own.

        That’s a good suggestion, and something we could definitely do in future versions. At the moment, Litmus is not-so-general as it was built specifically for our use case: we are validating user input in their API requests and we want to send back a single error message for the first thing wrong.

      1. 1

        This is an interesting read, but additional business context explaining the performance requirements would help readers make sense of some of the decisions. I’ve never used this product, so it’s unclear to me why 640μs, or even 19ms in the worst case, for inserting members into these lists is not fast enough. Also, why does this list need to be sorted? Why would a map not work for this use case?

        1. 21

          The title of the article is a heavy “editorialization” (more like misrepresentation) of the actual contents, a.k.a. bait and switch. The tl;dr quote from the actual text:

          Q: So when will these [generics, errors, etc.] actually land?

          A: When we get it right. [… Maybe, let’s] say two years from now […]

          Not the slightest mention of “coming to Go[lang] in 2019” for those.

          1. 3

            I agree. I clicked on it to see the specifics of how they did generics and modules. Especially generics since it was a contested issue.

            1. 4

              Modules are usable today: https://github.com/golang/go/wiki/Modules

              The additional work is turning it on by default and adding things like the central server.

              For generics the proposal is here: https://go.googlesource.com/proposal/+/master/design/go2draft-generics-overview.md

              And apparently there’s a branch you can test it with:

              https://github.com/golang/go/issues/15292#issuecomment-438880159

              1. 2

                Go modules are more “unusable” in their current state than “usable”. It looks to me like they didn’t solve any of the real problems with Go dependencies. They still don’t vendor local to the project by default, there is little or no CLI support for basic dependency management tasks (like adding new deps, removing old ones, updating existing ones), and there’s no support for environment-specific dependencies.

                At this point they just need to scrap it completely and rewrite it, using Cargo and Yarn as examples of how to solve this problem. It’s frustrating that this is a solved problem for many languages, but not for Go.

                On the plus side, I think it speaks to the strength of the language that it’s so prolific despite such poor dependency management tooling.

                1. 3

                  Completely disagree. I’ve converted several projects from dep and found modules a real pleasure. It’s fast and well thought out.

                  Vendoring has major problems for large projects. It takes me 30 seconds to run any command in Docker on macOS because there are so many files in this repo I’m working on. Sure, that’s a Docker problem, but it’s a pain regardless.

                  With modules you can get offline proxy server capabilities without needing to vendor the dependencies in the repo itself, and if you really want vendoring, it’s still available. And this is something they are actively working on: a centralized, signed cache of dependencies.

                  Also, the go mod and go get commands can add and update dependencies. You can also change import paths this way. It’s under-documented but available (run go get some/path@someversion).

                  Not sure about env-specific dependencies… That’s not a situation I’ve heard of before.

                  There are a lot of things Go got right about dependencies: a compiler with module capabilities built into the language, no centralized server, import paths as URLs so they were discoverable, and code as documentation, which made godoc.org possible.

                  And FWIW this isn’t a “solved” problem. Every other solution has problems too. Software dependencies is a hard problem and any solution has trade-offs.

                  1. 1

                    I’m glad it’s working well for you! This gives me hope. I’m basing my feedback on the last time I messed around with go modules which was a few months ago, so sounds like things have improved. Nevertheless I think it’s a long way off what it should be.

                    By environment specific dependencies, I’m referring to things like test and development libraries that aren’t needed in production.

          1. 2

            git config --global alias.fixup 'commit --amend -C HEAD' once and then a git fixup after adding new changes has never failed me.

            1. 5

              That indeed works if you only want to change the last commit; this automatically merges changes into multiple (“draft”) commits.

              1. 2

                I hadn’t heard of the -C argument for git commit before. It seems its long form is --reuse-message, and it is one of three related flags that are useful with --amend:

                • --no-edit – use the same commit message as the amended commit; update the author and timestamp
                • -c <commit>, --reedit-message=<commit> – use the same commit message as the specified commit; update the author and timestamp
                • -C <commit>, --reuse-message=<commit> – use the same commit message, author, and timestamp as the specified commit
              1. 9

                I’ve done this with teams before. Always regretted it. Flaky Tests should probably be called Sloppy Tests. They always point to a problem that no one wants to take the time to deal with.

                1. 2

                  Flakiness isn’t always test flakiness. We also have infra flakiness (or intermittent infra failures) that is hard or impossible to solve. With respect to test retries, I somewhat agree with you, but as always, this is a matter of tradeoffs: do you prefer faster product development, or an extremely reliable product with developers spending time trying to fix issues that are false positives most of the time?

                  1. 1

                    I haven’t tried this retry approach, but my gut reaction is to agree with you. Reading the article my first reaction was “why not just fix the flaky tests”?

                    If the tests fail sporadically and often, how can you assume it’s the tests at fault and not the application code? And if it’s the latter, it’s affecting customers.

                    1. 1

                      When new software is running on pre-production hardware, the line of delineation is not so easy to draw. Flaky tests could be one or the other, and filtering them out (based on whether the cause is one or the other) is not exactly straightforward.

                    2. 1

                      It sounds bananas for GUI development, but it can make sense for statistical software, where some failures are to be expected. Maybe failures are unavoidable for some GUI environments? I can’t think why off the top of my head, though.

                      1. 1

                        The biggest difficulty is that flaky tests in end-to-end environments involve basically every part of the system, so any sort of non-determinism or race condition (timing is almost always at the core of these) can be involved. Thank god JavaScript is single-threaded.

                        I once had a test fail intermittently for weeks before I realised that a really subtle CSS rule causing a 0.1s color fade would cause differences in results if the test was executing ‘too fast’.

                      1. 2

                        As I always say—the best code is no code. Applies to infra in this case.

                        But if you want to run a k8s cluster on the side for the purpose of learning about k8s, then that makes total sense to me.

                        1. 5

                          Neither this post nor https://lobste.rs/s/dtqqih/chrome_now_strips_common_subdomains_e_g has a ready link to why the Chrome team wants to do this. As far as I can tell (and please chime in if you know more), it boils down to an attempt to remove (what the Chrome team considers) redundant information from the UI in order to highlight more critical parts of the origin. It’s my understanding that the www. is targeted here because many pages automatically redirect to/from www. or mirror it entirely, making it not “user-controlled.” It would be nice if someone from the developer relations team explained this change a bit more (with examples).

                          A relevant link is https://www.chromium.org/Home/chromium-security/enamel#TOC-Eliding-Origin-Names-And-Hostnames.

                          1. 5

                            I think they’re trying to blur the line between a URL and an AMP URL. “User Agent” my butt, it’s a Google Agent now.

                            I just don’t get it (AMP, Omnibox changes). They’re basically lifting content from publishers, calling it a “privacy benefit”, and going through W3C to make it “standard”. I find this GIF downright misleading.

                            I think these changes are very frustrating, even as a Firefox user. Can you imagine if my blog showed pre-loaded Google Search Result Pages (for 1k most common words, or something, &c), changed the URL to show google.com and avoided any traffic to their domains? They’d sue the shit out of me if I didn’t cease & desist. (Didn’t some guy do that? And they blocked his entire domain?) But they have leverage over publishers, who don’t seem to care.

                            1. 3

                              Most sites redirect, but some have completely different things on www and the root. There were a few examples posted in a thread on Reddit.

                              1. 2

                                Most sites redirect, but some have completely different things on www and the root. There were a few examples posted in a thread on Reddit.

                                I think some NTP servers will display the webpage on www and serve the actual NTP service on the root.

                                1. 1

                                  I agree with you, I’m just trying to suss out the particular motivations behind this change given that there isn’t much context around it.

                                  1. 1

                                    That’s true, but enough users already assume that www and the root are equivalent. Sites like that are already broken.

                                    And the NTP pool moved their main website off of www.pool.ntp.org and onto (www.)ntppool.org because people kept hitting random third-party servers when they expected to get the pool’s website.

                                  2. 1

                                    I also haven’t seen any argument, convincing or not, about why they want to do this. The irony in it all is that google.com redirects to www.google.com. You’d think if they wanted to get rid of an “ugly” www that they would start with their own domain.

                                  1. 1

                                    Everyone has a right to be offended by whatever they want, but they don’t have a right for other people to give a damn.

                                    This is a case where people give a damn.

                                    Just making the change and moving on with our lives seems like the obvious solution, and should have been done from the get-go. No need to over-complicate things.

                                    1. 7

                                        I think DHH has a lot of accurate points here, but I think he’s wrong about not needing to write SQL or have an understanding of the database technology supporting your application. For applications that store little data in their database, I agree it may be possible to have a completely opaque perception of the database technology. However, I don’t see a way to forgo knowledge of the database layer and avoid writing SQL for things like running migrations on tables with millions of rows.

                                        As a simple example, creating an index on a large Postgres table without the CONCURRENTLY keyword is a surefire way to block writes on the table while the index is built and cause downtime. I don’t work with ActiveRecord, but it appears there is an abstraction for this keyword (algorithm: :concurrently). But how would you know to use this option if you don’t have an understanding of the database and its locking behavior?
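
                                        To make the difference concrete, here is a minimal sketch; the accounts table and email column are hypothetical names for illustration, not anything from the post:

                                        -- Plain CREATE INDEX takes a SHARE lock on the (hypothetical) accounts table:
                                        -- reads keep working, but INSERT/UPDATE/DELETE are blocked for the whole build.
                                        CREATE INDEX accounts_email_idx ON accounts (email);

                                        -- CONCURRENTLY builds the index without blocking writes, at the cost of a slower,
                                        -- multi-phase build that cannot run inside a transaction block.
                                        CREATE INDEX CONCURRENTLY accounts_email_idx ON accounts (email);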

                                        As another example, adding a NOT NULL constraint on a large table will also block reads and writes in Postgres while it validates that all current rows conform to the constraint. You’re better off creating a CHECK constraint marked NOT VALID to ensure the column is non-null, and then running VALIDATE CONSTRAINT on it later to avoid downtime. These are the kinds of things where knowledge of just an abstraction layer, and not of the underlying database, will cause problems.
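
                                        A sketch of that safer pattern, again using the hypothetical accounts table from above:

                                        -- Naive approach: takes an ACCESS EXCLUSIVE lock and scans every existing row
                                        -- before releasing it, blocking reads and writes on the table in the meantime.
                                        ALTER TABLE accounts ALTER COLUMN email SET NOT NULL;

                                        -- Safer: add the constraint without checking existing rows (only a brief lock) ...
                                        ALTER TABLE accounts
                                          ADD CONSTRAINT accounts_email_not_null CHECK (email IS NOT NULL) NOT VALID;

                                        -- ... then validate it later; VALIDATE CONSTRAINT takes only a SHARE UPDATE
                                        -- EXCLUSIVE lock, so normal reads and writes continue while it scans.
                                        ALTER TABLE accounts VALIDATE CONSTRAINT accounts_email_not_null;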

                                      To be fair, DHH does only mention that Basecamp 3 has no raw SQL in “application logic”, and he never mentions migrations in the post, so maybe he is ignoring migration-type SQL commands in this context.

                                      1. 3

                                          As another example, adding a NOT NULL constraint on a large table will also block reads and writes in Postgres while it validates that all current rows conform to the constraint. You’re better off creating a CHECK constraint marked NOT VALID to ensure the column is non-null, and then running VALIDATE CONSTRAINT on it later to avoid downtime.

                                        And this (among other things) is why I just can’t believe the claim that they could move from MySQL to Postgres and “not give a damn”.

                                        1. 1

                                            I interpreted that as meaning he wouldn’t care what underlying technology he used, not that the migration process would be trivial.

                                          1. 1

                                            But I’m not talking about the migration process either. He will care about the underlying technology when, for example, his team will have to tackle vacuum issues – long after the move has been done.

                                            1. 1

                                                But if your claim is that you do have to care about the technology, then your problem is with the entire blog post, not just with whether he would give a damn about running PostgreSQL.

                                      1. 1

                                          I would love to know the validity of this claim. It seems fishy that a patent was filed but no white paper was submitted to a journal for peer review (that I can find). If anyone with more expertise can provide their take on the matter, I would greatly enjoy it!

                                        1. 19

                                          The inventor has a website called boundedfloatingpoint.com. There he describes it in a bit more detail than the article, but not much.

                                          Note carefully how he describes it:

                                          This invention provides a device that performs floating point operations while calculating and retaining a bound on floating point error.

                                          And “[t]his invention provides error notification by comparing the lost bits”.

                                            It’s a solution to the problem of “unreported errors”. His solution provides extra fields in the floating point representation to carry information about “lost bits” and allows the operator to specify how many significant digits must be retained before an error is flagged.

                                          This is an advantage over the current technology that does not permit any control on the allowable error. This invention, not only permits the detection of loss of significant bits, but also allows the number of required retained significant digits to be specified.

                                          At a cursory glance one might be inclined to think he’s solved the problem of floating point, but the reality is he’s developed a standard for communicating error in floating-point operations that can be implemented in hardware.

                                          Not to detract from his solution, but it doesn’t seem like he’s invented anything that will surprise hardware designers.

                                          1. 7

                                            Thank you for that analysis. This is a real problem with floating point numbers, but hardly the only one.

                                            People who haven’t seen it might be interested in this post from last year about a new number representation called “posits”, which addresses some completely orthogonal issues with fixed-size number representations. :)

                                            1. 1

                                              Nice! Thanks for the link.

                                            2. 1

                                                It’s a solution to the problem of “unreported errors”. His solution provides extra fields in the floating point representation to carry information about “lost bits” and allows the operator to specify how many significant digits must be retained before an error is flagged.

                                                SIGNAL ON LOSTDIGITS;  /* trap the LOSTDIGITS condition when an operation drops significant digits */
                                                NUMERIC DIGITS 10;     /* carry 10 significant digits in arithmetic */
                                                NUMERIC FUZZ 2;        /* ignore the last 2 of those digits when comparing numbers */
                                              

                                              We just need to do all our math in REXX.