1. 14

Additional information:

http://kb.mozillazine.org/Password_Manager#Troubleshooting

https://support.mozilla.org/en-US/questions/1006123

What’s old is new again; this information was originally stored in signons.txt. Here’s the relevant bug from 9.5 years ago: https://bugzilla.mozilla.org/show_bug.cgi?id=288040

  1. 10

    What was slow? There’s not much information about what operations were slow or how much faster they are now.

    1. 6

      While it shouldn’t come as much of a surprise that a JSON text file is faster to work with than a SQLite database, I thought it was interesting from a historical perspective, especially for those who may have missed the everything-must-be-in-a-database craze of the late-90s/early-2000s.
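
      As a rough, toy illustration of the difference (a sketch in Python, not Firefox’s actual code; the table name, field names, and row count are guesses on my part):

```python
import json, os, sqlite3, tempfile, timeit

# ~200 fake saved logins; the fields are invented for illustration.
logins = [{"hostname": f"https://site{i}.example", "username": f"user{i}",
           "password": f"secret{i}"} for i in range(200)]

tmp = tempfile.mkdtemp()
json_path = os.path.join(tmp, "logins.json")
db_path = os.path.join(tmp, "signons.sqlite")

# Write the same data once as a JSON blob and once as a SQLite table.
with open(json_path, "w") as f:
    json.dump({"logins": logins}, f)

db = sqlite3.connect(db_path)
db.execute("CREATE TABLE moz_logins (hostname TEXT, username TEXT, password TEXT)")
db.executemany("INSERT INTO moz_logins VALUES (?, ?, ?)",
               [(l["hostname"], l["username"], l["password"]) for l in logins])
db.commit()
db.close()

def read_json():
    with open(json_path) as f:
        return json.load(f)["logins"]

def read_sqlite():
    con = sqlite3.connect(db_path)
    try:
        return con.execute("SELECT hostname, username, password FROM moz_logins").fetchall()
    finally:
        con.close()

# Time a full read of the store, 1000 times each.
print("json  :", timeit.timeit(read_json, number=1000))
print("sqlite:", timeit.timeit(read_sqlite, number=1000))
```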

      1. 9

        It’s pretty cool to see this back and forth between plain text and indexed file and back to plain text.

        I suspect it has to do with CPU speed: at some point it was faster to use an indexed file (i.e. a database), and now, with much faster CPUs, the overhead of a database (both at run time and in code maintenance) is beaten by a plain text file.

        As a side note, I worked in neurophysiology for a while, and I got a little frustrated with otherwise great analysis software packages that integrated themselves very tightly with a database backend, making it very hard to break off pieces and use them for new projects.

        1. 7

          It is also interesting to read jwz's opinion about this.

          I never understood the fascination some developers have for databases.

          1. 5

            Henry Baker is of the opinion that the overzealous use of relational databases set the industry back at least 10 years:

            http://home.pipeline.com/~hbaker1/letters/CACM-RelationalDatabases.html

            1. 2

              I've never read such a well-written description of one of the problems:

              Why were relational databases such a Procrustean bed? Because organizations, budgets, products, etc., are hierarchical; hierarchies require transitive closures for their “explosions”; and transitive closures cannot be expressed within the classical Codd model using only a finite number of joins (I wrote a paper in 1971 discussing this problem). Perhaps this sounds like 20-20 hindsight, but most manufacturing databases of the late 1960’s were of the “Bill of Materials” type, which today would be characterized as “object-oriented”. Parts “explosions” and budgets “explosions” were the norm, and these databases could easily handle the complexity of large amounts of CAD-equivalent data. These databases could also respond quickly to “real-time” requests for information, because the data was readily accessible through pointers and hash tables–without performing “joins”.
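
              A toy sketch of the “parts explosion” Baker describes, chasing in-memory references instead of doing joins (the parts and quantities are invented; the point is that the recursion depth is unbounded, which is exactly the transitive closure a fixed number of classical joins can’t express):

```python
# A toy "bill of materials" explosion, done the pre-relational way Baker
# describes: follow references (here, plain dict lookups) instead of joins.
# The part names and quantities are made up for illustration.
bom = {
    "bicycle": [("frame", 1), ("wheel", 2)],
    "wheel":   [("rim", 1), ("spoke", 36), ("tire", 1)],
    "frame":   [("tube", 4), ("weld", 8)],
}

def explode(part, qty=1, depth=0):
    # The recursion depth is unbounded, so the classical Codd model (a fixed
    # number of joins, no recursive queries) cannot express this closure.
    print("  " * depth + f"{qty} x {part}")
    for child, n in bom.get(part, []):
        explode(child, qty * n, depth + 1)

explode("bicycle")
```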

        2. 6

          The biggest surprise for me is that there’s enough data for this to matter at all. What is the password manager doing to require so much data and run so many queries?

          My gut instinct says SQLite is overkill for a browser password manager, but I wouldn’t expect speed to be the complaint.

          1. 2

            Same. With a database on the order of hundreds of rows, I’d expect sub-millisecond responses. Is it speed, or speed of development?
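
            As a quick, unscientific sanity check of that sub-millisecond expectation (toy in-memory table with a made-up schema):

```python
import sqlite3, timeit

# Toy table of ~500 saved logins; the schema is invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE logins (hostname TEXT PRIMARY KEY, username TEXT, password TEXT)")
con.executemany("INSERT INTO logins VALUES (?, ?, ?)",
                [(f"https://site{i}.example", f"user{i}", f"pw{i}") for i in range(500)])
con.commit()

def lookup():
    # Point lookup by primary key, the kind of query a password manager
    # would run when filling a login form.
    return con.execute("SELECT username, password FROM logins WHERE hostname = ?",
                       ("https://site250.example",)).fetchone()

n = 10_000
total = timeit.timeit(lookup, number=n)
print(f"{total / n * 1000:.4f} ms per lookup")
```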

          2. 1

            My understanding (and I’m happy to be corrected) is that SQLite is used for a bunch of internal data storage, including bookmarks and the like. That would make this a matter of optimizing away from the general case, rather than throwing out relational technology.

            1. 1

              Why aren’t profiles tied to a version? I don’t understand why it is so difficult to run multiple versions of Firefox at the same time. It really feels like a bullshit singleton problem. And this problem affects the very people who will compare and find bugs between versions; it should be easy to do this.