Good point brought up by this article. I too, would rather have a standard, really good migration tool to use on any project with a SQL database, than a different tool for Django vs Rails vs Ecto vs whatever else.
Regarding downgrade migrations, they are mostly valuable when you are iterating on your migration code. If it’s easy to write your downgrade (or it’s provided automatically), it is usually faster to undo your migration, modify it, and run it again than to restore your database from a backup and try your migration again. Database transactions are great (and table stakes for a migration tool), but often I realize I missed something (e.g. misspelled a column) long after the transaction has completed, and I’d prefer not to have a migration that looks like “Step 1: create a column. Step 2:… Step 6: rename that column because I spelled it wrong”.
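To make that undo/fix/re-run loop concrete, here’s a minimal sketch of the kind of downgrade I mean. Alembic is just one example of a standalone tool (the thread doesn’t name one), and the table and column names are made up:

from alembic import op
import sqlalchemy as sa

def upgrade():
    # forward migration: add the new column
    op.add_column("users", sa.Column("email", sa.String(255)))

def downgrade():
    # cheap to write, and it lets you downgrade, edit, and upgrade again
    # instead of piling on a “Step 6: rename the misspelled column” migration
    op.drop_column("users", "email")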
Although, I would say, just as I prioritize convenience and don’t bother writing a downgrade migration unless it’s trivial, I also prioritize convenience and happily use my ORM models in migrations, even though that comes with sharp edges (e.g. later changes to model methods silently invalidating your migration code).
If I misspelled a column name in a migration that’s still in development and hasn’t been deployed to staging or production, I just rename the field manually, without any migrations, and fix the typo in the original migration. Call it “rebasing migrations” ;)
I haven’t ever restored my dev DB from a backup because of an error in a migration; it takes too long.
We also don’t have model methods! Like, at all. Apart from repr, which I don’t count as a real method.
Company: PromptWorks
Company site: https://www.promptworks.com
Position(s):
Location: Philadelphia & New York for full time roles, remote US for contractors
Description: We are a development shop that focuses on software craftsmanship. Our calling is to help companies create amazing, intuitive web & mobile applications, APIs, products, and services.
Pair programming, continuous integration & delivery, kaizen, and TDD/BDD aren’t just ideas we pay lip service to, but core practices of our day-to-day work.
We love polyglots. We use lots of Ruby, Python, Elixir and JavaScript (mostly TypeScript, React and React Native).
Contact: https://www.promptworks.com/jobs, matt@ for engineers, mike@ for contractors, ben@ for PMs
The fact that current testing practices are considered “effective” is an indictment of the incredibly low standards of the software industry.
This one hits close to home. End-to-end testing in most environments I’ve worked in is by far the most valuable, but also so very, very hard. And so all the good tooling is for the low-hanging fruit: unit tests. Fortunately (and consequently) the industry is pushing everything toward more functional paradigms, because they’re the easiest to test.
I get what the author is trying to get at with calling it “serverless” and not sure if it’s a good or bad overloading of terms. But, I do think that SQLite is an underappreciated tool for the reasons they described. I wrote the following on Hacker News, but figured I’d add it here too:
I think a good under-appreciated use case for SQLite is as a build artifact of ETL processes/build processes/data pipelines. Seems like a lot of people’s default, understandably, is to use JSON for the output and intermediate results, but if you use SQLite, you’d have all the benefits of SQL (indexes, joins, grouping, ordering, querying logic, and random access) and many of the benefits of JSON files (SQLite DBs are just files that are easy to copy, store, version, etc. and don’t require a centralized service).
I’m not saying ALWAYS use SQLite for these cases, but in the right scenario it can simplify things significantly.
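As a concrete sketch of what I mean (Python’s stdlib sqlite3; the file, table, and rows are all hypothetical stand-ins for real pipeline output):

import sqlite3

# stand-in for the real pipeline’s output
rows = [(1, "widget", 9.99), (2, "gadget", 24.50)]

# write the output as a single-file artifact instead of a JSON blob
db = sqlite3.connect("products.sqlite")
db.execute("create table products (id integer primary key, name text, price real)")
db.executemany("insert into products values (?, ?, ?)", rows)
db.execute("create index products_price_idx on products (price)")
db.commit()
db.close()

Downstream consumers just copy the file and query it: no service to stand up, and the index gives you fast lookups that a JSON file can’t.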
Another similar use case would be AI/ML models that require a bunch of data to operate (e.g. large random forests). If you store that data in Postgres, Mongo, or Redis, it becomes hard to ship your model alongside updated data sets. If you store the data in memory (e.g. if you just serialize your model after training it), it can be too large to fit in memory. SQLite (or another embedded database, like Berkeley DB) can give you the best of both worlds: fast random access, low memory usage, and easy shipping.
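For instance, with a hypothetical layout (a leaves table keyed by tree and leaf; none of these names come from a real library), the point lookups look like:

import sqlite3

# model data shipped as one file next to the model code
db = sqlite3.connect("model-data.sqlite")

def leaf_value(tree_id, leaf_id):
    # point lookup by primary key: fast, and only this row is pulled into memory
    row = db.execute(
        "select value from leaves where tree = ? and leaf = ?",
        (tree_id, leaf_id),
    ).fetchone()
    return row[0] if row else None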
I think SQLite is great, and an amazing feat of engineering.
However, I really wish it would just check my types. If the database will happily write a string to my int column, and my language is dynamically typed… well, there’s only the fallible human left to ensure there’s no silent data corruption.
You can add check constraints using typeof, e.g. check(typeof(col) = 'integer') (note that typeof() returns lowercase type names).
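A quick demonstration from Python’s stdlib sqlite3 (table and values made up):

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table t (col integer check (typeof(col) = 'integer'))")
db.execute("insert into t values (1)")  # fine
try:
    db.execute("insert into t values ('oops')")  # can’t coerce to integer, stays text
except sqlite3.IntegrityError as e:
    print(e)  # CHECK constraint failed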
I agree static types are useful and important, but dynamic types are also useful for plenty of things, e.g. using SQLite with unclean data from external sources.
select typeof(col), count(*) from imported group by 1;
select col from imported where typeof(col) != 'integer';
update imported set col = ... where typeof(col) != 'integer';
It seems like the initial documentation might be older than the widespread usage of serverless as “no visible servers for you to manage”.
I think it would be a bit silly to choose this hill to die on; it’s not like the older meaning of serverless ever caught on in any way or form, nor is there really a trend of building the sort of thing that could be called the SQLite kind of serverless.
The text by itself doesn’t mean that the author is choosing to die on this hill, though; maybe it’s just about clarifying a specific piece of documentation.
This page is at least 12 years old, and remains largely unmodified since its creation, except for the second section added 2 years ago. See this archive of the page from 2007. No one is dying on any hill, it was just written long before the term was otherwise used.
“Serverless” here means literally what it says: the work is done in-process, not in a separate server. This is beyond a trend, it’s the way regular libraries work. ImageMagick is “serverless”. Berkeley DB is “serverless”. OpenGL is “serverless”. Get it?
The only reason the developer of SQLite calls this out is that most SQL databases are client-server, so someone familiar with MySQL or SQL Server might otherwise be confused.
(And may I add that I, personally, find the current meaning of “serverless” ridiculous. Just because you don’t have to configure or administer a server doesn’t mean there isn’t one. When I first came across this buzzword a few years ago, I thought the product was P2P until I dug into the docs. But then, a lot of buzzwords are ridiculous and we get used to them.)
I get what the author is trying to get at with calling it “serverless” and not sure if it’s a good or bad overloading of terms.
I’m sympathetic to this line of thinking, but in this case “serverless” is an utterly and completely lost cause. It’s beyond any hope of redemption. All use is fair game.
I think a good under-appreciated use case for SQLite is as a build artifact of ETL processes/build processes/data pipelines.
Ha, I built pretty much exactly that at Etsy years ago. We had an ETL that transformed the output of Hadoop jobs into SQLite files that could be queried from the site. It worked because without writers you don’t have any locking problems.
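These days you can even make that no-writers assumption explicit by opening the artifact read-only (Python sketch; the filename is made up):

import sqlite3

# read-only open: any number of site processes can query the file concurrently
conn = sqlite3.connect("file:listings.sqlite?mode=ro", uri=True)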
Perl 6 lets you write code in precisely the way that suits you best, at whatever happens to be your (team’s) current level of coding sophistication, and in whichever style you will later find most readable …and therefore easiest to maintain.
That very aspect is more likely to make your codebase very difficult to maintain unless you are able to maintain a high level of discipline.
That being said, I have a soft spot in my heart for Perl6 and tools that let you express code in the way that’s most natural for the problem at hand. When it comes to “use the best tool for the job”, Perl6 can very often be that tool.
Company: PromptWorks
Company site: https://www.promptworks.com/
Full Time Positions
Contract Positions
Location: Philadelphia. On-site most days at our Center City office.
Description: We are a development shop that focuses on software craftsmanship. Our calling is to help companies create amazing, intuitive web & mobile applications, APIs, products, and services.
Pair programming, continuous integration & delivery, kaizen, and TDD/BDD aren’t just ideas we pay lip service to, but core practices of our day-to-day work.
We love polyglots. We use lots of Ruby, Python, Elixir and JavaScript (mostly React and React Native).
Contact: mike@promptworks.com, but @nicholaides on Philly Dev Slack is better.
PromptWorks is hiring for development, design, and project management roles in Philadelphia.
Most projects are in Python, Ruby, Elixir, React, Vue, and/or React Native.
All relevant details on our jobs page.
Top 5 perks our employees love:
PromptWorks is hiring for development, design, and business development roles in Philadelphia and NYC. We’re a small software consultancy, mostly located in Philly.
Most projects are in Python, Ruby, Elixir, React, and/or React Native.
All relevant details on our jobs page.
Top 5 perks our employees love:
I’ve got a few. I’m sure I’m wrong, but it’s cathartic to say it out loud.
Basically all software sucks, now get off my lawn!
Startups aren’t a good idea financially unless you’re a cofounder.
I agree, but here’s a counterpoint. If you work at a startup you’ll get opportunities to work with all sorts of new technologies and solve all sorts of problems. Most other places won’t give you that much exposure and education in such a quick fashion. You can leverage those new skills towards higher pay in the future.
Also, even if you’re a cofounder, the same tradeoffs are at play, but at a higher intensity. It’s still not worth it financially (i.e. the expected value is less than what a normal wage would be), but the skills you learn will help you earn more in the future.
Did you consider using url-safe base64 encoding?
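Something like this (Python sketch; encode_id is a made-up helper, not from the article):

import base64

def encode_id(n):
    # big-endian bytes of the integer, url-safe base64, padding stripped
    raw = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

print(encode_id(12345))  # MDk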
He covers that in his article. He had a somewhat narrow and niche use case. This approach makes sense in that light.
You’re right, I missed that point.
I feel some of those requirements are self-imposed, with questionable real-world implications. Like, is it really that important for users to be able to read the URL?
Maybe in his case it truly is, but in general, I would always prefer to keep things simple and as “standard” as possible.
For this kind of application (or any API, really) the URL is a key part of the user interface. From a UI perspective, this solution is much simpler: certainly much less confusing than “URL is a garbled mess for these tables and not for others”. And looking at the code, it’s much simpler than base64 encoding too.
Whip up an RFC and send it to IETF and then it’ll be “standard” too…
I had the same thought. The article doesn’t address it explicitly, but it does say that one of their goals was to modify the encoded data as little as possible.