I agree that they should, but I don’t think it is necessary. For the last 6 years, Duck has been a constant feature on the front page of various hacker forums. I myself use it for non-critical, non-production analytics. Now, with the 1.0 stability promise, I think using it in a production environment is a real prospect. I’ll wait another year myself before making that move.
Just a couple of days ago I needed to verify the correctness/consistency of some CSV files. I needed to cross-check them in different combinations, which in SQL terms meant JOINs. I remembered that in the past I had come across a couple of tools that let you query CSVs directly. However, the first one I happened to try this time was DuckDB. After a small and mostly effortless learning curve, with some trial and error, it worked flawlessly for the purpose. It seems it has a chance to become a constant and honorable addition to my toolbox.
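For the curious, here’s a minimal sketch of the kind of cross-check I mean, using DuckDB’s Python API (the file and column names are invented for illustration):

    import duckdb

    # DuckDB can query CSV files directly in the FROM clause;
    # the schema is auto-detected.
    duckdb.sql("""
        SELECT a.id, a.total AS total_a, b.total AS total_b
        FROM 'export_a.csv' AS a
        JOIN 'export_b.csv' AS b USING (id)
        WHERE a.total <> b.total  -- rows where the two files disagree
    """).show()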
Since there are a million *DBs and the article doesn’t explain what it is (that I found).
@zmitchell’s comment is how I first learned what it is, so it was at least necessary for me!
I’ve been using it in production since v0.9. It’s great for implementing fast, flexible APIs on slow-moving data.
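A minimal sketch of that pattern, assuming DuckDB’s Python API; the database file, table, and helper are invented for illustration:

    import duckdb

    # Open the database read-only; the data file is refreshed out of band
    # (it moves slowly), so API queries never contend with writers.
    con = duckdb.connect("analytics.duckdb", read_only=True)

    def top_customers(n: int):
        # Parameterized query; DuckDB uses ? placeholders.
        return con.execute(
            "SELECT customer_id, SUM(amount) AS total "
            "FROM orders GROUP BY customer_id "
            "ORDER BY total DESC LIMIT ?",
            [n],
        ).fetchall()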