IHP looks really interesting, and as someone who’s a little daunted by Haskell, the effort put into making common web framework patterns easy is commendable. It definitely seems more approachable.
On the other hand, there’s absolutely no way I’m paying a monthly subscription to use a web framework. Or paying for a Business plan in order to use MySQL instead of Postgres. I don’t understand who this is targeted at… which I guess just means I’m not the target audience?
You can read some of the thoughts behind this in the announcement post of the paid plan https://ihp.digitallyinduced.com/blog/6392ad84-e96a-46ce-9ab5-1d9523d48b23-announcing-the-new-ihp-developer-subscription-ihp-v0-14
You don’t seem to get things like MySQL support without paying a subscription? https://ihp.digitallyinduced.com/Pricing
Well, perhaps you’re not the target audience then? I don’t see any valid reason why someone would be willing to start a new IHP project with MySQL but not with PostgreSQL. It doesn’t seem like it would be the blocker after overcoming the major hurdle of using Haskell.
If there wasn’t any need for MySQL support, then why is MySQL support offered as a paid feature?
It’s cool that you’re okay with this approach and it’s cool that the creator has chosen to monetise it this way, they’re absolutely entitled to. For me this is an uncomfortable intersection of the awkwardness of monetising open source software, with which I sympathise, and the pervasiveness of subscription pricing models these days. One-off payment for a licence per project? Sure, and in fact I have used this model in the past for client work. But I am unused to thinking of an open source web framework as a thing I would pay a subscription for. Who knows, perhaps this will become more commonplace in future too.
MySQL is mostly requested by large enterprise companies trying to use IHP. They typically have a lot of money to spend on things, and we’re happy to take their money to keep developing IHP. As an individual developer, you’re not the target audience.
Also, I’m going to take a guess that you had to put much more effort into the MySQL integration, since the Haskell ecosystem heavily favors Postgres.
Sure, and that makes sense. The announcement post mentions that the Pro plan is for individual developers, though — so they are the target audience?
I guess MySQL is offered as a paid option for those who have significant commitments in MySQL and don’t want to move?
Monetizing open source with subscription models has been a thing for a long time now; just look at Red Hat Enterprise Linux. Hell, look at Oracle JDK, which is basically just monetizing OpenJDK. Or now Akka. I really don’t get why people find it so awkward. Why should only closed-source software vendors get to offer software as a subscription?
Ah Akka is a pretty good comparison, interesting. I don’t use the JVM so didn’t realise it had switched to a similar model.
How do y’all think this will play out in Elixir versus what Gleam is doing? I’m not very familiar with Gleam, but I did read some of the FAQ: https://gleam.run/frequently-asked-questions/#how-does-gleam-compare-to-elixir
Gleam has a very different approach (and a different type system) influenced by ML languages like OCaml, so I don’t see why the two can’t coexist. Being designed with types from the beginning is very different to gradual typing, too, so I imagine they’ll still have different patterns and feel different to use. I’m still keen for Gleam!
When I started using WebAssembly to solve this problem, it wasn’t well known or even popular outside the browser, but it is slowly becoming the way of shipping backend applications.
It is? This is news to me. Curious to know if others agree with this or can point to examples.
Congrats! Gleam is super cool and I’m glad to see it keep getting better.
Nitpick: under “Compilation” you want “emitted” rather than “omitted”.
Another Rusty alternative is Sonic: https://github.com/valeriansaliou/sonic
Confusing naming. There’s already a rust project called warp, but it’s a web framework.
Funny story: Warp (the terminal emulator) is actually listing warp (the web framework) as a dependency: https://github.com/warpdotdev/warp#open-source-dependencies
How else are you going to make a GUI? Surely you’re not suggesting something as silly as, say, using a GUI toolkit, the way they did in the stone age!
@fasterthanlime has a nice video on why you would ever want to use a web browser for a random program’s GUI.
I appreciate the effort, but you’ve highlighted another generational difference: using a video to express a point that could hopefully, succinctly be made in the space of a few paragraphs, without requiring people to look at your face for ten minutes :)
Not to mention OS/2 Warp!
I’m worried I’ll confuse it with the original warp lines controlled by the original punch cards.
And OS/2 Warp.
I tried using the Vapor web framework on Linux; it seems to have a decent following for something that targets a non-Apple environment. It, and Swift on Linux, felt too immature at the time, though, with some flaws like terrible compiler error messages and very bad compilation times. This was probably two years ago now, so things may have changed.
One thing I’ve read over and over is “don’t store your files in a database”. I assume there are caveats, and times when this does make sense, but would anyone care to make the case for why this is a good or a bad idea in this particular scenario?
In general: Storing files in the database generates a ton of IO load on the database server, which is usually already IO-bound. If your database is busy doing other stuff (unlike, say, IMGZ which doesn’t do anything else) that’s going to degrade performance.
On the flip-side, it’s terribly operationally convenient when you can back up a single database and actually have all your data, and having consistency between the data store and the filesystem store is nice, so you can’t refer to files that have been deleted / failed to upload / whatever.
Generally, considerations for files are different from those for other data. E.g. you almost never need to filter files by their actual contents, or sum them, or do any of the other things databases are good at; you just want to be able to store them, retrieve them, and delete them. If you saved everything in a database, it would be more expensive, just because of the type of guarantees that you need for data, which you don’t need for files.
That means you’d unnecessarily be paying all the costs that are associated with what we normally want to do for data, but not need any of the benefits.
SQLite did an analysis on this and found that for files under a certain size, it is actually faster to read them from a database (depending on the database page size): http://0x0.st/iFUc.png
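That claim is easy to try for yourself. A rough, hedged sketch using Python’s built-in sqlite3 module (all names and the 4 KiB payload size are made up for illustration; actual timings will vary with OS, page size, and cache state):

```python
import os
import sqlite3
import tempfile
import timeit

# Store the same 4 KiB payload both as a database BLOB and as a plain file.
payload = os.urandom(4096)
workdir = tempfile.mkdtemp()

db = sqlite3.connect(os.path.join(workdir, "blobs.db"))
db.execute("CREATE TABLE files (name TEXT PRIMARY KEY, data BLOB)")
db.execute("INSERT INTO files VALUES (?, ?)", ("small.bin", payload))
db.commit()

file_path = os.path.join(workdir, "small.bin")
with open(file_path, "wb") as f:
    f.write(payload)

def read_db():
    # One SELECT; for small blobs this is typically a single page read.
    return db.execute(
        "SELECT data FROM files WHERE name = ?", ("small.bin",)
    ).fetchone()[0]

def read_fs():
    # Filesystem path pays an open/read/close syscall cycle every time.
    with open(file_path, "rb") as f:
        return f.read()

assert read_db() == read_fs() == payload
print("db:", timeit.timeit(read_db, number=1000))
print("fs:", timeit.timeit(read_fs, number=1000))
```

Which side wins depends heavily on blob size and the machine, which is exactly the point of SQLite’s analysis.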
It’s a bad idea in this scenario. But you’re not likely to care too much at very low scale.
One solid reason to care is that PostgreSQL stores rows in 8KiB pages, and column values larger than roughly 2KiB get moved out-of-line into what’s called a “TOAST” table, chunked into roughly 2KB pieces. Fetching a large value then means extra lookups against the TOAST table, so reading a 64KiB value causes dozens of additional chunk fetches from the storage engine.
Another characteristic of the TOAST table is that values are compressed on the way in, which can quickly saturate your CPU.
Another way to say that is, “it will work, until it doesn’t.” Which is true of 100% of scaling problems. :-)
The nice thing about simple solutions is that they can be easier to adapt and extend later when needed.
Scaling problems are for amateurs, if I get too popular for my architecture I’ll just disable the signup page.
But that’s one customer too late - who are you going to evict to get out of scaling problem territory?
You basically have three options with Postgres: you can put the file contents in the row, you can use the large object facility, or you can write the file to disk somewhere and store the path.
Putting the file contents in the row is simple, and is the only option that gives you for-free the notion that deleting a row will get rid of the content of the file. It has the disadvantages that others have discussed, although I don’t think TOAST is so bad for performance.
The large object facility is basically file descriptors in the database. You have to take extra care to remove them when you’re done with them, and most ORMs have poor support for it. I have never seen the large object facility used in the wild, and it’s not a tool I would reach for personally.
The third option is probably the best. The filesystem will always be better at storing files than your layer on top of the filesystem. But you have integrity concerns here (deleting a row does not cause the corresponding file to disappear), and you have to back it up separately from the database.
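As a sketch of that third option (schema and names are invented for illustration, and sqlite3 stands in for Postgres so the example is self-contained), note how deletion has to touch both the row and the file, which is exactly the integrity concern above:

```python
import os
import sqlite3
import tempfile
import uuid

STORAGE_DIR = tempfile.mkdtemp()  # in practice, a dedicated volume or bucket

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE attachments (id TEXT PRIMARY KEY, path TEXT NOT NULL)")

def save_file(data: bytes) -> str:
    """Write the content to disk first, then record its path in the database."""
    file_id = str(uuid.uuid4())
    path = os.path.join(STORAGE_DIR, file_id)
    with open(path, "wb") as f:
        f.write(data)
    db.execute("INSERT INTO attachments VALUES (?, ?)", (file_id, path))
    db.commit()
    return file_id

def delete_file(file_id: str) -> None:
    """Delete the row AND the file; forgetting either one leaks state."""
    row = db.execute(
        "SELECT path FROM attachments WHERE id = ?", (file_id,)
    ).fetchone()
    if row:
        db.execute("DELETE FROM attachments WHERE id = ?", (file_id,))
        db.commit()
        # If we crash between the commit and this remove, the file is orphaned:
        # that is the integrity gap a database-only approach avoids.
        os.remove(row[0])

fid = save_file(b"hello")
delete_file(fid)
```

A periodic reconciliation job (sweep the storage directory for paths no row references) is a common way to paper over that gap.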
As someone who has played with writing both an interpreted language and one that compiles via LLVM, I definitely second these! It’s fun, it’s actually not as hard as you’d think, and it gives you a much deeper understanding of how programming languages work.
https://craftinginterpreters.com/ is linked in the article and is very, very good.
I didn’t work from it, but I have read some of https://interpreterbook.com/ and thought it was pretty good.
If I am allowed to mention something I’ve made that’s relevant without it seeming spammy, I use Exist, a personal analytics tool that allows a mood rating and short note each day alongside the core quantitative data like activity, productivity, and sleep metrics etc. It didn’t really start off as a journalling setup, and there’s a character limit, but I find it quite useful to have a journal of what I was doing (and how much of it) alongside how I felt then. On my most happy days I tend to write about special occasions that occurred, and seeing friends, which is probably not a groundbreaking discovery :)
I used to have a good experience with Komodo IDE, but I haven’t used it for a few years now (ironically, I switched to PyCharm). Might be worth a look.
Thanks, downloading it now. It seems a bit outdated though as the latest Python version it supports is Python 3.6.
I backed it, but I’ve been continually thinking about unbacking it. As much as I want to like what they’re doing, there’s a lot I’ve not been impressed with (e.g. poor release communication, their handling of the librem.one service); I’ve been on the fence. Even after reading this, I have no idea what year my device will ship in. It reads like they’re slipping deadlines again, but don’t want to come out and say that. We’ll see, I guess.
Same on both counts. I also had a poor experience with a Librem notebook that I ended up returning; that soured me on their products in general.
Ordered mine… looking forward to it.
I guess I’m way too literal a person. I read licences. I read words and expect them to mean something.
People have been trained if they see an “I Accept” button, you click it and carry on.
It causes me mental anguish every damn time.
I’m just not the sort of person who can blindly do that.
I loathe “I Accept” buttons.
I think a lot of people have a mindset “Purism is free software, it should be cheaper and higher spec’d hardware”,
They forget that phones are usually heavily subsidized by the network providers, so they can lock you in, load you up with shitware and spyware, and strap you down with EULAs.
Yes, the privacy part will be a very nice to have. I want that.
But not nearly as much as I don’t want the lock-in and shitware and spyware and EULAs.
Not nearly as much as I want to be able to tinker and improve and feed my improvements into the ecosystem.
Not nearly as much as I want the acceptance and expectation from the people I pay my money to that…
I AM gROOT!
Backed their crowdfunder back … when? 2017? I plan on redeeming one from one of the later batches, so the review will take some time still. Until then, my Nexus 5 will do fine.
o7 to you who still uses the Nexus 5. I used mine until late 2017 when I picked up a Pixel. I’m on a Pixel 3 now and can’t imagine using a N5 still.
I’m still using mine. It, uh… works?
I mean, it’s a mobile device, so I don’t expect it to be pleasant, but once I got ublock origin installed it became pretty tolerable.
I pre-ordered one way back when they had the crowdfund. I find this tiered release rather confusing, TBH. But it’s good that they’re finally starting to ship!
I pre-ordered a few months ago. I wasn’t sure I’d use it enough to justify the cost (I seriously doubt it’ll cover everything I want in a daily driver device) but I decided it was worth it, because it’s something I want to see exist, so given I can afford it, I should support it. The Google/Apple mobile duopoly we currently have isn’t a great situation, so more competition (even in a very niche form) is welcome. I’m still sad about the Palm Pre, to be honest!
However, this shipping announcement really rankled. Another 6-10 months to get a phone with a case that fits? I appreciate that they’re offering to bump people down the list, and I’ll definitely take them up on it if needed, but it feels quite disingenuous to claim “we hit our deadline” with this sort of half-baked rollout. I’m considering asking for a refund and judging the final result before committing to it now.
I was moments away from putting down for one, but then I checked the specs on the modem and backed out. The set of supported LTE bands was spotty enough that I couldn’t see myself using this overseas or even on certain domestic carriers without constantly fighting reception issues.
I preordered one at the beginning of the year and just got an email from Librem with effectively the same information as this blog post, promising more info in a few weeks.
If you want an API for weather, I strongly suggest using Dark Sky: https://darksky.net/about
They’re not powered via ads, have a nice API, and appear to want to build something that lasts in a sustainable way instead of trying to profit.
For one thing, Dark Sky provides (almost) global coverage, which is useful for all of us folks who live outside the US (or have customers who do).
I use Dark Sky commercially and while it’s clearly more accurate in some places than others, it’s usually close enough. A few years ago I evaluated all the global weather APIs and it was the clear winner, I’m a happy customer.
DarkSky has a lot more points of data like apparent temperature, humidity, wind speed and direction, UV index, and likely a whole lot more. And the API seems like it’s laid out in a slightly more readable way. Weather.gov seems to follow a more HATEOAS methodology where it links out to different API endpoints that contain the data you’re looking for, whereas DarkSky just gives you the data all at once when you request a certain location.
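The difference in shape is easy to see with miniature made-up responses (these are not the real weather.gov or Dark Sky payloads, just an illustration of link-following versus everything-at-once):

```python
# HATEOAS-style: the first response mostly hands back links to follow
# with further requests before you reach the actual forecast data.
hateoas_response = {
    "properties": {
        "forecast": "https://example.test/points/x,y/forecast",
        "forecastHourly": "https://example.test/points/x,y/forecast/hourly",
    }
}

# Flat style: one request for a location, all the data inline.
flat_response = {
    "currently": {
        "temperature": 18.2,
        "apparentTemperature": 17.5,
        "humidity": 0.62,
        "windSpeed": 3.1,
        "uvIndex": 4,
    }
}

def needs_second_request(response: dict) -> bool:
    """Crude check: does the payload contain URLs instead of data?"""
    values = response.get("properties", {}).values()
    return any(isinstance(v, str) and v.startswith("http") for v in values)

assert needs_second_request(hateoas_response)
assert not needs_second_request(flat_response)
```

Link-following buys the API flexibility to reorganize endpoints, at the cost of extra round trips for the client.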
Whooop. Almost posted a reply thinking you were asking about weather.com.
Truth be told I didn’t know weather.gov was a thing. DarkSky can apparently give accurate hyper-local results.
Thanks a lot for the link. I have such a need for correlating wireless link quality/speed and weather, and needed something that works in France. :)
This is great! I really like this style of explaining how the pieces fit together from the ground up. Every time I’ve tried to start a project in Phoenix (or Rails, or Django, …) in the past, I’ve been pretty overwhelmed by the sheer number of different moving parts that are simply scaffolded in.
Agreed, this was super well written and easy to follow. I love the idea that each section links to a commit. I might steal that for my blog :)
Thanks! I’ve got a whole pipeline that transforms a git log into an article formatted to work with my static site generator. I’m a big fan of the format!
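For anyone curious what such a pipeline might look like, here’s a minimal, hypothetical sketch (this is not the commenter’s actual tooling; it just parses tab-separated `git log`-style output into markdown sections, and the commit URL base is invented):

```python
def log_to_markdown(log_text: str) -> str:
    """Turn 'hash<TAB>subject<TAB>body' lines into markdown sections."""
    sections = []
    for line in log_text.strip().splitlines():
        commit_hash, subject, body = line.split("\t", 2)
        url = "https://example.test/commit/" + commit_hash
        sections.append(f"## {subject}\n\n{body}\n\n[View commit]({url})")
    return "\n\n".join(sections)

sample = (
    "abc123\tAdd routing\tSet up the router and a first route.\n"
    "def456\tAdd templates\tRender the index page from a template."
)
print(log_to_markdown(sample))
```

In practice you would feed it something like `git log --reverse --format='%h%x09%s%x09%b'`, though multi-line commit bodies would need more careful handling than this one-line-per-commit sketch.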
As someone else who recently generated a new scaffolded project with Phoenix (just to have a play around, I don’t know much elixir) and felt overwhelmed by all the moving parts, I wanted to chime in too and say I found this post very handy. Thanks :)
I recently experimented with Reason and really liked it. One question: why not do what BuckleScript does, but target Go? (Basically fork BuckleScript.)
I would guess that way you get OCaml and Reason at once. Not sure how much work that would be, though. Being totally realistic, I feel like that approach would be more likely to succeed, though it would also be less fun to do.
That’s an interesting idea! Use BuckleScript’s existing frontend and add a backend that targets Go. Could work.
One of the reasons I chose to write Braid in Go was so it wouldn’t have any dependencies outside Go (and Go-related tooling). It seems neater, given I have a hard requirement on the Go compiler (once I get to the stage where it is invoked automatically as part of compilation).
I wish there were an OCaml-like syntax for Braid, to make it more similar to languages like OCaml, SML, Haskell, Elm and PureScript.
I don’t; Reason syntax is more natural to me, just from what I knew before. The syntax means relatively little anyway.
I did lean more towards an OCaml-like syntax initially, but thought something more C-like would be an easier sell for Go developers. (Maybe that’s not much of an argument though, as a viable language doesn’t actually need to be anything like the one it’s built on…)
It depends who your target audience is: if it is people who use Go and wish there were more functional concepts in their language of choice then it is most likely the most reasonable. If it is people coming from other functional languages, hoping to use the Go ecosystem and compiling into Go binaries, then an ML syntax would be more welcome.
I read the examples thinking, hey, this looks and works a lot like OCaml with different syntax; even the way it handles mutability is the same. And it turns out it is built in OCaml! Nice. I didn’t look further to see if it’s compiling to OCaml and then to wasm under the hood, but I wasn’t aware OCaml had a wasm target, so perhaps it’s all just a coincidence.
Congrats on the new project. Given you are the creator of the social site sublevel, which still seems to be alive, what was the impetus for going on to make this new one as well?
I guess I don’t know enough about CPU architecture, as I can’t understand the reason behind this change. Could someone explain why Intel would want to so dramatically increase the number of cycles a pause takes? Is it meant to be an efficiency tradeoff that means fewer explicit pauses while waiting for locks?
Indeed, a timing change like this is normally due to power efficiency constraints or targets. I’d conjecture that in their internal evaluation benchmarks, Intel decided that this allowed their cores to more aggressively drop to a lower power state while seeing an acceptable performance loss (which is exactly what Intel’s docs say, as shown in the article). It would seem that the .NET spinlock implementation depended on knowing the latency of the pause instruction. I wouldn’t call this a hardware performance regression; it just looks like software didn’t support the hardware well yet, and soon there will be official support from MS. It’s still a well done exploration into the performance regression of that workload.
EDIT: as someone pointed out in the HN thread, the change in cache configuration in Skylake is another possible (and probably bigger) motivation for changing the pause latency. He points out that specifically a dirty read from another core’s L2 has increased latency compared to previous gen’s dirty hit to the inclusive L3. I’d assume a shared hit wouldn’t be that much better.
EDIT2: DVFS latencies are on the order of ms for Intel speed shift, orders of magnitude too large to be useful in this context. The “small power benefit” mentioned would just be the reduction in dynamic power from the reduction in spinning.
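The fix for software like the .NET spinlock amounts to calibrating spin counts at startup instead of hardcoding an assumed pause latency. A language-agnostic sketch of that idea in Python (a dummy busy-loop stands in for the native pause instruction, and the 50µs target is an arbitrary illustrative value):

```python
import time

def busy_unit():
    # Stand-in for one 'pause'-style spin unit. On Skylake the real
    # instruction got roughly 10x slower, which is what broke code
    # that hardcoded how many units fit in a given wait time.
    for _ in range(100):
        pass

def calibrate(target_ns: int = 50_000) -> int:
    """Measure how many spin units fit in target_ns, instead of assuming."""
    start = time.perf_counter_ns()
    iterations = 0
    while time.perf_counter_ns() - start < target_ns:
        busy_unit()
        iterations += 1
    return max(iterations, 1)

# Measured once at startup, so the spin duration stays roughly constant
# even if the per-unit cost changes between CPU generations.
SPIN_COUNT = calibrate()

def spin_wait():
    """Spin for roughly the calibrated duration before backing off."""
    for _ in range(SPIN_COUNT):
        busy_unit()

assert SPIN_COUNT >= 1
```

The tradeoff is a small one-time measurement cost in exchange for not depending on a microarchitectural detail that Intel is free to change.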
If you want to create an account and you don’t have an iOS device, they recommend a web app that requires a Chrome extension called Alby. Putting aside the fact that this really limits the usefulness of a web app, this Chrome extension appears to be for Bitcoin payments…? (It does mention Nostr further down, but what?)
The developer is a really really really big Bitcoin enthusiast, and his personal page is just a Bitcoin company.