The lady serving drinks is Saint Sofia, the patron of Sofia, Bulgaria, where I live. Sofia means “wisdom” (or “learning”) in Greek – that’s also why there’s an owl chillin’ there: in Greek mythology the owl is a symbol of wisdom. It was custom-made by a Bulgarian artist. I like the drawing, partly because it sits in my field of view while programming (which is a creative exercise), and being surrounded by art is nice (though I can understand if this is a bit much for other people :P)
I’m planning a third-person RPG as a start. My wife will take charge of most of the modeling and I will do the rest lol. I’m totally new to this field, so I’m not holding high expectations for it.
Attempting to port my language translation library to support ARM. Not going to be a fun weekend. https://github.com/kroketio/kotki
Working on https://github.com/kroketio/kotki - a C++ library (with Python bindings) that translates text from one language to another without using the cloud. I’m happy with the result, as the API is now stable and everything seems to work OK. And it’s open-source.
In the original Qt, the platform backends are in separate plugins. I have never understood the advantage of this, and it makes deployment more complicated. LeanQt puts both the frontend and the backend in the same library. It is even easy to merge all parts of LeanQt into just one static or shared library.
Because sometimes it’s useful to support more than one in a single application. For example, on macOS you can ship an X11 or Quartz UI with different plugins (normally you’d use Quartz, but you could use X11 for remote display over SSH). I believe the key use case for this was to allow local and remote displays, where you’d use the X11/Wayland/Quartz/Win32 back end for local display and an HTTP / WebSocket / Canvas back end for remote display. This would let you run a Qt app natively on your desktop and also serve it for your mobile devices to use from a web browser.
I’m not sure to what extent anyone has taken advantage of this, but it always makes me uncomfortable when people say ‘I don’t understand why this feature exists, and so I’m going to remove it’.
Thanks. The last time I used Qt with the X11 binding on a Mac was twenty years ago. It’s still possible to compile an application with LeanQt so it works with X11 on Mac, as long as there is an xcb API on Mac (didn’t check). It would even be possible to statically link an application with both the xcb and Cocoa integrations to make them selectable at startup, and still not use plugins for this. Concerning the use of desktop applications on mobile devices, I’m sceptical; usually the look and feel is so different that you have to implement different GUIs for desktop and mobile, unless it’s a very simple application.
it always makes me uncomfortable when people say …
Well, it’s a good criterion for weeding out questionable features; my experience with Qt spans twenty years and a wide variety of application areas including CLI, server and embedded; if it’s not immediately obvious to me that it’s an important feature, I take the liberty of throwing it out. People can continue to use original Qt if someone absolutely needs the feature.
I’m sure there are reasons why such a plugin system exists. In fact, every line of code in the Qt codebase ended up there for a reason and, thus, can be reasoned about.
What I will say, however, from the standpoint of a plain ol’ FOSS developer (me) who has the reasonable wish to distribute his plain ol’ QtWidgets program in a statically compiled manner: this is unreasonably difficult to achieve with Qt, and consequently I have spent/wasted many hours in this area … partly because of things like the plugin system, which complicates the build process a lot while such feature(s) (seemingly) gain me nothing.
I am excited about LeanQt for this reason; it aims to make Qt development easy for the majority while those with ‘exotic’ requirements can use ‘the real Qt’.
I’m sure there are reasons why such a plugin system exists.
There was a time when people thought that a modular program must be composed of dynamic libraries, or even of components in different processes with marshalled calls; anything else would have been called monolithic. But fashion trends come and go, and with each generation of developers priorities change.
distribute his plain ol’ QtWidgets program in a statically compiled manner
That’s easy to do with LeanQt; but it’s also easy to put all Qt in a single shared library to appease LGPL if need be.
while such feature(s) (seemingly) gain me nothing.
So there are already at least two people who see it that way.
I usually bash my keyboard. The resulting filename could be, for example:
wegin4g0weg.png. After a while, my folder is full of such files.
When do I clean the folder up? When I bash my keyboard and then the rename fails because the file already exists.
These people are building their server side service in Swift:
ORM, JWT, ed25519 implementation, etc.
Hmm, it seems somewhat weird to do these benchmarks with Flask+Sanic, as Quart is the closest asyncio-powered spiritual successor to Flask.
Simple solution: get your project off of Github. You are using someone else’s platform. Host your own gogs/gitea.
This is less work than it sounds, and the benefits are huge.
Simple solution, get your project off of Github.
This isn’t necessarily so simple. GitHub has successfully established itself as a centralised, even the de facto default, Git hosting service, and project discovery is a lot easier on GitHub than on other Git hosting services. I agree that the benefits of moving off of GitHub are enormous (and I myself have started hosting my personal projects on git.sr.ht where possible), but the reality is that the userbase on sr.ht is minuscule in comparison to GitHub’s, you can’t star repos, and you can’t follow other users to see their activity.
That’s not to mention things like GitHub Sponsors, which for some maintainers might be the sole reason they’re able to keep maintaining their repositories, and GitHub Actions, which lowers the barrier to entry for, and smooths the experience of using, CI. The reality is that some maintainers might not have a choice but to use GitHub.
The other thing is that I think this comment, along with many others on this article, misses this point from the article (emphasis mine):
DigitalOcean seems to be aware that they have a spam problem. Their solution, per their FAQ, is to put the burden solely on the shoulders of maintainers.
To be clear, myself and my fellow maintainers did not ask for this. This is not an opt-in situation. If your open source project is public on GitHub, DigitalOcean will incentivize people to spam you. There is no consent involved.
While moving off of GitHub might alleviate the problem of spam PRs as a result of Hacktoberfest, it’s yet another solution that puts the burden on the maintainer to try to treat the symptoms, rather than addressing the root problem, which should be the responsibility of DigitalOcean.
I agree. There are multiple things that can happen here, all of them positive:
GitHub is the de facto standard, the userbase on sr.ht is minuscule
I run a gitea+keycloak instance and got 100+ users within a few months. It hosts several projects. People actively sign up in order to contribute to my projects. This has worked great for us because the chances of low quality contributions and/or spam issues are non-existent. I compare it to “Slack/Gitter” vs “IRC”, where we prefer IRC due to the (perceived) learning curve/difficulty … “skin in the game” comes to mind. It weeds out drive-by beginners and leaves the high quality contributions.
That’s not to mention things like GitHub Sponsors, which for some maintainers might be the sole reason they’re able to keep maintaining their repositories
Sorry to say but developers chose to actively participate in a centralized ecosystem run by a mega corporation not because there are no alternatives but because they are lazy. GitHub Sponsors is not the only way to obtain funding.
which lowers the barrier to entry for, and smooths the experience of using, CI
There are only perceived barriers. Drone+Gitea is not rocket science.
rather than addressing the root problem, which should be the responsibility of DigitalOcean.
IMO the root problem is that some FOSS developers don’t realize they’re locked into an ecosystem for no reason whatsoever. They don’t care. They think self-hosting takes a lot of time. They want their little Github stars. Name your excuse, yet they complain when shenanigans like this happen. I am certain you will not agree with this post, but at least I have the peace of mind of being in control of my own community.
I want to preface this by saying that in no way am I (or have I been) defending GitHub and what their attempt at becoming a centralised service has done to the FOSS community. Instead I’m trying to be a realist and point out some of the reasons why project maintainers might find it difficult to move away from GitHub, counter to your assertion that it’s “less work than it sounds”.
I am certain you will not agree with this post
I wouldn’t be so certain if I were you; I agree with some things you’ve said. However…
not because there are no alternatives but because they are lazy
Drone+Gitea is not rocket science
They don’t care. They think self-hosting takes a lot of time. They want their little Github stars.
Just because, in your experience, these things have not been difficult for you does not mean they’re easy for everyone. That’s my main point. Alternative sources of funding may not be easy for everyone to access, everywhere in the world. Maintainers might not have the time or energy to set up or learn to use other CI services. Self-hosting may take a non-trivial amount of time for some people, and costs money.
All of this is not to mention that, for existing projects, moving their hosting to another service could be a major disruption.
I just want to add an additional point. I think that the following things are reasonable things to ask a project author to either accept (if choosing to host a project on GitHub) or reject (if choosing to host elsewhere):
However, I don’t think the following thing is (I don’t think it should have to “come as part of the package”, so to speak):
Funnily enough, today I started work on a small WebRTC project and this comes up on Lobste.rs. I’ll make sure to give it a good read. Thanks for the share!
This is of great help to beginners looking into making QML applications with Qt5!
I have 3 nitpicks:
The use of qmake in the examples, which is deprecated; The Qt Company is moving towards CMake.
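For reference, a minimal CMake setup for a Qt5 QML app might look something like this (project and file names here are only examples, not taken from the article):

```cmake
# Minimal CMakeLists.txt sketch for a Qt5 QML application
cmake_minimum_required(VERSION 3.16)
project(myapp LANGUAGES CXX)

# Run moc and rcc automatically, as qmake used to do for you
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)

find_package(Qt5 REQUIRED COMPONENTS Core Quick)

add_executable(myapp main.cpp qml.qrc)
target_link_libraries(myapp PRIVATE Qt5::Core Qt5::Quick)
```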
At work I’m doing a bunch of small~ish stuff this week. I need to split up some components in our long term storage solution for our metrics pipeline (M3DB). I also need to fix a couple of bugs in our logging pipeline and refresh a PR for filebeat that has been sitting there for a while due to my lack of time. Because of this, there are now a bunch of things to fix due to changes in the interfaces.
Personally I’m working on a small side-project to scratch my itch of totally removing Google Analytics from my blog. I’m replacing it with a small tool that I’m building that doesn’t require any script to be added to the web page. Currently I can track referrers, geolocation and normal stats. On the plus side I’ve found out (thanks to this) that a lot of script kiddies really try to find a vulnerable WordPress/Joomla setup on my blog 🤷‍♂️. Related to this, I’m also working on a small auth proxy for Grafana to integrate with Cloudflare Access.
Guessing you are parsing access logs :)
I also made “my own Google Analytics” recently but took a different approach. On the page(s) in question I include a small JS snippet that fires an XHR request to some endpoint where a Quart app is running with a “catch all” route handler.
It’s a very simple program really, and prevents the use of Google Analytics. I can query my database and easily get, for example, a listing of most popular pages for my site in a certain time period.
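A rough stdlib-only sketch of that catch-all idea (a plain WSGI app stands in for Quart here; the table and column names are made up for illustration):

```python
import sqlite3

# In-memory store for the sketch; a real deployment would use a file path
# and probably a timestamp column for the time-period queries.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE hits (path TEXT, referer TEXT)")

def app(environ, start_response):
    """Catch-all WSGI app: record every request's path and Referer header."""
    db.execute(
        "INSERT INTO hits VALUES (?, ?)",
        (environ.get("PATH_INFO", "/"), environ.get("HTTP_REFERER", "")),
    )
    db.commit()
    # No body needed; the JS snippet only fires and forgets.
    start_response("204 No Content", [])
    return [b""]
```

A “most popular pages” report is then just a GROUP BY over the hits table.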
Well … almost 🙂. My blog is statically hosted on GitHub Pages with my personal domain, so no server access logs. What I’m doing is relying on an edge worker running on Cloudflare to collect the data, including referrers and geolocation. Visualization is handled by Grafana.
I used Pelican for this website, and it works well. Generates HTML fast enough, and has automatic reloading through inotify. I edit/create posts in VIM or a markdown editor.
Using Node + CMS + SEO stuff seems rather elaborate if you just want a simple blog - however I’m quite the minimalist.
Looks rather interesting, I’m going to have to try this one day. Absolutely despise the usual suspects (Ansible, Docker, Puppet, etc).
As someone who was recently asked to do a deep dive on Ansible, I’m curious, how would you describe the downsides of Ansible?
Sorry for the late, late response to this but my few complaints about Ansible:
All in all, Ansible isn’t bad, and it definitely has served my company well, but the level of complexity introduced is high and you’ll end up writing a bunch of wrapper scripts if you don’t want to remember the 10000 command line flags you need to run any moderately complex scenario.
Very interesting. I’m someone who observes ansible a bit from afar – I grew up in the fab + chef + libcloud era, but my DevOps team took over and switched to ansible + terraform, which are admittedly more solid tools for cloud automation. To me, the only real downside of ansible I could tell from studying it (aside from all the YAML-ese) is that its “push” model starts to slow down for big clusters and cloud footprints. But then I discovered mitogen for ansible and it seems like that’s actually becoming a solved problem, without the downsides of the pull model. In which case, it feels to me like ansible will stand the test of time due to ecosystem/network effects, but I could be wrong!
asyncio is an embarrassment and though it is the officially blessed library the truly Pythonic way is to ignore it and use trio
Here’s a post on some of the issues with asyncio: https://web.archive.org/web/20171206104600/https://veriny.tf/asyncio-a-dumpster-fire-of-bad-design/
And another one (by the eventual author of trio): https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/
I used to have some links to what made Trio great but seem to have lost them. The documentation is pretty good: https://trio.readthedocs.io/en/stable/design.html And in practice, I’ve spent unknown amounts of time debugging crazy asyncio issues because some task threw a StopIteration when it shouldn’t have and caused some other coro to fail silently, or whatever the problem of the day is. I’ve used Trio less, but these kinds of problems just don’t seem to happen.
I like Trio but the ecosystem around it seems to be in its infancy. I made a library for the Quart web framework that supports both the asyncio and Trio event loops. This library communicates with Redis, so I had to get an adapter for both.
aioredis is an easy pick, however for Trio it was hard to find something. I ended up copying some unmaintained toy project into my project as-is since I could not find a better alternative (which is working fine so far, but def. not an optimal situation). This effectively splits the community in half.
Using Trio for my professional work is a hard sell because of that reason, even though I reckon Trio is probably the better implementation.
Yes, most libraries are written for asyncio, which is kind of a shame.
However, trio-asyncio lets you use them from inside your trio app: https://trio-asyncio.readthedocs.io/en/latest/
It’s kind of a nightmare to port a large existing app from asyncio to trio, and probably isn’t worth it, but I think the existence of trio-asyncio means there are few reasons to start a new asyncio app.
asyncio is a worse-is-better API. It’s good enough to be productive despite its warts, and that’s going to require any replacement to keep a substantial compatibility layer or else stay in relative obscurity, as most other better projects suffer when a worse-is-better API or project becomes popular.
asyncio is a worse-is-better API. It’s good enough to be productive despite its warts
I’m going to try not to sound too bitter, but asyncio is not even worse-is-better:
It’s not simple, for either implementers or users. It’s not correct, given how it’s essentially impossible to cleanly handle cancellations and exceptions. And it’s not consistent; it’s a collection of layers that have slowly accreted, and occasionally you have to drop down and deal with the raw generators.
that’s going to require any replacement to keep a substantial compatibility layer or else stay in relative obscurity
I would have agreed with you a few years ago; the things you’re saying are generically correct, but now we have Trio. Trio is simpler in both implementation and programming model; it just makes sense.
It also has a lovely compatibility layer: https://trio-asyncio.readthedocs.io/en/latest/
I’m going to try not to sound too bitter
It’s okay. I avoid asyncio Protocols and transports like the plague - they’re just too fragile for me to handle.
Thank you for the trio asyncio adapter. I wonder if I can integrate it into my sanic + uvloop + aioredis + asyncpg project
Awaiting bare coroutines in the simplest procedural case; using asyncio.get_running_loop() to get the event loop reference and loop.create_task(coroutine) to make Tasks to schedule in parallel for fan-out operations. At a previous job, I actually plugged a ProcessPoolExecutor into loop.run_in_executor as the default executor and used it to do truly parallel fan-outs for a project that would index TomTom product data with an API key, determine the new releases, and download the 7z archives for map shapes in parallel. It would then unpack and repack them into an S3-backed tar of shapefile tar.gzs and index the offsets into a database, so it was trivial to fetch exactly the country/state/city polygon needed on the fly for use in our data science and staged polygon updating. It was pretty nifty actually, considering that archive packing was CPU-bound, which necessitated process concurrency; my hack allowed me to develop single-process first (using a ThreadPoolExecutor descendant class which just scheduled coroutines in parallel) and then go full multiprocess for the things that ate CPU.
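A minimal sketch of that executor fan-out pattern (a ThreadPoolExecutor and a trivial stand-in function are used here so the snippet stays self-contained; the real project swapped in a ProcessPoolExecutor for the CPU-bound work):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

def repack(n):
    # Stand-in for the CPU-bound archive repacking; a ProcessPoolExecutor
    # would give true parallelism past the GIL for work like this.
    return n * n

async def main():
    loop = asyncio.get_running_loop()
    # Install the pool as the loop's default executor; passing None to
    # run_in_executor then routes work through it.
    loop.set_default_executor(ThreadPoolExecutor(max_workers=4))
    futures = [loop.run_in_executor(None, repack, n) for n in range(5)]
    return await asyncio.gather(*futures)

print(asyncio.run(main()))  # [0, 1, 4, 9, 16]
```

Developing single-process first and only then switching the executor class is exactly what makes this layout pleasant to debug.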
One thing that surprised me was the “Don’t Access Attributes” example, where findall() is faster than re.findall(). The runtime could (easily?) cache this lookup if it’s in a tight loop (or tiny scopes in general).
Some bookkeeping would be required; i.e., if the object attribute changes, invalidate the cache. I don’t know how Python works in the background though, so this optimization might not be worth the extra code.
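For reference, the trick in question is just hoisting the attribute lookup out of the loop by hand (a small self-contained sketch, not code from the article):

```python
import re

words = "lorem ipsum dolor sit amet " * 1000

def with_attribute_lookup():
    out = []
    for chunk in words.split():
        # re.findall costs a global + attribute lookup on every iteration
        out.append(re.findall(r"[a-z]+", chunk))
    return out

def with_local_alias():
    findall = re.findall  # one lookup up front, then a fast local name
    out = []
    for chunk in words.split():
        out.append(findall(r"[a-z]+", chunk))
    return out

# Both do identical regex work; only the name lookup differs.
assert with_attribute_lookup() == with_local_alias()
```

Since re caches compiled patterns internally, the measured difference really is the lookup, not the regex.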
At its core, CPython is a very simple stack-based virtual machine for Python bytecode. The main loop, for example, is literally just a switch statement with one case per bytecode instruction.
There are some optimizations in there – the compiler can do some constant folding, the bytecode interpreter knows some pairs of instructions are likely to occur together and can handle them in ways that make your processor’s branch predictor happy, and so on – but not that many. As far as I’m aware, its simplicity is a deliberate choice.
There are alternative implementations of Python (like PyPy) which do much more advanced things and can offer corresponding performance gains, but generally people who are choosing Python are making a tradeoff of CPU time for developer time, and if they hit situations where they absolutely must get code to run faster, they’ll investigate either small, targeted optimization tricks (like aliasing a nonlocal name accessed repeatedly in a loop, or otherwise rewriting the code not to access a nonlocal from inside a loop) where available, or just rewrite performance-critical sections in C.
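The aliasing trick is visible directly in the bytecode via the dis module; here abs is just a stand-in for any global accessed in a loop (this sketch is mine, not from the talk):

```python
import dis

def global_each_time(items):
    total = 0
    for x in items:
        total += abs(x)  # abs is looked up in globals/builtins every pass
    return total

def aliased(items, _abs=abs):  # bind the builtin once, at definition time
    total = 0
    for x in items:
        total += _abs(x)  # now a cheap local-variable lookup
    return total

def opnames(f):
    return {i.opname for i in dis.get_instructions(f)}

print("LOAD_GLOBAL" in opnames(global_each_time))  # True
print("LOAD_GLOBAL" in opnames(aliased))           # False
```

The default-argument binding is the classic form of the trick; assigning a local alias just before the loop achieves the same thing.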
If you want more detail, I gave an intro talk at PyCon a couple years ago (video, slides since the A/V setup had to use a degraded version). The slide deck has pointers to some more in-depth materials.
If you’re familiar with Flask but want websockets, I’d highly recommend FastAPI. I have a project that started off with Flask, but I really wanted a websocket for one silly, stupid thing. I wound up using it as an excuse to learn FastAPI, and by extension Python’s asyncio ecosystem. FastAPI has been a joy to work with; the community is incredibly friendly and helpful and the documentation is great.
If you want websockets and you’re coming from Flask, use Quart instead. It is literally Flask but rewritten from scratch, supporting asyncio.
FastAPI is geared towards developing a REST API (which it is absolutely great at).
I was considering using FastAPI but I was worried about how new it is. Have you run into any hurdles?
None at all, I’ve found the FastAPI documents to be a lot easier to understand than Flask, but I think a lot of that may have been from increased domain knowledge by the time I got around to working with FastAPI.
I like Zeal, a simple Qt program for offline documentation. It’s usually faster than searching for the online equivalent.