Threads for dsc

  1. 2

    I like Zeal, a simple Qt program for offline documentation. It’s usually faster than searching for the online equivalent.

      1. 5

        that mural looks really cool!

        1. 3

          Willing to share any backstory on the mural?

          1. 12

            The lady serving drinks is Saint Sofia, the patron of the city (Sofia, Bulgaria, where I live). Sofia means “wisdom” (or “learning”) in Greek – that’s also why there is an owl chillin’; in Greek mythology the owl is a symbol of wisdom. It was custom-made by a Bulgarian artist. I like the drawing, partly because it is in my field of view while programming (which is a creative exercise), and being surrounded by art is nice (but I can also understand if this is a bit much for other people :P)

            1. 2

              Beautiful, thank you.

        1. 7

          Making 3D games with Godot and Blender.

          1. 2

            What kind of games are you looking to make?

            1. 1

              I’m planning a third-person RPG as a start. My wife will take charge of most of the modeling and I will do the rest lol. I’m totally new to this field so I’m not holding high expectations for it.

          1. 3

            Attempting to port my language translation library to support ARM. Not going to be a fun weekend. https://github.com/kroketio/kotki

            1. 2

              Working on https://github.com/kroketio/kotki - a C++ library (with Python bindings) that translates text from one language to another without using the cloud. I’m happy with the result, as the API is now stable and everything seems to work OK. And it’s open-source.

              1. 7

                In the original Qt, the platform backends are in separate plugins. I have never understood the advantage of this, but it makes deployment more complicated. LeanQt puts both the frontend and the backend in the same library. It is even easy to merge all parts of LeanQt into just one static or shared library.

                Because sometimes it’s useful to support more than one in a single application. For example, on macOS you can ship an X11 or Quartz UI with different plugins (normally you’d use Quartz, but you could use X11 for remote display over SSH). I believe the key use case for this was to allow local and remote displays, where you’d use the X11/Wayland/Quartz/Win32 for local display and an HTTP / WebSocket / Canvas back end for remote display. This would let you run a Qt app on your desktop natively and also serve it for your mobile devices to use from a web browser.

                I’m not sure the extent to which anyone has taken advantage of this, but it always makes me uncomfortable when people say ‘I don’t understand why this feature exists and so I’m going to remove it’.

                1. 4

                  Thanks. The last time I used Qt with the X11 binding on a Mac was twenty years ago. It’s still possible to compile an application with LeanQt so it works with X11 on Mac, as long as there is an xcb API on Mac (didn’t check). It would even be possible to statically link an application with both the xcb and cocoa integrations to make them selectable at startup, still without using plugins for this. Concerning the use of desktop applications on mobile devices I’m sceptical; usually the look and feel is so different that you have to implement different GUIs for desktop and mobile, unless it’s a very simple application.

                  it always makes me uncomfortable when people say …

                  Well, it’s a good criterion for weeding out questionable features; my experience with Qt spans twenty years and a wide variety of application areas including CLI, server and embedded; if it’s not immediately obvious to me that it’s an important feature, I take the liberty of throwing it out. People can continue to use the original Qt if they absolutely need the feature.

                  1. 2

                    I’m sure there are reasons why such a plugin system exists. In fact, every line of code in the Qt codebase ended up there for a reason and, thus, can be reasoned about.

                    What I will say, however, from the standpoint of a plain ol’ FOSS developer (me) who has the reasonable wish to distribute his plain ol’ QtWidgets program in a statically compiled manner - this is unreasonably difficult to achieve with Qt, and consequently I have spent/wasted many hours in this area … partly because of things like the plugin system, which complicate the build process a lot while such feature(s) (seemingly) gain me nothing.

                    I am excited about LeanQt for this reason; it aims to make Qt development easy for the majority while those with ‘exotic’ requirements can use ‘the real Qt’.

                    1. 2

                      I’m sure there are reasons why such a plugin system exists.

                      There was a time when people thought that a modular program must be composed of dynamic libraries, or even of components in different processes with marshalled calls; anything else would have been called monolithic; but fashion trends come and go, and with each generation of developers priorities change.

                      distribute his plain ol’ QtWidgets program in a statically compiled manner

                      That’s easy to do with LeanQt; but it’s also easy to put all Qt in a single shared library to appease LGPL if need be.

                      while such feature(s) (seemingly) gain me nothing.

                      So there are already at least two people who see it that way.

                  1. 3

                    Defense hasn’t loaded yet. Did it load for you?

                    1. 24

                      I suspect that’s why it has the satire tag.

                      1. 11

                        It will not load if your faith in the future of single-page applications isn’t strong enough. ;)

                        1. 6

                          This article requires at least 8gb of available RAM to load.

                          1. 3

                            I was able to interact with the components of the page that had loaded, so that’s a plus.

                          1. 2

                            I usually bash my keyboard. The resulting filename could be, for example: wegin4g0weg.png. After a while, my folder is full of such files.

                            When do I clean the folder up? When I bash my keyboard and then the rename fails because the file already exists.

                            :-)

                            1. 2

                              These people are building their server side service in Swift:

                              https://github.com/katalysis-io

                              ORM, JWT, ed25519 implementation, etc.

                              1. 1

                                Hmm, it seems somewhat weird to do these benchmarks with Flask+Sanic, as Quart is the closest asyncio-powered spiritual successor to Flask.

                                1. 10

                                  Simple solution, get your project off of Github. You are using someone else’s platform. Host your own gogs/gitea.

                                  • Not hosted by Microsoft
                                  • Faster Git operations (Github can get really slow at times)
                                  • Higher barrier of entry for contributors
                                  • Optionally hook it up to something like Keycloak

                                  This is less work than it sounds, and the benefits are huge.

                                  1. 9

                                    Simple solution, get your project off of Github.

                                    This isn’t necessarily so simple. GitHub have successfully established themselves as a centralised, even the de facto default, Git hosting service, and project discovery is a lot easier on GitHub than on other Git hosting services. I agree that the benefits of moving off of GitHub are enormous (and I myself have started hosting my personal projects on git.sr.ht where possible), but the reality is that the userbase on sr.ht is minuscule in comparison to GitHub, you can’t star repos, and you can’t follow other users to see their activity.

                                    That’s not to mention things like GitHub Sponsors, which for some maintainers might be the sole reason they’re able to keep maintaining their repositories, and GitHub Actions, which lowers the barrier to entry for, and smooths the experience of using, CI. The reality is that some maintainers might not have a choice but to use GitHub.

                                    The other thing is that I think this comment, along with many others on this article, misses this point from the article (emphasis mine):

                                    DigitalOcean seems to be aware that they have a spam problem. Their solution, per their FAQ, is to put the burden solely on the shoulders of maintainers.

                                    To be clear, myself and my fellow maintainers did not ask for this. This is not an opt-in situation. If your open source project is public on GitHub, DigitalOcean will incentivize people to spam you. There is no consent involved.

                                    While moving off of GitHub might alleviate the problem of spam PRs as a result of Hacktoberfest, it’s yet another solution that puts the burden on the maintainer to try to treat the symptoms, rather than addressing the root problem, which should be the responsibility of DigitalOcean.

                                    1. 4

                                      I agree. There are multiple things that can happen here, all of them positive:

                                      1. companies like Digital Ocean can be informed of the harm they’re doing. They can form relationships with the repos that are interested in this (for various reasons) and make this opt-in - and maybe provide help and tools to deal with the bad actors
                                      2. a critical mass of repos will make the step to move to another DVCS host, thus increasing diversity, and maybe pushing those hosts to add functionality that’s perceived to be lacking
                                      3. GitHub can provide better tools for dealing with low-effort “nontributions” (thanks @flaviusb for the coinage), such as rate-limiting bad actors, putting PRs in a “mod queue” to be dealt with asynchronously, and other stuff social media sites have dealt with for more than a decade now
                                      1. 5

                                        GitHub is the de facto standard, the userbase on sr.ht is minuscule

                                        I run a Gitea+Keycloak setup and got 100+ users within a few months. It hosts several projects. People actively sign up in order to contribute to my projects. This has worked great for us because the chance of low-quality contributions and/or spam issues is non-existent. I compare it to “Slack/Gitter” vs “IRC” - where we prefer IRC due to the (perceived) learning curve/difficulty … “skin in the game” comes to mind. It separates the beginners from the high-quality contributors.

                                        That’s not to mention things like GitHub Sponsors, which for some maintainers might be the sole reason they’re able to keep maintaining their repositories

                                        Sorry to say but developers chose to actively participate in a centralized ecosystem run by a mega corporation not because there are no alternatives but because they are lazy. GitHub Sponsors is not the only way to obtain funding.

                                        which lowers the barrier to entry for, and smooths the experience of using, CI

                                        There are only perceived barriers. Drone+Gitea is not rocket science.

                                        rather than addressing the root problem, which should be the responsibility of DigitalOcean.

                                        IMO the root problem is that some FOSS developers don’t realize they’re locked into an ecosystem for no reason whatsoever. They don’t care. They think self-hosting takes a lot of time. They want their little Github stars. Name your excuse; yet they complain when shenanigans like this happen. I am certain you will not agree with this post, but at least I have the peace of mind of being in control of my own community.

                                        1. 7

                                          I want to preface this by saying that in no way am I (or have I been) defending GitHub and what their attempt at becoming a centralised service has done to the FOSS community. Instead I’m trying to be a realist and point out some of the reasons why project maintainers might find it difficult to move away from GitHub, counter to your assertion that it’s “less work than it sounds”.

                                          I am certain you will not agree with this post

                                          I wouldn’t be so certain if I were you; I agree with some things you’ve said. However…

                                          not because there are no alternatives but because they are lazy

                                          Drone+Gitea is not rocket science

                                          They don’t care. They think self-hosting takes a lot of time. They want their little Github stars.

                                          Just because, in your experience, these things have not been difficult for you does not mean they’re easy for everyone. That’s my main point. Alternative sources of funding may not be easy for everyone to access, everywhere in the world. Maintainers might not have the time or energy to set up or learn to use other CI services. Self-hosting may take a non-trivial amount of time for some people, and costs money.

                                          All of this is not to mention that, for existing projects, moving their hosting to another service could be a major disruption.

                                        2. 3

                                          I just want to add an additional point. I think that the following things are reasonable things to ask a project author to either accept (if choosing to host a project on GitHub) or reject (if choosing to host elsewhere):

                                          • Website run by Microsoft
                                          • Git repository hosted by someone else
                                          • Barrier to entry for contributions is low
                                          • etc.

                                          However, I don’t think the following thing is (I don’t think it should have to “come as part of the package”, so to speak):

                                          • An external organisation will encourage users to spam your repository with low-quality or spam contributions for a month every year
                                      1. 1

                                          Funnily enough, today I started work on a small WebRTC project and this comes up on Lobste.rs. I’ll make sure to give it a good read. Thanks for the share!

                                        1. 3

                                          This is of great help to beginners looking into making QML applications with Qt5!

                                          I have 3 nitpicks:

                                           1. The intro/preface goes right into QML - to be expected from something called “qmlbook”; however, the title is also “A Book about Qt5”, and QtQuick (QML) is not the only way to create interfaces. A simpler way of developing a Qt5 application is by using QtWidgets. Beginners might not be able to tell the difference between the two.
                                           2. The author uses qmake in the examples, which is deprecated. The Qt Company is moving towards CMake.
                                           3. Qt6 is around the corner (releasing December 2020) and they have revamped QML significantly. I don’t believe it is backward compatible with Qt5 QML code (?). The book is about Qt5 so that’s fine I suppose.
                                          1. 1

                                            At work I’m doing a bunch of small~ish stuff this week. I need to split up some components in our long term storage solution for our metrics pipeline (M3DB). I also need to fix a couple of bugs in our logging pipeline and refresh a PR for filebeat that has been sitting there for a while due to my lack of time. Because of this, there are now a bunch of things to fix due to changes in the interfaces.

                                             Personally I’m working on a small side-project to scratch my itch of totally removing Google Analytics from my blog. I’m replacing it with a small tool that I’m building that doesn’t require any script to be added into the web page. Currently I can track referrers, geolocation and normal stats. On the plus side I’ve found out (thanks to this) that a lot of script kiddies really try to find a vulnerable wordpress/joomla setup on my blog 🤷‍♂️. Related to this, I’m also working on a small auth proxy for Grafana to integrate with Cloudflare Access.

                                            1. 1

                                              Guessing you are parsing access logs :)

                                              I also have made “my own Google Analytics” recently but took a different approach. On the page(s) in question I include a small JS snippet that will fire an XHR request to some endpoint where there is a Quart app running with a “catch-all” route handler.

                                              This way, I figured, I’ll only get the stats from visitors that have JavaScript enabled (read: real browsers). Every 5 min the data (IP, Referer, geo) is synced to some database.

                                              It’s a very simple program really, and prevents the use of Google Analytics. I can query my database and easily get, for example, a listing of most popular pages for my site in a certain time period.
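
                                              Roughly the shape of it, as a minimal sketch (the route, field names and in-memory list here are placeholders, not the actual code):

                                              ```python
                                              from quart import Quart, request

                                              app = Quart(__name__)
                                              hits = []  # in the real setup this gets flushed to a database every 5 min

                                              @app.route("/", defaults={"path": ""})
                                              @app.route("/<path:path>")
                                              async def collect(path):
                                                  # record the basics for every request that reaches the catch-all route
                                                  hits.append({
                                                      "path": path,
                                                      "ip": request.remote_addr,
                                                      "referer": request.headers.get("Referer"),
                                                  })
                                                  return "", 204  # nothing to render, the XHR just fires and forgets

                                              if __name__ == "__main__":
                                                  app.run()
                                              ```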

                                              1. 1

                                                Well … almost 🙂. My blog is statically hosted on GitHub Pages with my personal domain, so no server access logs. What I’m doing is relying on an edge worker running on Cloudflare to collect the data, including referrers and geolocation. Visualization is handled by Grafana.

                                            1. 1

                                              I used Pelican for this website, and it works well. Generates HTML fast enough, and has automatic reloading through inotify. I edit/create posts in VIM or a markdown editor.

                                              Using Node + CMS + SEO stuff seems rather elaborate if you just want a simple blog - however I’m quite the minimalist.

                                              1. 3

                                                Looks rather interesting, I’m going to have to try this one day. Absolutely despise the usual suspects (Ansible, Docker, Puppet, etc).

                                                1. 2

                                                  As someone who was recently asked to do a deep dive on Ansible, I’m curious, how would you describe the downsides of Ansible?

                                                  1. 2

                                                    Sorry for the late, late response to this, but here are my few complaints about Ansible:

                                                    • It’s really hard to figure out how to organize your project when you are first starting
                                                    • The dynamic inventories tend to fail in very non-obvious fashions (for example, if you forget to make the script executable or if it’s missing one of its dependencies you’ll get some bizarre error because Ansible will try to include the content of the script as if it was the actual inventory)
                                                    • Managing dependencies between tasks is hard if you plan to use tags to restrict the tasks that need to be run. Happy to give an example if you are curious
                                                    • Managing secrets is kind of a pain in the ass; haven’t tried Ansible Vault, but the documentation made it look harder than using SOPS

                                                    All in all, Ansible isn’t bad, and it definitely has served my company well, but the level of complexity introduced is high and you’ll end up writing a bunch of wrapper scripts if you don’t want to remember the 10000 command line flags you need to run any moderately complex scenario.

                                                    1. 1

                                                      Very interesting. I’m someone who observes ansible a bit from afar – I grew up in the fab + chef + libcloud era, but my DevOps team took over and switched to ansible + terraform, which are admittedly more solid tools for cloud automation. To me, the only real downside of ansible I could tell from studying it (aside from all the YAML-ese) is that its “push” model starts to slow down for big clusters and cloud footprints. But then I discovered mitogen for ansible and it seems like that’s actually becoming a solved problem, without the downsides of the pull model. In which case, it feels to me like ansible will stand the test of time due to ecosystem/network effects, but I could be wrong!

                                                1. 1

                                                  Holy shit. What is the Pythonic way?

                                                  1. 7

                                                    They’re all different use cases. There isn’t “one way to do it” since there are multiple “it”s.

                                                    1. 5

                                                      asyncio is an embarrassment; though it is the officially blessed library, the truly Pythonic way is to ignore it and use Trio

                                                      Here’s a post on some of the issues with asyncio: https://web.archive.org/web/20171206104600/https://veriny.tf/asyncio-a-dumpster-fire-of-bad-design/

                                                      And another one (by the eventual author of trio): https://vorpus.org/blog/some-thoughts-on-asynchronous-api-design-in-a-post-asyncawait-world/

                                                      I used to have some links to what made Trio great but seem to have lost them. The documentation is pretty good: https://trio.readthedocs.io/en/stable/design.html And in practice, I’ve spent unknown amounts of time debugging crazy asyncio issues because some task threw a StopIteration when it shouldn’t have and caused some other coro to fail silently, or whatever the problem of the day is. I’ve used Trio less, but these kinds of problems just don’t seem to happen.
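
                                                      For a taste of the difference, here’s a minimal sketch of Trio’s structured concurrency (toy names, not from a real project): a failing child task can’t disappear silently, the nursery cancels its siblings and re-raises.

                                                      ```python
                                                      import trio

                                                      async def worker(name):
                                                          await trio.sleep(0.1)
                                                          if name == "bad":
                                                              raise ValueError(f"{name} blew up")
                                                          print(name, "finished")

                                                      async def main():
                                                          # open_nursery() scopes the child tasks: when one raises, the others are
                                                          # cancelled and the exception propagates out of the async-with block
                                                          async with trio.open_nursery() as nursery:
                                                              nursery.start_soon(worker, "good")
                                                              nursery.start_soon(worker, "bad")

                                                      trio.run(main)  # the ValueError surfaces here (wrapped in an ExceptionGroup on recent Trio)
                                                      ```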

                                                      1. 4

                                                        I like Trio but the ecosystem around it seems to be in its infancy. I made a library for the Quart web framework that supports both the asyncio and Trio event loops. This library communicates with Redis so I had to get an adapter for both.

                                                        For asyncio, aioredis is an easy pick, however for Trio it was hard to find something. I ended up copying some unmaintained toy project into my project as-is since I could not find a better alternative (which is working fine so far, but def. not an optimal situation). This effectively splits the community in half.

                                                        Using Trio for my professional work is a hard sell because of that reason, even though I reckon Trio is probably the better implementation.

                                                        1. 1

                                                          Yes, most libraries are written for asyncio, which is kind of a shame.

                                                          However, trio-asyncio lets you use them from inside your trio app: https://trio-asyncio.readthedocs.io/en/latest/

                                                          It’s kind of a nightmare to port a large existing app from asyncio to trio, and probably isn’t worth it, but I think the existence of trio-asyncio means there are few reasons to start a new asyncio app.

                                                        2. 1

                                                          asyncio is a worse-is-better API. It’s good enough to be productive despite its warts and that’s going to require any replacement to keep a substantial compatibility layer or else stay in relative obscurity as most other better projects suffer when a worse-is-better API or project becomes popular.

                                                          1. 6

                                                            asyncio is a worse-is-better API. It’s good enough to be productive despite its warts

                                                            I’m going to try not to sound too bitter, but asyncio is not even worse-is-better

                                                            It’s not simple, for either implementers or users. It’s not correct, given how it’s essentially impossible to cleanly handle cancellations and exceptions. And it’s not consistent: it’s a collection of layers that have slowly accreted, and occasionally you have to drop down and deal with the raw generators.

                                                            that’s going to require any replacement to keep a substantial compatibility layer or else stay in relative obscurity

                                                            I would have agreed with you a few years ago; the things you’re saying are generically correct, but now we have Trio. Trio is simpler both in implementation and in programming model; it just makes sense.

                                                            It also has a lovely compatibility layer: https://trio-asyncio.readthedocs.io/en/latest/

                                                            1. 2

                                                              I’m going to try not to sound too bitter

                                                              It’s okay. I avoid asyncio Protocols and transports like the plague - they’re just too fragile for me to handle.

                                                              Thank you for the trio asyncio adapter. I wonder if I can integrate it into my sanic + uvloop + aioredis + asyncpg project

                                                        3. 1

                                                          Awaiting bare coroutines in the simplest procedural case, using asyncio.get_running_loop() to get the event loop reference and loop.create_task(coroutine) to make Tasks to schedule in parallel for fan-out operations. At a previous job, I actually plugged a ProcessPoolExecutor into loop.run_in_executor as the default executor and used it to do truly parallel fan-outs for a project that would index TomTom product data with an API key, determine the new releases, and download the 7z archives for map shapes in parallel, then unpack and repack them into an S3-backed tar of shapefile tar.gzs and index the offsets into a database, so it was trivial to fetch exactly the country/state/city polygon needed on the fly for use in our data science and staged polygon updating. It was pretty nifty actually, considering that archive packing was CPU-bound, which necessitated process concurrency, and my hack allowed me to develop single-process first (using a ThreadPoolExecutor descendant class which just scheduled coroutines in parallel) and then go full multiprocess for the things that ate CPU.
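
                                                          A stripped-down sketch of that fan-out pattern (function and file names are made up, not the actual project code):

                                                          ```python
                                                          import asyncio
                                                          from concurrent.futures import ProcessPoolExecutor

                                                          def repack_archive(path):
                                                              # CPU-bound work (unpack the 7z, repack, compute offsets) runs in a worker process
                                                              return f"offsets for {path}"

                                                          async def main(paths):
                                                              loop = asyncio.get_running_loop()
                                                              with ProcessPoolExecutor() as pool:
                                                                  # fan out: one future per archive, running truly in parallel across processes
                                                                  futures = [loop.run_in_executor(pool, repack_archive, p) for p in paths]
                                                                  return await asyncio.gather(*futures)

                                                          if __name__ == "__main__":  # required for the process pool's spawn start method
                                                              print(asyncio.run(main(["maps-a.7z", "maps-b.7z"])))
                                                          ```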

                                                        1. 3

                                                          One thing that surprised me was the “Don’t Access Attributes” example, where findall() is faster than re.findall(). The runtime could (easily?) cache this lookup if it’s in a tight loop (or tiny scopes in general).

                                                          Some bookkeeping is required; i.e., if the object attribute changes, invalidate the cache. I don’t know how Python works in the background though, so this optimization might not be worth the extra code.
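
                                                          If I understand the trick correctly, it’s just binding the attribute to a local name before the loop, something like:

                                                          ```python
                                                          import re
                                                          import timeit

                                                          text = "the quick brown fox " * 50

                                                          def attribute_lookup():
                                                              for _ in range(1000):
                                                                  re.findall(r"\w+", text)   # looks up 'findall' on the re module every iteration

                                                          def local_alias():
                                                              findall = re.findall           # one attribute lookup, then plain local-variable access
                                                              for _ in range(1000):
                                                                  findall(r"\w+", text)

                                                          print(timeit.timeit(attribute_lookup, number=10))
                                                          print(timeit.timeit(local_alias, number=10))
                                                          ```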

                                                          1. 2

                                                            At its core, CPython is a very simple stack-based virtual machine for Python bytecode. The main loop, for example, is literally just a switch statement with one case per bytecode instruction.

                                                            There are some optimizations in there – the compiler can do some constant folding, the bytecode interpreter knows some pairs of instructions are likely to occur together and can handle them in ways that make your processor’s branch predictor happy, and so on – but not that many. As far as I’m aware, its simplicity is a deliberate choice.
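
                                                            You can see that machinery directly with the dis module; each line of output is one instruction for that switch statement to dispatch:

                                                            ```python
                                                            import dis

                                                            def add(a, b):
                                                                return a + b

                                                            dis.dis(add)
                                                            # roughly (exact opcode names vary between CPython versions):
                                                            #   LOAD_FAST    a
                                                            #   LOAD_FAST    b
                                                            #   BINARY_ADD      (BINARY_OP on 3.11+)
                                                            #   RETURN_VALUE
                                                            ```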

                                                            There are alternative implementations of Python (like PyPy) which do much more advanced things and can offer corresponding performance gains, but generally people who are choosing Python are making a tradeoff of CPU time for developer time, and if they hit situations where they absolutely must get code to run faster, they’ll investigate either small, targeted optimization tricks (like aliasing a nonlocal name accessed repeatedly in a loop, or otherwise rewriting the code not to access a nonlocal from inside a loop) where available, or just rewrite performance-critical sections in C.

                                                            If you want more detail, I gave an intro talk at PyCon a couple years ago (video, slides since the A/V setup had to use a degraded version). The slide deck has pointers to some more in-depth materials.

                                                            1. 1

                                                              Thank you for providing extra info. I’ve added your video to my queue/playlist.

                                                          1. 2

                                                              If you’re familiar with Flask but want websockets I’d highly recommend FastAPI. I have a project that started off with Flask, but I really wanted a websocket for one silly, stupid thing. I wound up using it as an excuse to learn FastAPI, and by extension Python’s asyncio ecosystem. FastAPI has been a joy to work with, the community is incredibly friendly and helpful, and the documentation is great.

                                                            1. 1

                                                              If you want websockets and you’re coming from Flask, use Quart instead. It is literally Flask but rewritten from scratch, supporting asyncio.

                                                              FastAPI is geared towards developing a REST API (which it is absolutely great at).
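
                                                                For reference, a websocket handler in Quart looks almost exactly like a Flask route; a minimal echo sketch (route name made up):

                                                                ```python
                                                                from quart import Quart, websocket

                                                                app = Quart(__name__)

                                                                @app.websocket("/ws")
                                                                async def ws():
                                                                    # echo every message back to the client
                                                                    while True:
                                                                        data = await websocket.receive()
                                                                        await websocket.send(f"echo: {data}")

                                                                if __name__ == "__main__":
                                                                    app.run()
                                                                ```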

                                                              1. 1

                                                                I see - so it would depend on if you want to use server side rendering or not?

                                                                1. 1

                                                                  Yeah, pretty much.

                                                              2. 1

                                                                 I was considering using FastAPI but I was worried about how new it is. Have you run into any hurdles?

                                                                1. 1

                                                                   None at all. I’ve found the FastAPI documentation to be a lot easier to understand than Flask’s, but I think a lot of that may have been from increased domain knowledge by the time I got around to working with FastAPI.