I like some of Deno’s ideas but I don’t find this too convincing. For instance, I like capabilities and Deno running TS directly (apparently Node can now do this too), but it’s not enough for me to bet on a new-ish technology against Node. Deno KV is cool, but I usually need a database and would probably use SQLite or Postgres instead.
I don’t want to get locked into Deno either. It’s steered by a for-profit company and backed by Sequoia. They’re going to want their money back eventually.
Yeah I certainly don’t intend this to be a “we should all use Deno” post. More of a “we should be incorporating all the good infra ideas from the last 2 decades”.
That’s fair. I’m happy if Deno paves the way for these things to be implemented in other ecosystems.
Thanks for sharing your experience!
From the lead-in:
I was really hoping to see the final results of all that effort … what does it look like on Windows? Is it still native-like?
Microsoft doesn’t really make native applications for Windows anymore, so I’m not sure if it matters there.
Then I don’t get the point of “cross-platform” and “native-like” in the subtitle. The screenshots look very much like what I see on my Mac, but not anything like what I see on my Windows PC.
Perhaps it would be better to say on which platforms the app looks native-like. My expectation is that it looks native-like on at least two platforms, even if for some of those platforms it’s harder to nail down what “native” means.
TBH, it’s not really native on Mac either. There’s all sorts of subtle details that are wrong, even if it tries harder than most to look like it fits in.
I think a lot of people conflate native as in no-Electron with native as in using the system toolkit, so that things like input work right.
Hi there! Author here. Would love to know any quirks so I can try to improve it! Of course it’s never going to be exactly native-like, I talked about that in the “Lowest common denominator” section, but I think it’s still worth the effort when developing a cross-platform app using Qt.
I just downloaded it on Windows. Will give it a whirl; looks good!
One suggestion. I would never have known that I could use it on Windows from looking at the home page (https://www.get-notes.com/)! It is only because of your post that I knew to look beyond the home page above-the-fold screenshot and go to the Download tab.
If you put the platform logos (ChromeOS, Linux, macOS and Windows) you have on the Download tab somewhere above-the-fold on your home and pricing pages (the logos can be much smaller), I and others would know immediately that it is not macOS-only.
That’s a very good suggestion. I appreciate that, thanks!
Hi, author here.
I hope to soon work on a frameless window on Windows and Linux, which will give a look more similar to native apps (at least on Windows). Have you tried Daino Notes? What are your thoughts?
Frameless window in Qt is a landmine, but I recommend this library: https://github.com/stdware/qwindowkit
Yep, I plan to use this one!
Are there any good alternatives?
We’ve had great success profiling our C++ project build on Windows with cpp-build-analyzer and vcperf. On Linux and macOS you can use Clang’s -ftime-trace flame graphs. Of course it doesn’t save you from a slow death of a thousand cuts, but it does tell you where to apply the band-aid. I think we went from 17 minutes to 6 by splitting a couple of large headers into smaller ones.
I could never make include-what-you-use do anything useful.
Can’t think of one. Maybe modules in a decade or two.
I guess the alternative is to suffer unduly included headers.
That’s what we ended up doing
It’s worth remembering the problem that IWYU solves. It is intended for large projects where changing a header will trigger a big recompilation. For example, one of my most recent LLVM changes added a function to the clang target definition and this triggers a recompilation of 1,000 files. It’s really annoying when you change something in a file and it causes you to recompile dozens of files that don’t actually need that definition. LLVM used to be much more lax about includes and it was fairly common to get 500+ files recompiled as a result of changing something that only 20 of them actually cared about.
Most projects are much smaller. If you can do a clean build in under a minute on a typical dev machine, you do not have the problem that IWYU solves. In fact, you may have the opposite problem.
There’s a trade-off in C/C++ includes. If every compilation unit includes the same set then you can compile it as a precompiled prefix header and speed up every file. If every file includes a minimal set then you will parse some headers more than once, but fewer of them per file.
Unless you’re doing a lot of template and constexpr things, parsing include files is a tiny fraction of compile time. For C projects, it’s rarely even 1% of the total build time. You’re better off in C just including things and not worrying. In C++, the type checking for templates can take longer, but it’s rarely a large part of the build anymore. The extra delay for incremental builds is the real killer, and that’s a problem only for large projects.
For most small C++ projects, I tend to just put components in headers and have a single compilation unit for the entire thing. Any change is a full recompile, but that’s still only a few seconds. Wasting time and effort to save myself a few seconds is not worth it.
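To make that concrete, here is a minimal sketch of the layout (file names and contents are made up for illustration): the components are header-only, and a single .cpp is the only translation unit.

```cpp
// counter.h: a hypothetical header-only component. Declaration and
// definition live together, so there is no counter.cpp to rebuild.
#pragma once
struct Counter {
    int value = 0;
    void bump() { ++value; }
};

// main.cpp: the single compilation unit for the whole project; any edit
// anywhere means a full rebuild, but it is only ever this one file.
// #include "counter.h"
#include <cstdio>

int main()
{
    Counter c;
    c.bump();
    std::printf("count = %d\n", c.value);
    return 0;
}
```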
Well, for smaller projects (that are usable in CLion) it does gray out unused headers. I don’t know if this is part of CLion or whether CLion is also shipping a version of IWYU.
the zig programming language
Looks interesting! Might be a good idea to add some logic to the query tags: what if I want to browse Go-related jobs that do not belong to the “blockchain” category?
Will definitely add this, I also tend to avoid crypto companies. Exclusions can be good for tech/languages that people don’t want to work with as well.
The title says tech-focused, but the first N entries for the UK are bartending jobs.
I’m still working on a classifier to strip out those jobs, but it works best if you add a job title or keyword for the time being, e.g. Software Engineer in UK.
Fair feedback though, thank you for taking a look.
Hey, if it worked for jwz…
Not to discourage you, but there are too many job boards these days.
Yours seemed potentially interesting, but the options locally (Hamburg) seem pretty limited. I wish you luck regardless!
I think your current location filter is searching for where a job is located (which is good), but it doesn’t seem to account very well for states or for companies hiring elsewhere in the country/continent/globe.
If I could wish for more filters in general (not necessarily for you, but for any job board):
Being able to filter on where a job is located and where they are hiring from would be nice. (Justification: for visa reasons I can work in Germany for any German company, while being hired by a company in another country is a more complex process.)
A language-requirement filter would also help in cases like mine.
Full-time/part-time/contract/etc. filters, and maybe even a permanent vs. fixed-term contract filter, are also generally much appreciated.
Thanks for the feedback!
I agree the local options are limited for now - I hope to improve this as I add more data. Adding “Remote” to the location box can be used in combination with locations, e.g. Remote in Germany, but I should make that clearer. As you said, showing jobs near the selected location geographically could be useful too; right now it only uses what you select.
The filters you mentioned are a great idea too. I actually have plans to add some of these, but I’m still prototyping ways to detect these with either an LLM or classifier, although some can be detected with text-search.
All of these are good. A filter to remove positions with unlisted compensation ranges would be good, too.
From whence comes the data on these jobs? Do companies post jobs to algojobs specifically, or are you scraping other sites?
At the moment, all content is scraped from companies’ career pages. Right now, I’m tracking ~12k companies, but hope to add more in the future.
How do you compile the company database? This is important for the user to know.
How does the scraper work? Do you use machine learning to extract the job description from the webpages?
After getting frustrated with existing job boards during my last job search, I set out to build a simple one that focuses on ease of use & privacy. This meant:
No login or data collection (except for anonymous analytics)
Up-to-date data
A fast/easy browsing and filtering system
Save & export jobs to CSV
Although there are still many features I want to add (mostly around filtering/searching), I wanted to share this here to see if it’s useful to others and gather feedback.
Thanks!
It seems like the main issue with ASAN here is that it’s built to work with malloc/new and palloc is throwing it off somehow. I’m curious how memleak.py manages to find the leak.
Small nit: After this part, the code block already has the change you introduce, which threw me off a bit:
“The function PostgresMain() in src/backend/tcop/postgres.c has a giant loop that looks tempting.”
The thing is that palloc memory does get freed when the memory context ends (ending the memory context is a function call). So by the end of the program it looks ok no matter how much you palloc. But if you malloc without free, and you keep doing that, that memory will never be freed and valgrind and lsan will catch that.
I simplified in the post a bit. You can manually intervene with gdb or by adding new code to trigger leak checks (for valgrind or lsan) before the end of the program. I just didn’t love that idea.
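To make the palloc/malloc contrast above concrete, here is a minimal sketch against PostgreSQL’s memory-context API (the function and the context name are made up for illustration; this is the C API, written in a C-compatible style). Everything palloc’d into the context is released by the single MemoryContextDelete() call, so only the bare malloc is left for valgrind/LSan to report at exit.

```cpp
#include "postgres.h"          // PostgreSQL internals: palloc, MemoryContext, ...
#include "utils/memutils.h"    // AllocSetContextCreate, MemoryContextDelete
#include <stdlib.h>

static void per_request_work(void)
{
    /* Everything palloc'd below is owned by this context. */
    MemoryContext ctx = AllocSetContextCreate(CurrentMemoryContext,
                                              "per-request sketch",
                                              ALLOCSET_DEFAULT_SIZES);
    MemoryContext old = MemoryContextSwitchTo(ctx);

    char *scratch = (char *) palloc(1024);   /* note: no pfree anywhere */
    (void) scratch;

    MemoryContextSwitchTo(old);
    MemoryContextDelete(ctx);                /* ...and it is all released here */

    /* This, by contrast, is what valgrind/LSan will flag at exit: */
    char *leaked = (char *) malloc(1024);    /* never freed */
    (void) leaked;
}
```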
https://github.com/iovisor/bcc/blob/master/tools/memleak.py
The comment at the top seems straightforward. It tracks allocations that haven’t been freed within a time interval.
This may produce false positives, but it did also identify my bugs (the fake one in this post and the original production issue).
Whoops, thank you! Will fix.
I recall seeing some LLVM arena allocators that call ASAN APIs to help it see into the arena’s sub-allocations, which are invisible to ASAN’s malloc hooks. They poison and unpoison memory ranges to help detect UAFs, for instance.
On the face of it, it sounds weaker than precisely tracking the allocations, but I haven’t investigated.
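For reference, the mechanism looks roughly like this (a toy bump allocator as a hedged sketch, not LLVM’s actual code): the arena keeps its unused space poisoned and unpoisons only the bytes it hands out, so ASan can flag touches of memory that was never sub-allocated or has been recycled.

```cpp
#include <sanitizer/asan_interface.h>   // ASAN_(UN)POISON_MEMORY_REGION
#include <cstddef>
#include <cstdlib>

// Toy arena that cooperates with ASan; purely illustrative.
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : base_(static_cast<char*>(std::malloc(capacity))), size_(capacity), used_(0)
    {
        // The whole arena is off-limits until something is sub-allocated from it.
        ASAN_POISON_MEMORY_REGION(base_, size_);
    }

    void* allocate(std::size_t n)
    {
        if (used_ + n > size_) return nullptr;
        char* p = base_ + used_;
        used_ += n;
        // Only the bytes actually handed out become addressable again.
        ASAN_UNPOISON_MEMORY_REGION(p, n);
        return p;
    }

    ~Arena()
    {
        // Hand the block back in an addressable state before freeing it.
        ASAN_UNPOISON_MEMORY_REGION(base_, size_);
        std::free(base_);
    }

private:
    char* base_;
    std::size_t size_;
    std::size_t used_;
};
```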
Makes sense!
Ah, this is pretty handy. Thanks for your reply and explanations.
It’s not staging, it’s cleaning!
Needed to happen anyway, this was a good kick to get a superficial start. In retrospect I wish I’d taken a “before” picture (I have nothing to hide!), but you’ll just have to take my word that it doesn’t look like this 95% of the year (see baskets).
Desk
This is honestly not super ideal; I got this desk as a hand-me-down because my SO needed the working space of the larger, prettier desk (see below). I don’t love that the rounded corners and posts make it easy for things to fall off, and the wicker cabinet doors are practically useless for both my brain and my space (out of sight, out of reach, out of mind). But we all make do sometimes[1].
For example, my desk used to be vry sml.
2022
In 2023 I was using this desk that my SO stripped, stained, and cleaned the brass on – it’s gorgeous, and I’m shocked I don’t have any photos from when I was using it. I could take one of it now, but it’s not “mine” atm – I do have an older photo from when it was first finished (the flecks have aged out with a few more conditionings), but it isn’t even from after I settled into it:
2019
no screenshots because it’s just default KDE, i used to make rainmeter plugins but that time is past.
1: That, and I bought a PRS I can’t actually afford. I’m looking for work, but my SO assures me we won’t need to let go of it :p
On a related note, I’m currently holding my maternal grandfather’s ~1966 Bavarian Framus Star-Bass 5/150, the kind the Rolling Stones would have toured with IIRC, which (having originally come in Sunburst Rot and been spray-painted by a prior owner) he stripped, carved, and painted. It’s got a nasty split across the back (I’m talking horizontal), and I hope to have funds someday to see it brought back to life. A set of cleats and extra-light strings should do it, but the extent of the damage (and the thinness of the wood) is going to require professional attention, and as mentioned, I’m in increasingly dire straits and won’t be able to see to it anytime soon.
Happy to see a KDE default wallpaper ;)
my level of attachment to my current desktop stack is so low that i’ve just lived with a.) an incorrect system time, and b.) this for several weeks
entirely my fault that it’s like this xD
and sometimes I tell people, “this is the kind of problem i wouldn’t be having if i didn’t know how to fix it”; i’ve just had other things on my mind :p
Haha, well the default KDE wallpapers are quite good, so there’s less reason to change them anyways!
It’s nice to see my two main interests (programming and guitars I’m not technically able to afford) converge on lobsters! That bass is beautiful and I hope things change such that you’re able to get it restored soon.
I think your current desk is actually really great aesthetics-wise; too bad it’s not practical for you. I have also come to appreciate doors like that to keep dust from accumulating.
Is that a Sofle keyboard? What’s it like to use?
Looks like a Lily58. I have a Sofle and it’s great, btw. I haven’t felt the need to shop for a new keyboard in a couple years now :-)
Yep @antlers mentions that in another comment. I’m currently building a sofle and looking forward to using it. It will be interesting to see how it compares to the Moonlander and the Ergodox that I currently use.
I’m curious about this comment:
“but unfortunately the result is unlikely to be performant enough to use, due to everything going through Qt’s custom touch stack”
Is there something about how touch processing gets offloaded in iOS? Or did the author simply mean “it’s unlikely to be as polished”? What specifically would prevent a custom implementation from being as performant here as the native one?
In this section, I’m talking about the possibility of implementing a QML ListView yourself using other QML components like the MouseArea. This is rather inefficient because you’d need to listen to lots of events to determine whether, e.g., the user is swiping up/down. When you write this logic in QML, it’s executed on the Qt JS engine, which is not very efficient.
One can also implement QML components in C++ and make them accessible to QML, but I believe the trouble here is making them feel native and also making them efficient. It shouldn’t be impossible to make touch processing as efficient as native code, but there are likely many optimizations Android/iOS has that you’d have to implement yourself. Plus, on Android, if your code is written in C++, you have overhead due to JNI calls.
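For anyone curious what that route looks like, here is a hedged sketch (not from the article; the class name, module URI, and QML file are made up): a QQuickItem subclass handles touch events in compiled C++ and is registered so QML can instantiate it.

```cpp
// Sketch only; build with moc enabled (e.g. CMake AUTOMOC).
#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQuickItem>
#include <QTouchEvent>
#include <QUrl>
#include <QtQml>

class SwipeTracker : public QQuickItem
{
    Q_OBJECT
public:
    SwipeTracker() { setAcceptTouchEvents(true); }   // deliver touches to this item

protected:
    void touchEvent(QTouchEvent *event) override
    {
        // Swipe detection / velocity tracking runs here in C++,
        // instead of in per-event JS handlers on the QML engine.
        event->accept();
    }
};

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Makes "SwipeTracker { }" usable from QML under the made-up module URI.
    qmlRegisterType<SwipeTracker>("MyComponents", 1, 0, "SwipeTracker");

    QQmlApplicationEngine engine;
    engine.load(QUrl(QStringLiteral("qrc:/main.qml")));
    return app.exec();
}

#include "main.moc"   // needed since the Q_OBJECT class lives in this .cpp
```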
Good read. You can also take a look at Kotlin Multiplatform with Compose Multiplatform (JetBrains has a multiplatform fork).
https://www.jetbrains.com/lp/compose-multiplatform/
Thanks! I’ll definitely give it a try. Curious to see how it performs on iOS, although it seems to still be in alpha there.
I am currently using it in one of my projects, which I have published on the Play Store and App Store, in case you want to check it out.
There are occasional performance issues, but not consistently, and it’s mostly smooth.
https://github.com/msasikanth/twine
Nice! I tried on iOS and it’s quite responsive/performant. The list view also feels very natural.
You drive yourself insane by worrying about hitting the exact sweet spot between image quality and file size. In reality it doesn’t matter at all excepting very unusual circumstances.
Take this blog post, in which the author complains about the quality of the sample images. I cannot tell the differences between them and I was looking. Are most people going to study the images of the average blog post carefully? Not at all, especially if they are browsing on a small 6 inch device. The differences are going to be imperceptible and all you are doing is wasting your time caring about it.
In theory you could serve higher-fidelity images to browsers with large HiDPI screens, but in practice that is usually not worth the effort either.
People aren’t just using WebP for the average blog post. They’re writing CDNs that recompress files as WebP, damaging them in the way the author describes.
I think my point still stands - the damage is unnoticeable for most people using most devices.
Perhaps I would get upset if my expensive product shot didn’t look exactly perfect on my showcase website, but the degradation from recompressing the image is well down the list of things that will actually cost me customers, and the reduction in bandwidth may be a good tradeoff.
The Guardian uses WebP massively for instance, and I can tell very easily. The photos are butchered. Sure, it loads very fast, and sure, most of their articles use photos as an illustration only, and for news material, it doesn’t really matter, but quality (or lack thereof) is very clear to me.
Very much yes that you can spin indefinitely looking for a balance; for this person the answer is probably closer to “make all the files big” than “make all the files small”. They land roughly there at the end with the suggestion of JPEG q90 for artists showing their work off. They care and the bytes don’t cost too much even if most don’t notice the issues it avoids. (Fancy HTML and CSS could serve different versions for different devices, if needed.)
I worked on adding automatic resizing and JPEG compression to an email workflow, and I’m certainly not an image person but I’d also try some examples and run into this or that “problem” file where the settings that work for most other stuff have issues with this one. Definitely the sort of problem you can spin on.
Incidentally, JXL’s reference encoder had an interesting idea: target an amount of perceptual distance from the original rather than an amount of quantization. In theory, that could lead to something kind of like automatically boosting your JPEG quality for problem images. I don’t know enough to judge how well it worked out, but neat concept.
I believe that according to testing done by one of the authors of JXL and some other people who work at the same company as him, JXL has the most narrow range of subjective quality scores for any given quality parameter, compared to other codecs. That is to say, if you want your images to all be at a certain quality or higher, it is easy to find a libjxl quality parameter that will give you that level of quality in all images, without being too wasteful with some images being much larger than they needed to be.
I couldn’t tell the difference either, being on mobile, but then I looked closer. The most noticeable part is the banding of the background. WebP seems to more readily produce a “semicircle of noise”, for lack of a better term.
There is some work on “JPEG restoration” to try to hide artifacts on the receiving side–Knusperli tries to address artifacts at block edges (maybe there are cheaper or better alternatives if banding is your problem, though?), and JXL’s EPF or AVIF’s CDEF can attack the ‘ringing’ around contrast-y features.
(One thing I don’t know is whether clever encoders like mozjpeg or guetzli would interact poorly with a nonstandard decoder; I suspect things would work out mostly alright, because the same basic priors about how JPEG noise looks should still apply?)
Anyhow, in a marginal case like this, I wonder if decoder cleverness could push the subtle banding in the q85 JPEG over the ‘not really a problem’ line.
That’s called “posterization”. I was looking for it, but I couldn’t see it on desktop. Maybe my browser was smoothing it out automatically or something.
I have a couple of approved-but-unmerged pull requests languishing for a number of weeks. It’s already slow enough on the merging front… especially if you are a sole maintainer (normally a maintainer would do a review, but without another, there’s no one to verify it).
It’d be a win for security, but a lot of smaller packages would never get any updates merged, I fear.
It doesn’t help that the tooling used is GitHub of all things, which basically only has toy workflows. It doesn’t and can’t answer the most important question for reviewers: what should I review right now?
This makes it very easy for things to drop off anyone’s radar. The stupid structure of review vs. merging also means that an unnecessarily large number of people and artificial delays are involved.
A lot of more experienced people in nixpkgs already understand that nixpkgs needs something like Gerrit, but I see no path to get there other than an eventual fork.
I also don’t like the pull request model… an otherwise ‘good enough’ patch gets mired in nits & other things that prevent it from moving forward. As a maintainer, you should accept the 80%-done patch & handle the last bit for the one that pitched in. I’ve had drive-by comments/requests-for-changes where, after I responded, I never got a pong reply, which stalled everything for weeks until I had to go search around for someone else gracious enough to finish that conversation.
But the project leaders have instead been working with Microsoft GitHub & getting further locked into the ecosystem (Actions, maintainer lists, GitHub IDs as identifiers)
I mean, fundamentally what made nixpkgs work is that it is accessible to the casual person who wants to work on something. Moving off GitHub at this point would destroy its most important value proposition.
I am not saying I disagree with you on the problems the GitHub limits create. But moving from GitHub to something else at this point would be a net negative IMHO.
Anything else just adds too much friction.
Btw, hello Tazjin, long time no see
I’m not involved in nixpkgs but the quantity of PRs seems very high, any idea how this is managed for other distros? Just need more maintainers? Or possibly more automation here?
It looks like there are maybe 2000 opened in the past ~6 months that are still open [1].
[1] https://github.com/NixOS/nixpkgs/pulls
Other distros have much lower throughput and some trusted maintainers of specific packages. On the extreme end of that, think of flyctl - they get a new version once or twice a week, each week that results in a new PR, sometimes with some minor build changes. A typical distro updates that twice a year, if they even ship it. Nixpkgs needs a weekly action from someone.
Offer was accepted on the land we put a bid on!! 🎉🎉🎉
This week will be spent on due diligence. A soil test for sewage is the primary item, plus maybe talking to a builder to get an idea of the cost of terraforming for the eventual foundation and driveway, and talking to the gas company about the cost of running a line off the main to our potential new property.
And don’t worry fellow nerds I already confirmed a local ISP offers gigabit fiber to the property 😁
On the opposite side, my SO found out yesterday she has a kidney stone and we spent our entire Sunday in the ER dealing with that. Bittersweet couple of days.
Work… I dunno probably something? Integrating new apps to our system. Whatever.
Congratulations! I hope the searches all go well. Though if it needs terraforming, I see why you didn’t specify what it was in the south-east of!
That’s an exciting thing to contemplate. If it’s out in the boonies, you might consider how often and for how long the power goes out. Then you get into the rathole of alternative power sources, which I had to do for my cabin in the woods. (1) A manually started generator: a pain, plus you need a fuel source. (2) An auto-start generator on natural gas or propane is the low-maintenance way to go. (3) Then solar: do you get battery backup, can you even get it at your place, etc.
And water. Whole new things to learn about and obsess over.
It’s actually closer to town than our current place, so definitely not in the boonies, and there is a large development going in nearby (NOT in our backyard), so we will be totally fine with utilities within a short time period, though they all sound like they are already solid. Our water is through the county as well.
And yep lots of new stuff to learn on top of the homeownership I’m already used to! Very stoked.
I looked for a plot of land a few years ago and getting fast wired internet wasn’t an option where I was looking. Satellite internet could overcome this so that wasn’t a deal killer for me.
Being out in the country means you are not connected to a sewage system, so you are on your own with a septic system. I’m not crazy about that, but I could manage that.
What ultimately killed the search for me was having to rely on a well for water. There’s no guarantee the attempt to dig a well will succeed, there’s no guarantee it won’t dry up, and the water you do get may be nasty.
On wells, I don’t know much about it, but it all really depends on what rocks you are drilling into and where the water is coming from. Local knowledge is king here, and there are unlikely to be more knowledgeable people around than experienced drillers and local/state authorities (ie, the people who do the paperwork required for the well). Anywhere you are going to drill a well is likely to have a bunch of other wells within a couple dozen km, albeit with varying depths and ages that may confound things.
The way to think about groundwater is basically as a layer-cake of different wet sponges, potentially separated by layers of plastic or some other impermeable material. You poke a hole down through the layers, and water leaks out and pools at the bottom, then you stick a straw down it and suck it up. These days the straw (aka well casing) is always(?) cemented to the sides of the layers it’s going through, so you only get water from the bottom layer.
So really you need to know your reservoir (the sponge-like rock you want to suck your water out of), and your aquifer (the actual water in it, where it’s coming from, where it’s going, etc). So to have a well succeed, you need to know where to drill to hit the reservoir and know that there’s actually water in it and where it comes from. In many areas with simple layer-cake geology this is pretty easy, but in others it’s trickier. For example, many places in the US northeast are less a layer-cake of sponges and more a random jumble of different-sized impermeable blocks, with sponges filling in the gaps. Drill into the middle of one of the blocks and stop there, and there’s no way water can get into your well from the surrounding area. Drill a few hundred meters over and hit one of the cracks between the blocks, and you can have all the water you want. This is the sort of stuff that local drillers will know better than anyone. The annoying times are when you think you know what you’re going to be drilling into, but there’s some weird unpredictable local wibble that screws you up in just this one spot. A change in the structure of the rocks, a change in the permeability of the reservoir, etc.
…okay after re-reading what I just wrote and the fact that I started it off with “I don’t know much about it”, which is still very true… uh yeah, your decision to just shrug and walk away is pretty reasonable.
Or you can hire a professional “dowser” but they may just be lucky.
If they’re “good” then they know a fair amount of the local geology/contextual info just by osmosis; watching lots of wells being drilled in different places and internalizing what works and what doesn’t. If not they’re self-deceiving at best. If they had any real ability to sense what was below the surface, oil companies would be hiring them for millions of dollars per job.
Agreed on all counts! We are looking into the sewage vs septic system question currently (it’s a contingency on the offer!). This property has no wells; it’s all on county water, so that’s covered, and I have already confirmed high speed and not just satellite, which was another dealbreaker for me. I’m feeling pretty lucky so far, to be honest!
I live in a village in Bavaria, and the taps in the house have to be connected to the official water line. Nobody wants to mess with the drinking water here. The water comes from the Wasserzweckverband’s multiple wells all over the county, and in Germany you can drink tap water; it is the most checked thing ever and normally has no chlorine, and never fluoride, in it.
That said, the cost of water grew quickly and the fees for waste water treatment even more so, as it is billed proportionally to your water consumption. Bad if you need water for the garden… you pay for waste water treatment on that, too. Ground water is very deep down, the house is on a clay high bank, and no one wants to drill in clay.
So we tried an 8.000 l rainwater tank to water the +1.100 m² garden, along with a second-hand electric pump with inverter, pressure vessel, water gauge, electricity gauge, timer, water filter etc., all from the local Kleinanzeigen (Kijiji, classifieds, Craigslist). After that year we knew that the pump needed 65 kWh, and we used 61.000 litres of rain water.
=> no well needed for the garden
=> no white crust on the clay & loamy soil in hot summers, no salination
=> tomatoes and cucumbers grow and taste so much better with rain water
Perhaps ask someone who is familiar with your climate zone how big the catchment area and tank have to be. Second-hand and DIY is king for getting close to the break-even point.
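For a rough sense of the sizing maths (numbers assumed for illustration, not taken from this comment): 100 m² of roof in a region with 800 mm of rain a year catches about 80.000 litres, of which maybe 90 % actually ends up in the tank, so roughly 72.000 litres a year. That would comfortably cover the ~61.000 litres used above, and the tank itself then only has to bridge the longest dry spell rather than store a whole season’s demand.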
Also, in any irrigation system you will get leakage from stupid things like too-cheap hose clamps, or black HDPE pipes which get hot and soft in the sun and slip off the hose nipples. I found a tool for doing wire clamps on YT, reclamped everything with stainless steel wire, and now we need less water and the neighbour’s basement is drier than before.
As the municipality here has started checking for cisterns to collect fees (presuming that you’re cheating and flushing the toilets and running the washing machine with rain water), we tried a Trockentrenntoilette (OMG, German agglutinated words… it is a dry composting toilet with urine separation), which is surprisingly good. No odours whatsoever (there is a tiny electric fan sucking the stink out of the bowl), no blocked waste water pipes, and you have to empty it every 5-7 days, but it is less smelly than a cat’s litter box (really, no odour) - which is because the urine is caught separately.
Again saved on water.
And in this hot year we needed only 46.000 litres of garden water, because we started mulching the garden beds. Mulching is well worth the effort; you save time and money on watering and tilling.
Again saved on water.
You will for sure need a reliable source of water. But with slight changes to your habits you might not need that much water, and certainly not the expensive water from the utility pipe for everything.
But they still are. Everything here still relies on context of the language, business problem, codebase, team, and company you’re working with.
Also, as another commenter points out, many of your solutions add potentially unnecessary code, classes, and abstraction.
Related: https://www.computerenhance.com/p/clean-code-horrible-performance
I’ve been on one call where a maintenance window got backed out specifically because it dragged out too long and we decided to throw in the towel rather than risk fatigue-related mistakes.
This is one of the downsides of not having scheduled deploys and maintenance windows…you lose the ability to say “Hey, look, we have a deploy tonight, don’t get too fucked up at the bar. Hey, look, in two weeks we have a big migration, please make sure you’re well-rested ahead of time.”
This is also why I’m a huge fan of runbooks for ops; you basically frontload the executive functioning/decisionmaking to a known good time, and then you have a greater margin of error at runtime–the dumbest code I’ve ever written was when I was too tired to pop up a level and reorient, but energetic enough (the borderline mania of the sleep-deprived) to keep trying different stuff.
(It’d be a bit interesting to also see the answers for high, stoned, tripping, or what have you–I’ve known folks to do hot work while under the influence of all kinds of interesting substances.)
Oh god yes, I’ve been there many times. You spend hours tweaking and tuning and trying different things, often changing multiple things at the same time (which is almost never wise). And then the next day after a proper rest you need 5 minutes and come up with the “obvious” fix.
Runbooks are great. If you can make it a testable script, even better.
I usually like this because it makes the code more readable and understandable, but you often have someone come along and “optimize” it, or reject it because it could be ‘optimized’ further.
Was it the owner who told him he wasn’t working hard enough, or some manager? He did a lot of good, but that part bothers me.
It was during his review, so most likely a manager.
Sounds like a manager, and more evidence for the saying that people don’t leave bad companies, they leave bad managers.
The previous programmer being out of contact is a red flag.
After ChatGPT is sufficiently discussed online, and the next iteration contains these discussions, you will also be able to do a few prompts in succession like: