Threads for Hultner

  1. 10

    I suppose it’s worth pointing out that Optional[T] is simply syntactic sugar for Union[T, None].

    1. 6

      Or with the new syntax simply None | T
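
      For illustration, a minimal sketch of the equivalence (the | syntax needs Python 3.10+; the function names here are made up):

        from typing import Optional, Union

        # Optional[str] is exactly the same type as Union[str, None]
        assert Optional[str] == Union[str, None]

        def greet(name: Optional[str]) -> str:
            return f"Hello, {name or 'stranger'}!"

        # Python 3.10+ (PEP 604): the same type, written with the union operator
        def greet_pep604(name: str | None) -> str:
            return f"Hello, {name or 'stranger'}!"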

    1. 4

      We do one day per week, affectionately called “bugday”. Bugs are tracked with GitHub issues, reported by staff, triaged by engineers and leads, then split up for bugday. When you are done with your split of bugs, you can return to project work, which we manage in a Notion board per Iteration (similar to Trello). If bugs are gnarly enough, they become iteration projects.

      1. 1

        That sounds like a healthy practice. Does it work well in reality?

        I’ve commonly seen the triage step skipped, which is unfortunate because it makes working with bug reports so much more fun. Don’t know how many hours (days) I’ve spent fixing some badly reported bug.

        1. 4

          We’ve been doing it for 8+ years. It started when the engineering team was 3-4 people, and that team is now 25-30 people. It’s one of our most beloved processes (even if we are occasionally grumpy about a gnarly bug), so I’d say it works very well. Sometimes it requires some upkeep, e.g. declaring bankruptcy on certain bugs or classes of tickets, or tweaking the triage process. But, in general, we’ve kept up with “one day per week for bugs” and “keep the bug list small” for years, and I can’t imagine doing it any other way.

          You’re right that it’s important to make time for the triage step. On our teams, what happened over time is that a team’s weekly checkpoint meetings would happen on Tuesdays and bugday would be Wednesday, so each team would spend 15 minutes in their weekly meeting triaging/splitting/assigning bugs so that people could show up on Wednesday with clear assignments to whittle down the list.

          The other nice thing about bugday is that it creates a “pressure relief valve” with the support team, but with a reasonable-ish SLA beyond “hotfix”. That is, either something is a “hotfix”, or it’s a “bugday ticket”. If it’s a “hotfix”, there better be a damn good reason (like, a security problem or serious breakage). But, if it’s not in one of those clearly awful areas, it’s just a bugday ticket, which means it gets worked on after the ticket has had some time (a few work days, maybe even a week) to sit, be triaged and lightly investigated, and prioritized alongside other tickets. This avoids the must-fix-now, interrupt-laden, reactive culture you see on a lot of teams with widely-used products.

          Also, I’ll mention that as a product becomes very widely used, your ticket list starts to get dominated not by bugs, but more by “customer technical questions” (that can look like bugs on the surface). In this case, you really need to separate those two things, and have a technical support staff (perhaps even with light coding/debugging background) focus on the non-bugs, and also triage every ticket, and only escalate the “may-be-a-bug” or “definitely-a-bug” items for the core engineering team.

          1. 3

            Sorry if this sounds stupid, but what does your workload look like on the other four days?

            It seems to me that devoting one day per week to bugs serves to either:

            1. “Protect” bug-fixing time by setting a minimum (no less than one day per week); or
            2. “Limit” bug-fixing time by setting a maximum (no more than one day per week).

            I realize you’re saying this process works for you, but I don’t understand how. If a process artificially limits bug fixing to less than what’s necessary, the number of bugs will grow over time. If it artificially allocates more time than is necessary, the number of bugs (or bug severity) should fall until the day seems scarcely worthwhile. In either scenario, the amount of time to spend seems like it should be subject to a feedback loop, right?

            1. 2

              No, not at all – this isn’t a stupid question whatsoever!

              So, I think it’s both a “protect” and a “limit”.

              For “protect”, it’s basically saying, “We’re going to think about paying down the bug list every week for up to a whole workday per engineer, if necessary.” So it sets a weekly cadence, weekly reminder, weekly focus.

              For “limit”, it’s saying, “We’re not going to work on bugs in drips and drabs all week, because we want to devote 80%+ of our ‘good hours’ to development of long-term projects. We’re also not going to let low or mid-priority bugs derail the iterative development process and the projects we’ve already committed to – we’re not going to devolve into a reactive culture. Only ‘hotfix’ bugs can ‘skip the queue’. Otherwise, we calmly work on our committed projects in 4-week timeboxed iterations.”

              It’s also a timeboxing technique: is there some way to fix this bug within a single day, rather than making it a 3-5 day investigation and fix?

              The timeboxing part is perhaps the most useful tool, and also answers your other question. For bugs that can’t get fixed in a single day, we end bugday with a question: “Can this bug fix wait till next week’s bugday, where we will only get another full workday to take a crack at it?” Or, “Is this bug big, gnarly, or important enough that we should ‘promote it’ to a full-blown team/iteration project?” Sometimes, low-priority bugs reveal underlying issues that truly deserve to get fixed, but need to get scheduled into an iteration to get fixed properly. The bugday process prevents us from doing this “preemptively”; instead, we do it “just-in-time”.

      1. 8

        Having recently struggled with this topic (again), I think there are some inaccuracies in the article; it especially seems to overstate both setup.py’s and the PyPA’s roles in the ecosystem. E.g.:

        poetry has an interesting approach: it will allow you to write everything into pyproject.toml and generate a setup.py for you at build-time, so it can be uploaded to PyPI.

        …but a setup.py file isn’t a requirement for uploading to PyPI; Flit can happily build and publish distributions to PyPI without ever reading or generating a setup.py file, and those same packages can be consumed by pip without fanfare.

        What I want to see is an article that tries to unbundle what is meant by “Python Packaging” and which use cases are covered by various common tools (setuptools, pip, build, twine, virtualenv, venv, virtualenvwrapper, pip-tools, flit, poetry, pipenv, conda, pants, pdm, etc), because there are at least half a dozen distinct tasks in the Python development + publishing lifecycle, and part of the confusion is that you can mix and match tools to cover as many or as few of those cases as you want. In a way that’s almost antithetical to “There should be one– and preferably only one –obvious way to do it.”

        For example, build only concerns itself with creating distributions, and twine only with publishing those to PyPI. Both are complementary to modern Setuptools, which can be driven solely by a declarative setup.cfg file.
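
        To illustrate, a minimal sketch of the kind of declarative setup.cfg that can drive modern Setuptools on its own (the package name and dependencies are hypothetical):

          [metadata]
          name = mypackage
          version = 0.1.0
          description = A hypothetical example package

          [options]
          packages = find:
          python_requires = >=3.6
          install_requires =
              requests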

        …or if you wanted a unified tool to handle describing + building + publishing your packages, you could switch to Flit, which covers all three tasks.

        …but that still leaves you manually managing dependencies and environment isolation during development. If you wanted tools for that, you could reach for pip and venv, which complement either Flit or Setuptools + Build + Twine workflows.

        …or you could switch to something like Poetry or Pipenv which handle all of those tasks in a single, omnibus tool.

        Edit: Pipenv doesn’t handle building + publishing, but it does consolidate dependency and virtualenv management into a single tool.

        1. 2

          Poetry does handle all of these, but last time I looked at Pipenv, it didn’t really handle tasks related to packaging and publishing, but rather focused on consumption of packages.

          1. 2

            Holy cow, you’re right :) I could’ve sworn there was something, but nope.

          2. 1

            What I want to see is an article that tries to unbundle what is meant by “Python Packaging” and which use cases are covered by various common tools

            Mentioned in another comment, but I made an attempt at this a while back.

          1. 38

            To me, this really drives home the need for language projects to treat dependency and build tooling as first-class citizens and integrate good, complete tools into their releases. Leaving these things to “the community” (or a quasi official organization like PyPA) just creates a mess (see: Go).

            1. 9

              100% agree. I recently adopted a Python codebase and have delved into the ecosystem headfirst from a high precipice, to find that it’s improved drastically since the last time I wrote an app in Python — 2005 — but it still feels like it’s in disarray relative to the polish of the Rust ecosystem and the organized chaos of the Ruby and JVM ecosystems in which I’ve swum for the last decade. I’ve invested considerable time in gluing together solutions and updating tools to work on Python 3.x.

              The article under-examines Poetry, which I find to meet my needs almost perfectly and have thus adopted despite some frustrating problems with PyTorch packages as dependencies (although PyTorch ~just fixed that).

              1. 5

                I also think poetry isn’t being considered enough. The article gives the impression that the author doesn’t have a lot of hands-on experience with poetry but is curious about it. I’d recommend further exploring that curiosity. I understand that it’s hard to cover everything in a short article like this. If you’ve got an existing project with a working setup, a lot of the points make sense and there’s no need to hurry to change your setup. But I wouldn’t really call it a fair assessment of “The State of Python Packaging in 2021”.

                From my point of view it’s clear that pyproject.toml is the way forward and it’s growing in popularity, especially considering it’s also required for specifying the build system with modern setuptools.

                As for setup.cfg requiring a setup.py with an empty setup(): that’s a half-truth at best. It’s true that PEP 517 purposely defers editable installs to a later standard in order to reduce the complexity of the PEP. But in practice a setup.py isn’t required if you use setuptools v40.9 or later, released in spring 2019. This is documented in the setuptools developer guide: if a setup.py is missing, setuptools emulates a dummy file with an empty setup() for you. If you build your project with a PEP 517/518 frontend you don’t need the setup.py. Having a static setup.cfg is a massive improvement for the ecosystem as a whole, since we can actually start resolving dependencies statically without running code; this benefit should not be downplayed.
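
                To make the build-system point concrete, this is roughly the PEP 518 table that goes in pyproject.toml for a setuptools-backed project (the version pins are illustrative):

                  [build-system]
                  requires = ["setuptools>=40.9", "wheel"]
                  build-backend = "setuptools.build_meta"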

                I get the feeling that the author wants to wait for a pipe-dream future where everything is perfectly specified and standardised before starting to adopt any of the new standards. I see this as completely fine and valid if you’re working on your own project, especially if you’ve already got existing working code. That said, in my opinion, I wouldn’t recommend it as the approach for everyone. I see it as necessary to start using the new standards on new projects so that we can move forward; if we always cling to the old way of doing things, progress will be hampered.

                I get the impression that the author is very knowledgeable and has plenty of experience in the area, and I see the article as reflecting the opinion of the author, which I respect but don’t fully agree with. I would love to have a chat with the author given the opportunity and hear more about their opinions. I’m also looking forward to reading the 2022 edition next year. It would be easy for me to contest some of the points here, but it’s not completely fair without a reply from the original author, where they’re given a chance to elaborate on and defend their choices.

                Full disclosure: I’m currently writing a book on the subject and I’ve researched the strides in Python Packaging quite heavily in recent time.

              2. 3

                just creates a mess (see: Go).

                It’s fair to say that packaging is a mess in Python but why exactly is packaging in Go a mess? Since 1.13 we have Go Modules which solves packaging very elegantly, at least in my opinion. What I especially like is that no central index service is required, to publish a package just tag a public git repo (there are also other ways to do that).

                1. 8

                  Yeah, Go is fine now, but in the past, when the maintainers tried to have “the community” solve the packaging problem it was a mess. There were a bunch of incompatible tools (Glide, dep, “go get”, and so many more) and none of them seemed to gain real traction. Prior to Go modules the Go situation looked similar to the current Python situation. To their credit, the Go developers realized their mistake and corrected it pretty quickly (a couple years, versus going on a couple decades for Python, so far).

                  1. 1

                    Thank you for the explanation.

                    Prior to Go modules the Go situation looked similar to the current Python situation.

                    Yes, I agree with you that the situation was similar before Modules were a thing. I was fed up with the existing solutions around that time and had written my own dependency management tool as well.

                  2. 4

                    It’s fair to say that packaging is a mess in Python but why exactly is packaging in Go a mess?

                    Not the original poster, but I think it’s because modules weren’t there from the start, and this allowed dep, glide, and others to pop up and further fragment dependency management.

                1. 22

                  Remember that for compatibility reasons Erlang will always use sv_SE.iso88591 encoding when running the re module.

                  As a Swede this fills me with patriotic pride.

                  1. 5

                    And as an “Erikson” I suspect ;)

                    But yes, we can at least have this one quirk considering how many of the American quirks we already have to deal with when handling “Swedish” data.

                  1. 2

                    It would be interesting to see what quality you’d get out of this with a faster lens, say an f/1.8 or lower. It does look good, but no better than, for instance, using a slightly older iPhone and a $20 NDI-compatible app for streaming; with a faster lens maybe it would be.

                    I have an EOS RP for streaming, which is great, but I’ve used the iPhone with the OBS-Camera app on the go for a more portable setup, which works better than expected. I don’t really use OBS for live-streaming/video calls but rather stream NDI directly to NDI Virtual Cam and use it as a standard webcam.

                    1. 6

                      I really like the concept of Kakoune; I’ve tried it a couple of times, and inverting the movement-to-action order is a great UX improvement. But unfortunately my muscle memory in vi is keeping me there; the differences are simply slowing me down too much. I would however love to see someone “steal” Kakoune-style editing in a vim extension. Visual mode is the closest we have, which I do use quite a bit, but it’s not quite the same.

                      1. 7

                        I might misunderstand you, but if muscle memory is what’s keeping you in vi, wouldn’t it also keep you from using such an extension?

                        1. 1

                          The upside is that such an extension could be used on an opt-in basis, e.g. by toggling it via a :ToggleKak command or as an additional mode which can be entered from normal mode. This would allow me to instantly switch back to normal vi mode, letting me get used to it gradually.

                          Additionally, I was thinking of an extension that keeps more of the original vim movements instead of keeping some and changing others. Word is the same in kak, but forgoing g and GG, for instance, is a massive hassle; I don’t recall off the top of my head what else was missing, but there were quite a few changes. These changes probably make sense if you start from a blank slate, and thus for a new editor, but as an extension to vi/m I’d rather see adherence to the old learned movement words.

                          Edit: Some things that I notice missing/changed at once when starting kak again and just trying to navigate a little bit in the project I’m working on right now:

                          • Folding, zc/zo
                          • Set line numbers, :set nu
                          • Go to line, :n where n is line number
                          • gcc/cs are popular plugins for comment and change surrounding, these are popular enough to be ported to vi-mode in other editors like vscode.
                          • At this point I’m going back to vi, because it’s unfortunately slowing me down too much to get real work done.

                          Now, I still love what kak is doing, and if I weren’t already a vim user a lot of these things would probably make a lot more sense.

                          1. 5

                            I found that the bindings that got changed from Vim were mostly an improvement in consistency, whereas the original Vim bindings are constrained by their legacy. For instance, in Kakoune the shifted versions of keys “extend” your current selection rather than move it. Thus, G is just g (which enters “goto” mode, including g for buffer begin, j for buffer end, etc.) that extends the selection rather than moving it, which is why you need to use gj instead of G to go to the buffer end.

                            Other than folding (which is not currently possible), your other missing features are actually available right now, so if you decide to give it another go, here are some pointers:

                            Set line numbers, :set nu

                            This is available as a highlighter, documented in :doc highlighters and the wiki page

                            Go to line, :n where n is line number

                            This is available with <n>g, from :doc keys

                            gcc/cs are popular plugins for comment and change surrounding, these are popular enough to be ported to vi-mode in other editors like vscode.

                            This is built-in with :comment-line and :comment-block commands, but not mapped to a key by default

                            I can’t blame someone much for not being able to discover some features – while the in-editor help from the clippy and the reference documentation with :doc are pretty great, it doesn’t have a good “user manual” like Vim’s that the user can discover features through. The wiki also helps but is not very well organized. TRAMPOLINE is a decent vimtutor replacement, but hard to find unless you know it exists.

                            1. 1

                              Thanks, that’s hugely helpful. Will for sure try out trampoline next time I give it a spin, I do love vimtutor.

                        2. 3

                          Similarly for me, kak simply isn’t available as ubiquitously as vi(m). I fear relearning the muscle memory would be a detriment in the long run, as I would still likely need to switch back to vi(m) fairly frequently.

                          1. 2

                            What might be left out about how commonplace vi(m) is, is the fact that there are vi(m) modes for A LOT of things: editors like VSCode, IDEs like the JetBrains suite and Visual Studio, emacs evil, etc., but most importantly all major browsers (Vimari/Vimium/Vimperator/Pentadactyl/qutebrowser/w3m/etc.), window managers (ultimate macOS for one), tmux, shells, scripting via s(ed), and more. Wherever these diverge from Kakoune there will be friction in daily usage.

                            Again, this isn’t criticism of Kakoune, just a note on how ubiquitously available the vi(m) keybinding set really is.

                            Additionally, I’ve worked with industrial control systems, often quite literally air-gapped (not uncommonly located in rural places without an internet connection) and running some flavour of BSD/Linux; vi is pretty much always available for some quick ad hoc configuration at a customer site, anything else, not so much.

                            1. 2

                              Yeah, this is also a factor for me, though less so as I have never been happy with a vim plugin/emulation layer.

                              1. 3

                                The one in VSCode is surprisingly good if you bind it to a neovim backend. Onivim is also interesting but more experimental.

                                1. 1

                                  Have any sources on the neovim backend? I use neovim as my daily editor and was unimpressed by VSCode’s vim plugin about a year ago, but using neovim as a backend might address my concerns.

                                  I’ve tried OniVim2, as I purchased a license way back when it was fairly cheap. Their current builds seem very promising.

                            2. 2

                              What distro are you using that doesn’t have a Kakoune package available? About a dozen are supported; it’s surprising to hear that the editor isn’t widely available.

                              1. 5

                                What distro are you using that doesn’t have a Kakoune package available?

                                Telecom-deployed clusters of RHEL VM’s.

                                1. 1

                                  Can you not use this RHEL package?

                                  1. 1

                                    First, there’s no el7 build; second, getting it there would be problematic at best (in terms of file copying).

                                2. 2

                                  Alpine Linux. I also occasionally deal with embedded ‘distros’ that don’t have a package manager.

                                  1. 4

                                    I can see a package for Alpine, I’ve installed it myself in Docker containers.

                                    In any case, it’s true that muscle memory is a big obstacle when trying to use one or the other. But when you switch over to Kakoune, it’s harder in my experience to go back to Vi bindings (for example when there’s absolutely nothing else installed on a random system).

                                    1. 1

                                      When I use vim nowadays (and I install kakoune anywhere I need to edit anything), I stick to only a very limited set of commands.

                                      1. 1

                                        The Alpine package is on testing/edge, which isn’t considered stable. I already intentionally stick to stock vi(m) configurations as much as possible to alleviate muscle-memory issues; switching seems like a step backwards to me, despite the overall UI/UX language being better.

                              1. 1

                                 As for mobile apps, I’ve been using Beorg and keeping the journal in iCloud; this way I get seamless syncing with no configuration.

                                 It works great for me, though I might be missing some features that I don’t need; I’m no emacs aficionado. org-mode is my sole use case and I use vi for everything else. I used to just keep a markdown file with checkboxes for the same purpose, but a good mobile app was what made me switch a couple of years back, and of course some built-in handling of timestamps for various things.

                                1. 1

                                  You might give emacs+evil a try, if you haven’t yet. I was in a similar boat to you (emacs for magit, and vim for everything else) and it turns out that evil is pretty rad (possibly the best vi-emulation layer in any non-vi I’ve tried), and the switch was relatively painless.

                                1. 4

                                   A measure which ties in with business goals is the speed of completing common tasks, for instance adding an endpoint to a common API. Of course this will be a bit misleading in the beginning, while the team(s) are getting up to speed with the new patterns and structure.

                                   Another measure worth looking at is the number of defects in newly produced code, as well as the time required to resolve said defects; a new architecture should ideally reduce the number of defects and make them easier to pinpoint/resolve.

                                  1. 1

                                     This is a good point! A clear way to see if you’re becoming more efficient. Just as long as the defects are reported properly and not fixed “under the radar”, but that would be a whole other problem.

                                  1. 2

                                     The Sinc looks nice for an ISO split. I spent a fair bit of time last year and at the beginning of this one researching high-quality, fully split, ISO-compatible keyboards after I had worn out yet another Microsoft Sculpt. I ended up ordering a Dygma Raise a couple of months ago; hopefully it will arrive in September if there aren’t any more delays.

                                    1. 8

                                         So I’m a bit lost here: is it macOS running on top of the Linux kernel, or is this just using Docker to sandbox KVM? If so, is there any benefit to using this over KVM directly?

                                      1. 2

                                        The latter… I think the “interesting” part is that it’s using something called “gibMacOS” to frankenstein the macOS build from Apple update files as part of the container build, rather than using an existing system image (which might conceivably be legitimate).

                                      1. 1

                                         I’m working on a training app focused on building muscle, logging data and measurements in order to show long-term trends and progress, and detecting and overcoming plateaus.

                                         It’s a fun side project where I can try out some new techniques I’ve had an eye on for a while, in a no-stakes manner, while also filling a personal need. It’s also a great way to see if any of these things could be suitable for a client further ahead.

                                        For the backend I’m using

                                         • PostgreSQL 12 (I know it’s not ready yet but I’m playing with some new stuff like the new generated stored columns)
                                        • Python 3.8
                                        • FastAPI for async, typed REST (with OpenAPI specs) & GraphQL API
                                         • pydantic, for lean dataclass-esque data validation and modelling
                                         • asyncpg as a fast async PostgreSQL driver for Python (an alternative to the much more common psycopg2 which I usually use)
                                         • Hypothesis property-based testing and “Swagger Conformance” testing

                                        And for the frontend I’m planning to use

                                        • App
                                          • React Native App, focus on iOS
                                          • Health kit integration
                                          • Offline use, online sync
                                        • Desktop/web
                                          • React (maybe next.js), rest is TBD

                                         So far I’ve built the data model, designed a draft API, implemented the model in Postgres (it’s fully functional and usable; I’ve been using Google Sheets/Excel as a frontend in the gym until the app is done), and modelled my workout plans as a YAML format which can be added to the database (no API for this yet); the app is plan-agnostic to make it more reusable. I’ve also started on a CLI using click, which does make usage of the async driver a little bit more interesting since click isn’t async (not a problem though).
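
                                         As an illustration, a minimal sketch of the asyncpg pattern (the DSN, table, and column names here are hypothetical, not from the actual project):

                                           import asyncio
                                           import asyncpg

                                           async def main() -> None:
                                               # Connect to a local PostgreSQL instance (hypothetical DSN)
                                               conn = await asyncpg.connect("postgresql://localhost/training")
                                               try:
                                                   # asyncpg uses $1-style placeholders rather than psycopg2's %s
                                                   rows = await conn.fetch(
                                                       "SELECT exercise, weight_kg FROM sets WHERE workout_id = $1",
                                                       42,
                                                   )
                                                   for row in rows:
                                                       print(row["exercise"], row["weight_kg"])
                                               finally:
                                                   await conn.close()

                                           asyncio.run(main())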

                                         A couple of years back I built a CLI app for tracking statistics, trends, and progress from another app I was using, but they were bought and the API was shut down; therefore I decided to create my own app this time around.

                                        My next step is building out the API to a fully functional state, adding a basic web frontend so I can use it from my phone in the gym and try out the model/concept before building out the app.

                                         Problems I’ve run into so far: PG12 is not yet well supported by tools and database drivers. For instance, the DataGrip IDE by JetBrains crashes on introspection when I try to connect to my database. Syntax highlighters also scream at the new syntax. As a backup plan I’ve also implemented the model in a fully PG11-compatible way, although it’s slightly more verbose and less neat; I’ve been running the databases on both PG12 and PG11 in parallel for now.

                                        1. 2

                                          How do you like FastAPI? Considering using it for something.

                                          1. 1

                                             I like it very much, especially with pydantic. It’s replaced Flask as my go-to framework for new projects nowadays, and it’s definitely my favorite of all the Starlette/ASGI frameworks I’ve used. I recently gave a talk about pydantic where I briefly touch on FastAPI: https://youtu.be/WJmqgJn9TXg
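
                                             For a flavour of the combination, a minimal sketch (the endpoint and model are made up, not from my project):

                                               from fastapi import FastAPI
                                               from pydantic import BaseModel

                                               app = FastAPI()

                                               class Workout(BaseModel):
                                                   exercise: str
                                                   sets: int
                                                   reps: int
                                                   weight_kg: float

                                               @app.post("/workouts")
                                               async def log_workout(workout: Workout):
                                                   # FastAPI validates the JSON body against the pydantic model and
                                                   # includes the schema in the generated OpenAPI spec automatically
                                                   return {"logged": workout.exercise, "volume": workout.sets * workout.reps}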

                                             Also, swagger-conformance has been superseded by schemathesis for automatic test generation from OpenAPI specs.
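
                                             Roughly, a schemathesis test looks like this (a sketch assuming a locally running service; the URL is made up, and Hypothesis does the case generation under the hood):

                                               import schemathesis

                                               # Load the OpenAPI schema from a running service (hypothetical URL)
                                               schema = schemathesis.from_uri("http://localhost:8000/openapi.json")

                                               @schema.parametrize()
                                               def test_api(case):
                                                   response = case.call()            # send a generated request
                                                   case.validate_response(response)  # check the reply against the schema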

                                            1. 1

                                              What are you using for your frontend? With Flask were you doing server side rendering or just using it as an API?

                                              1. 2

                                                 No, just as an API. I have used Jinja templates back in the day, but not in a long time now; I mainly use React on the front-end side of things these days. I can recommend Next.js if you want React with some server-side rendering though; it’s very nice, I used it for a client project earlier this year.

                                        1. 2

                                          I just use <a href="#">Top</a> on my website, always have. https://hultner.se/

                                          Is there any reason to use a defined id/name instead?

                                          1. 1

                                             I don’t recall for sure which browsers support what, but I do remember that the <a name=foo> [...] <a href="#foo">foo</a> pattern is by far the most compatible way of doing it, while other methods do not work across all browsers.

                                            1. 1

                                               Would be interesting to know which browser doesn’t support the standard #-method. I don’t think I’ve ever noticed it not working in any mainstream browser.

                                          1. 2

                                            This is my first time recording a YouTube video like this so I’m glad for pointers :)

                                            1. 5

                                               Speaking at the Python Pizza remote conference.

                                               I have a talk on pydantic, a Python library that adds, amongst other things, runtime type checking/enforcement to Python classes and functions using standard Python type annotations.
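
                                               A tiny sketch of the idea (the model and values are made up):

                                                 from pydantic import BaseModel, ValidationError

                                                 class User(BaseModel):
                                                     name: str
                                                     age: int

                                                 # Values are validated (and coerced when sensible) at runtime
                                                 print(User(name="Ada", age="36").age)  # -> 36, coerced to int

                                                 try:
                                                     User(name="Ada", age="not a number")
                                                 except ValidationError as exc:
                                                     print(exc)  # explains which field failed and why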

                                              1. 2

                                                 Good luck with your talk!

                                              1. 1

                                                 The iPhone booting postmarketOS shouldn’t be anything shocking. You’ve been able to run anything from Linux to Windows 95 ever since the first jailbreaks.

                                                1. 2

                                                   a) Not true: Windows 95 never ran natively on a jailbroken device, only as a user-land application with iOS/iPhoneOS as the host system.

                                                   b) With that attitude, nothing anyone could ever accomplish would be worth noticing or “shocking”, because it is accomplishable.

                                                  1. 0

                                                     Well, first, “Linux” is misleading: I ran Linux on my iPhone 3G more than a decade ago with openiBoot.

                                                1. 2

                                                   I don’t get the purpose of this; isn’t it easier to just switch sessions the regular way, with ^b( and ^b)?

                                                  1. 2

                                                    I do that as well (as well as prefix-s) but it doesn’t start up new sessions in the directory I care about automatically. It’s a couple commands to start one, and potentially getting it wrong if I forget that a session already exists… classic shell script territory 😆

                                                    1. 1

                                                       I see, we probably use sessions slightly differently. I use a session per active project/customer I’m working on, plus one generic/sandbox session, and usually have around 3 sessions open. One session usually consists of 2-9 windows with a couple of panes each, with a window per subproject in a larger project. For instance, when I’m working on a client codebase with subprojects for the frontend, a couple of backend services, the database, infra, and documentation, I keep a window for each of them, named after its respective name or purpose, and name the whole session after the client/project.

                                                  1. 4

                                                    Precisely for this reason I just bought the LG 32UK550-B for around 340 EUR.

                                                     It’s a great monitor, but a word of caution if you use macOS: for some reason, not all scaling options are available on all monitors.

                                                     I can only choose to run it at native resolution, where everything is way too tiny, or scaled to 1920x1080px, where everything is way too big. It does not give me the option to run it at 1440px retina. This sucks quite a bit.

                                                    I’m still trying to figure out why, given that at work we are provided with a 4k monitor and I’m running it at 1440px without any issues. Maybe it’s because the work monitor runs via USB-C, whereas the one I bought is connected via HDMI.

                                                    EDIT: I’m fairly sure the monitor I use at the office is the HP EliteDisplay S270

                                                    1. 2

                                                       I have a very similar LG monitor – I believe the model number is slightly different due to white labeling.

                                                       My nightmare of troubles was solved when I moved away from HDMI, which was apparently too low-bandwidth to drive 4K above 30Hz (yuck), to a reliable USB-C 3.1 / SuperSpeed+ cable, because the one I had pulled from a grab bag at work was apparently a USB-C 3.0 cable and didn’t support the right DisplayPort alternate modes.

                                                      I also learned at least a bit about all the distinctions above and truly regret that I did.

                                                       I now have a comfortable setup I’m happy with, with a decent number of scaling options, and can at least tell you what options I have, if it would help… I’m not entirely certain whether I am running at “1440px retina” or whether I could, because this new world of scaled resolutions is near incomprehensible to me :)

                                                      1. 2

                                                         Oh yeah, the USB-C versioning scheme is true nightmare material.

                                                         My monitor doesn’t offer USB-C connectivity; however, I’m supposedly using an HDMI 2.0 cable, which should support 4K at 60Hz and above. I don’t have a DisplayPort-to-USB-C adapter to test whether the situation is different over DisplayPort.

                                                        1. 1

                                                           I managed to get a “Thunderbolt 3” dock from work and went DP -> dock (MDP) -> USB-C DisplayPort to my MacBook. It was a mind-bogglingly difficult mess to work through – which is why, when the “mandatory WFH” order went out, I was sure to go pick up all the gear, because I knew I’d never reproduce this at home.

                                                          1. 1

                                                            I use a USB-C -> (mini-)DisplayPort alt mode cable, and it is so much better than the world of hurt that is HDMI (which I used before). From HDMI cables with too low bandwidth to glitching after the system resumes from sleep (though IIRC that only happened on Linux).

                                                            1. 1

                                                               I use RDM (Retina display manager) to access more resolution modes than otherwise available; have you tried something like that?

                                                              1. 2

                                                                Yep, I tried RDM and also SwitchResX without any success.

                                                                1. 1

                                                                   Ah, SwitchResX would’ve been my next thing to try; too bad it wasn’t a success. Something I’ll look out for when buying 4K monitors in the future, thanks for the heads up.

                                                          2. 1

                                                             Get a top-shelf DP cable and SwitchResX; I needed that to drive my LG 31” correctly from my older Macs. I haven’t tried it with my new 16” though.

                                                          1. 1

                                                             Not quite 4K, but I use the LG UltraFine 5K combined with a 5K iMac; it also plugs into my MacBooks whenever I need it for them. Works great, no problems, best monitor I’ve ever owned.

                                                             I had some problems with the WQHD version of the Z27, where the USB-C connection would stop working every few weeks. It turned out that the settings would revert to power-saver mode at some unknown interval, which in turn disabled USB-C; I pulled my hair out for a while before I figured out what was happening the first time. It also only supplied 15W of power over USB-C, which wasn’t really enough to charge my laptop while using it, so I still had to use an external charger, which felt like a step backwards.