Threads for vosper

  1. 1

You can just use nvm (or one of the other managers) and you won’t run into needing to be root to globally install packages. These days npx eliminates a lot of the cases where you might have previously installed a package globally.

    1. 1

      Yup - the author includes the suggestion of nvm or Volta at the end. I will note, however, that installing something globally is immediately invalidated as soon as you change your nvm version.

      The author also makes npx sound like such a burden, but it’s actually the best method moving forward as it’s either global OR local to a project, depending on if the project has installed it. I don’t buy the “extra words to run a command” argument - especially since they try to sell needing to cd into a specific directory as a valid option.

    1. 3

      I’m not sure it’s quite what you’re after, but maybe take a look at trpc. I’m using it happily at work, but if you’re not using Typescript everywhere then it might not be a good fit.

      1. 3

        Just a reminder that the DB that did this best was (is) RethinkDB. Such a shame that the company failed, but it’s still available as an OSS project.

        https://rethinkdb.com/

        1. 4

It’s a bit different? Looks like RethinkDB is a database with a query language that isn’t just “use JS primitives like map and filter and we’ll figure the search out”, which appears to be what ChiselStrike is going for.

          Here’s a RethinkDB query, from their docs

          r.table('authors').filter(r.row('posts').count().gt(2))
          

          That looks like a pretty standard ORM pattern - you probably don’t need RethinkDB to write queries that look like that. I guess the ChiselStrike version would be more like

          await Authors.findAll().filter(author => author.posts.count > 2)
          

          And their magic is that the bits of the query that can be expressed as efficient SQL are, and the rest is done in JS, and as a developer you don’t have to think about it.

          1. 1

I had kind of gotten the impression that it was defunct; I’m surprised to see a release in April.

            I had a lot of hope for it when it came out but when I tried using it, it didn’t seem all that performant.

          1. 1

            How does ChiselStrike handle migrations? Is that a separate problem for a developer to solve? If so then it seems like you can’t really escape knowing about the underlying database. If migrations are handled by ChiselStrike I’d be really interested in how it works - there’s a hint in the blog post about being able to figure out which indexes are needed based on the queries it sees, which implies that it might perform DDL.

            I think it’s a really interesting idea. I suppose one destination is that ChiselStrike becomes the data store as well.

It’s also interesting to compare with EdgeDB, who I think are coming at things from a different angle. Their approach is more like “let’s make a better data model and search language”. I quite like what they’re doing, though adopting a whole new database [0] is possibly more of a leap than using ChiselStrike over a known and proven database.

            [0] kind of, the boundaries between EdgeDB and underlying Postgres are blurry to me, probably due to my lack of understanding

            1. 2

              ChiselStrike handles a large variety of migrations itself. You can add fields, remove fields, change defaults, etc. Here’s one example of that being done and deployed instantly in the ChiselStrike platform: https://www.youtube.com/watch?v=bxI7VcY9_gg

              We’re looking into supporting other things with potential side effects too, like type changes.
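
              To make the migration point concrete, here’s a rough sketch (a plain TypeScript class standing in for an entity definition; the real ones use ChiselStrike’s own base class and API, so treat the names as illustrative):

              // Illustrative only: a plain TypeScript class standing in for a
              // ChiselStrike-style entity definition (the real ones extend
              // ChiselStrike's own base class; names here are made up).
              export class BlogPost {
                title: string = "";
                body: string = "";
                // Newly added field with a default: redeploying the model with this
                // line added is the kind of change that gets migrated automatically,
                // since existing rows can be backfilled from the default.
                published: boolean = false;
              }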

            1. 9

Is there any proof that the telemetry data is NOT put to good use to improve VSCode?

              1. 28

I think there’s a tinge of paranoia that runs through the anti-telemetry movement (for lack of a better term; I’m not sure it’s really a movement). Product usage telemetry can be incredibly valuable to teams trying to decide how best to allocate their resources. It isn’t inherently abusive or malignant. VSCode is a fantastic tool that I get to use for free to make myself money. If they say they need telemetry to help make it better then I am okay with that.

                1. 9

                  I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc.

                  But MS shot itself in the foot by making telemetry mysterious and impossible to inspect or disable. They made people allergic to the very idea.

                  1. 12

                    I think the overly generic name does not help the situation. When people are exposed to telemetry like “we’ll monitor everything and sell your data”, I’m disappointed but not surprised when they block everything including (for example) rollbar, newrelic, etc

                    It’s a bit uncharitable to read “they blocked my crash reporting service” as “they must have some kind of misunderstanding about what telemetry means” (if that’s what you’re implying when you say you’re disappointed but not surprised that people block them).

                    I know exactly what services like rollbar do and what kinds of info they transmit, and I choose to block them anyways.

One of the big takeaways from the Snowden (I think?) disclosures was that the NSA found crash reporting data to be an invaluable source of information they could then use to help them penetrate a target. Anybody who’s concerned about nation-state (or other privileged-network-position actor) surveillance, or the ability of law enforcement or malicious actors impersonating law enforcement to get these services to divulge this data (now or at any point in the foreseeable future), might well want to consider blocking these services for perfectly informed reasons.

                    1. 5

I believe that’s actually correct - people in general don’t understand what different types of telemetry do. A few tech people making informed choices doesn’t contradict this. You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others. You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.

                      So no, I don’t believe the general public understands how many things are lumped into the telemetry idea and they don’t have tools to make informed decisions.

                      Side-topic: MS security actually does aggregate analysis of crash reports to spot exploit attempts in the wild. So how that works out for security is a complex case… I lean towards report early, fix early.

                      1. 7

                        You can see that for example through adblock blocking rollbar, datadog, newrelic, elastic and others.

                        I’m not following this argument. People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry – which includes the possibility of data related to their machines being used against them.

Adblocker developers (correctly!) recognize that datadog/rollbar/etc are vectors for some of those harms. That not every person who installs an adblocker could tell you which specific harm rollbar.com corresponds to versus which adclick.track corresponds to does not imply that, if properly informed about what rollbar.com tracks and how that data could be exploited, they wouldn’t still choose to block it. After all, they’re users who are voluntarily installing software to prevent just such harms. I think a number of these people understand just fine that some of that telemetry data is “my computer is vulnerable and this data could help someone harm it” and not just “Bob has a diaper fetish” stuff.

                        It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.

                        You can also see it on bug trackers where people start talking about pii in telemetry reports, where the app simply does version/license checks. You can see people thinking that Windows does keylogger level reporting back to MS.

                        That some incorrect people are vocal does not tell us anything, really.

                        1. 3

                          It’s kind of infantilizing to imagine that most people “would really want to” give you their crash data but they’re just too stupid to know it, given how widely reported stuff like Snowden was.

Counterpoint: Every time my app crashed, people not only gave me all the data I asked for, they just left me with a remote session to their desktop. At some point I switched to rollbar and they were happy when I emailed them about an update before they got around to reporting the issue to me. So yeah, based on my experience, people are very happy to give crash data in exchange for better support. In a small pool of customers, not a single one even asked about it (and due to the industry they had to sign a separate agreement about it).

                          That some incorrect people are vocal does not tell us anything, really.

                          The bad part is not that they’re vocal, but that they cannot learn the truth themselves and even if I wanted to tell them it’s not true - I cannot be 100% sure, because a lot of current telemetry is opaque.

                          1. 3

                            I don’t know how many customers you have or how directly they come in contact with you, but I would hazard a guess that your business is not a faceless megacorp like Microsoft. This makes all the difference; I would much more readily trust a human I can talk to directly than some automated code that sends god-knows-what information off to who-knows-where, with the possibility of it being “monetized” to earn something extra on the side.

                          2. 3

                            People install adblockers because they care about their privacy, and dislike ads and the related harms associated with the tracking industry

                            ooof that’s reading way too much into it. I just don’t want to watch ads. And as for telemetry, I just don’t want the bloat it introduces.

                    2. 7

                      The onus is not on users to justify disabling telemetry. The ones receiving and using the data must be able to make a case for enabling it.

Obviously, you need to be GDPR-compliant too; that should go without saying, but it’s such a low bar.

                      Copy-pasting my thoughts on why opt-out telemetry is unethical:

                      Being enrolled in a study should require prior informed consent. Terms of the data collection, including what data can be collected and how that data will be used, must be presented to all participants in language they can understand. Only then can they provide informed consent.

                      Harvesting data without permission is just exploitation. Software improvements and user engagement are not more important than basic respect for user agency.

                      Moreover, not everyone is like you. People who do have reason to care about data collection should not have their critical needs outweighed for the mere convenience of the majority. This type of rhetoric is often used to dismiss accessibility concerns, which is why we have to turn to legislation.

                      If you make all your decisions based on telemetry, your decisions will be biased towards the type of user who forgot to turn it off.

                    3. 9

                      This presumes that both:

                      a) using data obtained from monitoring my actions to “improve VSCode” (Meaning what? Along what metrics is improvement defined? For whose benefit do these improvements exist? Mine, or the corporation’s KPIs? When these goals conflict, whose improvements will be given preference?) is something I consider a good use in any case

                      b) that if this data is not being misused right now (along any definition of misuse) it will never in the future cross that line (however you choose to define it)

                      1. 2

                        Along what metrics is improvement defined?

The first step would be to get data about usage. If MS finds out a large number of VSCode users are often using the JSON formatter (just an example), I assume they will try to improve that: make it faster, add more options, etc.

                        Mine, or the corporation’s KPIs

It’s an OSS project which is not commercialized in any way by the “corporation”. There are no commercial licenses to sell; with VSCode all they earn is goodwill.

                        will never in the future cross that line

Honest question: in what way do you think VSCode usage data could be “misused”?

                        1. 12

I assume they will try to improve that: make it faster, add more options, etc.

You assume. I assume that some day, now or in the future, some PM’s KPI will be “how do we increase conversion spend of VSCode customers on azure” or similar. I’ve been in too many meetings with goals just like that to imagine otherwise.

                          It’s an OSS project which is not commercialized in any way by the “corporation”

                          I promise you that the multibillion dollar corporation is not doing this out of the goodness of their heart. If it is not monetized now (doubtful – all those nudges towards azure integrations aren’t coincidental), it certainly will be at some point.

Honest question: in what way do you think VSCode usage data could be “misused”?

                          Well, first and most obviously, advertising. It does not take much of anything to connect me back to an ad network profile and start connecting my tools usage data to that profile – things like “uses AWS-related plugins” would be a decent signal to advertisers that I’m in the loop on an organization’s cloud-spend decisions, and ads targeted at me to influence those decisions would then make sense.

Beyond that, crash telemetry data is rich for exploitation uses, like I mentioned in another comment here. Even if you assume the NSA-or-local-gov-equivalent isn’t interested in you, J Random ransomware group is just one successful impersonation of a law enforcement agency with a subpoena away (which, as we discovered this year, most orgs are doing very little to prevent) from vscode-remote-instance crash data from servers people were SSH’d into. Paths recorded in backtraces tend to have usernames, server names, etc.

                          “This data collected about me is harmless” speaks more to a lack of imagination than to the safety of data about you or your organization’s equipment.

                      2. 4

                        That point is irrelevant, since it’s impossible to prove that microsoft is NOT misusing it now and that they will NOT misuse it in the future.

                        1. 3

                          No, so should we blindly trust Microsoft with our data, or be cautious?

                        1. 58

                          I was increasingly getting upset about their extension marketplace, where there is an increased number of extensions starting to sell pro versions of the extensions we used for free.

                          This strikes me as a bit entitled. There’s a lot of work that goes into an extension like Gitlens, those developers shouldn’t be expected to work for free. No-one’s making anyone pay for anything, and it is an extension marketplace, after all.

                          1. 18

                            It seems to me that this is one of the worst things to have come out of the era of App Stores and generalised open source access. At one point folks sometimes put cool hacks online. Lots of people now expect that these cool hacks be productised, have nice, informative READMEs and screenshots on their homepage, prompt support and fixes to major bugs, helpful authors who take time to involve the community in major decisions. Basically the kind of stuff that commercial vendors do with commercial software. But without paying commercial software fees.

That’s not open source ethics, that’s charity. I’m cool with people asking for charity, but shaming people who don’t offer it exclusively, or who offer it but not in the exact form that’s expected, is a little nasty. And I’m saying this with all the empathy and love of someone who used to save up for months to buy programming books 20+ years ago.

                          1. 45

                            TL;DR: They’re sticking with Rails because it’s what they’re already using, and it’s not causing them enough pain to change it.

                            The same reason any business sticks with any technology.

                            This is a sponsored post at thenewstack.io, and it’s content-light, so it’s borderline spam here.

                            Offtopic: that’s a bit of an unfortunate stock photo they went with. Looked like a bunch of blood-stained hands to me, at first glance.

                            1. 6

                              Offtopic: that’s a bit of an unfortunate stock photo they went with. Looked like a bunch of blood-stained hands to me, at first glance.

                              Agreed.

On-topic, I’m not sure Rails works for them, considering the stability issues they’ve been seeing; last year they literally stopped working on any features to have all hands on deck on those stability problems.

                              1. 7

                                I don’t think that’s a Rails issue though, there are many companies that have managed to scale with Rails.

                                1. 3

                                  I’d ask how many have also failed to scale with Rails? The dynamic nature of the language has to make larger codebases a giant pain to work in.

                                  1. 8

                                    How many fail to scale with a static language?

                                    Anecdotally I once worked for a company that ran out of runway because we couldn’t fundamentally change how everything worked fast enough to respond to constantly changing customer signals in order to find product fit. The implementation language was Scala.

                                    Does this mean static languages “can’t scale” because they’re too constraining for early phase startups? People will make that argument.

                                    But honestly “scaling” is 99% about blundering into market fit and getting lucky – to the extent that tools can help you with that, it’s only by being familiar enough to you that they don’t get in your way.

                                    Companies that don’t manage to scale tell us nothing in particular about the tools they chose (for that matter, companies that do scale tell us nothing in particular about the tools they chose other than serving as an existence proof that X tool can scale).

                                    1. 6

                                      The empirical evidence, at this point in our industry’s history, is that there are a large number of acceptable tech stacks for doing web stuff at sub-(Google, Facebook, etc.) scale. Among which are Ruby and Rails.

                                      And notice that I say “acceptable” and not “good”, let alone “perfect” — every language, framework, design pattern, typing discipline, or what-have-you has its own set of quirks, tradeoffs and pain points, and exactly zero of them “scale” effortlessly in terms of either traffic or team size. Success, such as it is, is mostly a matter of learning to handle the quirks/tradeoffs/pain points of the particular stack you’ve chosen, because migrating to a different one is largely just exchanging one set of issues for a different set of issues, combined with the enormous cost of doing the migration.

                                      Thus you can find people who failed to “scale” with Rails. And others who failed to “scale” with whatever tech stack you prefer. This does not generalize usefully to objective “does/does not scale” statements about them.

                                      1. 5

                                        Anecdata point:

                                        I’ve worked with a large Rails codebase (100s of thousands of Ruby SLOC). The pains and problems with it weren’t due to the dynamic nature of Ruby, nor even using Rails itself (which I’ve actually never really liked). The things that needed dealing with were more to do with the line count and head count, and I’d expect many of those things would have been encountered if they used another language and/or another framework, but still grew to the same size.

                                    2. 4

                                      Feel free to create a stable alternative. Please don’t let these mere mortals hold you back.

                                      1. 5

                                        Your message is devoid of content. Please express what you’re trying to imply more directly.

                                        1. 4

                                          I believe he is saying that talk is cheap.

                                          1. 6

                                            That would be a strange thing to say here in the comments section of lobste.rs, since we’re all here to talk?

                                    3. 6

                                      TL;DR: They’re sticking with Rails because it’s what they’re already using, and it’s not causing them enough pain to change it.

I think a better TLDR is: they find it as approachable as PHP but without PHP’s oddities. And they don’t need microservices, which have huge drawbacks and don’t reduce complexity per se.

                                      I think the remarks about microservices are spot on.

                                      1. 3

Not sure if your use of present tense was intentional but I found it a little odd. Yes, PHP is very much in widespread use (and from what I hear better than ever) but rewrites targeting PHP (coming from a different language) have been rare for… a very long time.

                                        So I guess rewriting in PHP is just as much out as rewriting in Python - the only things that seem to be en vogue would be Go or Rust…

                                    1. 1
                                      let sum = https://raw.githubusercontent.com/Gabriella439/grace/main/prelude/natural/sum.ffg
                                      in  sum (List/map (\student -> student."Grade") ./students.ffg)
                                      

(Understanding that this part in particular doesn’t yet exist.)

This reminds me of Deno; I think that’s awesome. I hadn’t ever considered importing static data, rather than code, and the URI as a literal in the language is fascinating to think about.

This also fits really neatly into a model of lab notebook I’ve been thinking about for a long time. Something like this embedded in a TiddlyWiki, for example, being able to easily import other nodes as data by URI, is incredibly intriguing…

                                      Kudos!! This is awesome, thank you for sharing!

                                      1. 4

                                        And it reminds me of Dhall 😉

                                        1. 3

                                          Thank you for pointing me that direction, I hadn’t ever looked into Dhall, thinking it was a config language like TOML is a config language, not realizing what it was!

The use case I have in mind is some kind of node in a computation graph that’s embedded into a wiki/notebook style application. The zettelkasten stuff that was popular at the end of last year is really interesting to me, but I’m less interested in finding links between notes than in being able to surface computed data.

E.g., imagine a node in Evernote that showed your private GitLab repos, sorted by last commit, and also linked to an internal wiki page on the project; something like Grace autogenerating UI from an expression that can reference data by URI and be total would let that be an internal construction, rather than widgets built with some kind of extension mechanism.

                                          1. 2

                                            I hadn’t ever looked into Dhall, thinking it was a config language like TOML is a config language, not realizing what it was!

                                            Would you mind sharing your insight?

                                            1. 3

                                              Sure!

I had first seen Dhall mentioned a couple years ago around the same time that people were arguing about using TOML vs. YAML, and that put me into a place where I had conflated the two, but they’re very different beasts.

                                              Where TOML is a “file format”, like YAML or JSON, Dhall (and Grace) are functional programming languages.

The source of my confusion is that because they are pure, and total, they cannot be general-purpose: they (purposefully) cannot represent programs that run indefinitely without halting, nor programs that have arbitrary side-effects.

But for describing data transformations (and configuration, which is Dhall’s raison d’être), it’s perfect; you get to express data transforms in a language that’s not just copy/paste, with type checking, and the host program gets verifiably correct transformed data.

                                              In the context of a wiki-style computation graph, if you know that the computation in a node is pure and total, you can re-evaluate portions of the graph arbitrarily when dependencies change, without worrying about non-idempotent side-effects.
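
                                               To sketch what I mean (purely illustrative TypeScript, not any existing tool): if every node is a pure, terminating function of its dependencies, re-evaluating a subgraph is just walking it in dependency order, and it’s always safe to do so.

                                               // Purely illustrative: a tiny computation graph where each node is assumed
                                               // pure and total, so any subgraph can be recomputed whenever an input changes.
                                               type NodeId = string;
                                               interface GraphNode {
                                                 deps: NodeId[];
                                                 compute: (inputs: unknown[]) => unknown; // assumed pure and terminating
                                               }
                                               function evaluate(graph: Map<NodeId, GraphNode>, id: NodeId, cache = new Map<NodeId, unknown>()): unknown {
                                                 if (cache.has(id)) return cache.get(id);
                                                 const node = graph.get(id);
                                                 if (!node) throw new Error(`unknown node: ${id}`);
                                                 const inputs = node.deps.map((dep) => evaluate(graph, dep, cache));
                                                 const value = node.compute(inputs);
                                                 cache.set(id, value);
                                                 return value;
                                               }
                                               // Example: a "grades" data node and a derived "average" node.
                                               const graph = new Map<NodeId, GraphNode>([
                                                 ["grades", { deps: [], compute: () => [90, 85, 78] }],
                                                 ["average", {
                                                   deps: ["grades"],
                                                   compute: ([grades]) => {
                                                     const xs = grades as number[];
                                                     return xs.reduce((a, b) => a + b, 0) / xs.length;
                                                   },
                                                 }],
                                               ]);
                                               console.log(evaluate(graph, "average")); // 84.33...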

                                              1. 2

                                                Thank you for elaborating :)

                                          2. 3

                                            The author created Dhall, which would explain some of the similarities!

                                        1. 2

                                          I’m likely going to have to implement soft-delete quite soon, so I’d like to hear people’s thoughts on the approach in this post? I find it quite attractive to push the logic down to the database and allow the application code to think it’s performing real deletes, rather than having to make everything in the app layer aware of soft deletes.

                                            1. 1

                                              https://wiki.postgresql.org/wiki/Don't_Do_This#Don.27t_use_rules

                                              I’m a little torn on this. The wiki says not to use them because

                                              Rules are incredibly powerful, but they don’t do what they look like they do. They look like they’re some conditional logic, but they actually rewrite a query to modify it or add additional queries to it.

But in this case it’s quite clear that rewriting the query is the purpose of using the rule. And the blog linked from the wiki seems kinda like one person’s opinion, and people in the comments don’t necessarily agree with them.
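
                                              For what it’s worth, here’s a minimal sketch of the rule-based idea as I understand it (my own reconstruction, not copied from the post; table and column names are made up), wrapped in node-postgres since that’s how I’d end up running it:

                                              // Minimal sketch (my reconstruction; names are made up). Run once as a
                                              // migration; afterwards the application can keep issuing plain DELETEs.
                                              import { Client } from "pg";
                                              async function installSoftDelete(connectionString: string): Promise<void> {
                                                const client = new Client({ connectionString });
                                                await client.connect();
                                                try {
                                                  await client.query(`
                                                    ALTER TABLE items ADD COLUMN IF NOT EXISTS deleted_at timestamptz;
                                                    -- Rewrite DELETEs into UPDATEs: this is exactly the query rewriting
                                                    -- the wiki warns about, but here it is the whole point.
                                                    CREATE RULE items_soft_delete AS ON DELETE TO items
                                                      DO INSTEAD UPDATE items
                                                        SET deleted_at = now()
                                                        WHERE id = OLD.id AND deleted_at IS NULL;
                                                    -- Read through a view so normal queries never see soft-deleted rows.
                                                    CREATE VIEW live_items AS
                                                      SELECT * FROM items WHERE deleted_at IS NULL;
                                                  `);
                                                } finally {
                                                  await client.end();
                                                }
                                              }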

                                            2. 2

If only there were a way to do this reliably in the RDBMS without breaking ACID principles, not tied to a framework or library of some client language, and without resorting to hacky tricks…

Just implement soft delete manually by creating the appropriate data model for it and putting the logic in a stored procedure. You can even use the same logic for multiple tables, just by following simple naming conventions.

                                              If performance and data size allows for it, you can even implement a delete_row(table, id) procedure that saves the serialized rows on a single deleted table.
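
                                               A rough sketch of what I mean (illustrative only; assumes Postgres, integer id primary keys, and made-up names), with the SQL sent from node-postgres:

                                               // Rough sketch only: one shared table of serialized deleted rows, plus a
                                               // delete_row(table, id) procedure that moves a row into it.
                                               import { Client } from "pg";
                                               const DDL = `
                                                 CREATE TABLE IF NOT EXISTS deleted_rows (
                                                   table_name text        NOT NULL,
                                                   row_id     bigint      NOT NULL,
                                                   data       jsonb       NOT NULL,
                                                   deleted_at timestamptz NOT NULL DEFAULT now()
                                                 );
                                                 CREATE OR REPLACE FUNCTION delete_row(p_table text, p_id bigint)
                                                 RETURNS void LANGUAGE plpgsql AS $$
                                                 DECLARE
                                                   v_row jsonb;
                                                 BEGIN
                                                   -- Delete the row and capture it as jsonb in a single statement.
                                                   EXECUTE format('DELETE FROM %I WHERE id = $1 RETURNING to_jsonb(%I.*)', p_table, p_table)
                                                     INTO v_row
                                                     USING p_id;
                                                   IF v_row IS NOT NULL THEN
                                                     INSERT INTO deleted_rows (table_name, row_id, data)
                                                     VALUES (p_table, p_id, v_row);
                                                   END IF;
                                                 END;
                                                 $$;
                                               `;
                                               async function main(connectionString: string): Promise<void> {
                                                 const client = new Client({ connectionString });
                                                 await client.connect();
                                                 try {
                                                   await client.query(DDL);
                                                   // Soft-delete row 42 from a (hypothetical) "orders" table.
                                                   await client.query("SELECT delete_row($1, $2)", ["orders", 42]);
                                                 } finally {
                                                   await client.end();
                                                 }
                                               }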

                                              1. 1

                                                If performance and data size allows for it, you can even implement a delete_row(table, id) procedure that saves the serialized rows on a single deleted table.

                                                So rather than marking rows as deleted and then filtering them out in future queries, I’d create a new table that stored a serialized copy of the original row, and then do a real delete on the target table? Then if I want to undelete I need to restore from the deletion table? It seems like this would run into a problem with cascading deletes? Or would I want a before-delete trigger on every table, so that as the delete cascades the affected rows are copied first?

                                                1. 1

I didn’t specify any specific strategy. My point was that you can write a callable stored procedure and implement whatever logic you want. This has been done for decades, while articles like this make it sound like soft deletes are only possible because of a bunch of tricks with rules or triggers.

Other than that, I don’t understand what specific problem with cascade deletes arises from doing it from within a procedure versus doing it with tricks like the one in this post. If you want to undo a cascaded deletion, you have to know the chain and do it in the right order regardless of what method you use.

                                                  1. 1

                                                    I didn’t specify any specific strategy.

                                                    If performance and data size allows for it, you can even implement a delete_row(table, id) procedure that saves the serialized rows on a single deleted table.

                                                    I thought that this delete_row procedure was a specific strategy, so I was trying to ask about that. No worries, though, I started with that idea and I think I have a simple solution that will work for me. Thanks for the pointer!

Other than that, I don’t understand what specific problem with cascade deletes arises from doing it from within a procedure versus doing it with tricks like the one in this post. If you want to undo a cascaded deletion, you have to know the chain and do it in the right order regardless of what method you use.

                                                    Thanks, I understand this better now. I’m leaving “how to undelete” as a problem to solve separately from “how to soft delete”, and just making sure I am storing enough data about the deletions that it will be possible to do, even if I don’t have code for the process today.

                                            1. 10

                                              It’s great to see ESM support coming along (my word is this stuff confusing, I hope ESM settles it all… eventually) but man there are a lot of file extensions going around these days!

                                              • .js: javascript
                                              • .ts: typescript
                                              • .d.ts: typescript definitions
                                              • .jsx: javascript with React syntax
• .tsx: typescript with React syntax (sounds like there’s no difference with .ts as of this new Typescript version, so maybe this one will go away eventually)
                                              • .mjs: an ESM javascript file
                                              • .cjs: A CommonJS javascript file
                                              • .mts: an ESM typescript file
                                              • .cts: A CommonJS typescript file
                                              • .d.mts: ESM definitions
                                              • .d.cts: CommonJS definitions

                                              It makes sense why all these things are needed. I don’t know how many of them a single project would have to deal with (not many if starting with Typescript, I think).

                                              1. 5

Totally. And then the module formats of .js and .ts files are anybody’s guess without peeking inside (or checking the nearest package.json’s “type” field).

                                              1. 6

                                                The experience of using GatsbyJS is a nightmare. Errors are truly impossible to debug. Plug-in documentation is always lacking to the point where I have to read the source. All the integration points are unbelievably complicated. GraphQL is a terrible abstraction layer for a small personal site. It’s lots of great ideas in theory but absolutely awful in practice.

                                                I’m waiting for Astro to be done enough for me to rebuild my site in it so I can deliver a near-zero JS site (other than the theme switcher).

                                                I haven’t liked a static site generator since Metalsmith.

                                                1. 2

                                                  Is metalsmith not an option for you? I’m curious to know why.

                                                  1. 1

                                                    I hadn’t heard of metalsmith until this comment. It looks really cool! If I ever need a js static site generator I’ll give it a serious look.

                                                    1. 1

                                                      When a static site generator stops being actively maintained you can definitely keep using it but you have to freeze everything from Node.JS to all your dependencies. That’s extremely fine and not a problem at all except I’m a web developer by trade so I want to be trying new things all the time.

                                                      1. 1

                                                        Oh wow I spoke too soon, after 5 years it’s back with a maintainer: https://metalsmith.io/news/2022-01-27/metalsmith-is-back/

                                                    2. 1

                                                      I’m really enjoying Nikola, but I’m a Python guy so that goes a long way :)

                                                      Metalsmith looks really great. If it ain’t broke, don’t fix it! :)

                                                      1. 1

                                                        What do you need from Astro that’s blocking you from moving to it? I just moved my personal site from NextJS to Astro and I love it. But my site is very simple, just some MDX and React rendered to a blog.

                                                        1. 1

                                                          I hit some blocking issues last I tried a few months ago, eg https://github.com/withastro/astro/issues/2225. Looks like that one might not be fixed.

                                                          I also REALLY want to be able to collocate images with the mdx file that uses it but this isn’t a deal breaker.

                                                          It’s almost there for me.

                                                          1. 2

                                                            I hit some blocking issues last I tried a few months ago, eg https://github.com/withastro/astro/issues/2225. Looks like that one might not be fixed.

                                                            Ahh, my styles are in a plain external stylesheet, so I haven’t run into this problem.

                                                            I also REALLY want to be able to collocate images with the mdx file that uses it but this isn’t a deal breaker.

                                                            Yeah, I had that pattern in my previous site. With Astro I just adopted a naming convention so that it’s easy to tell which images go with each blog post, given they can’t be co-located. I don’t have many posts, or images, so for me it was an easy change.

                                                            1. 1

                                                              Yeah same. The collocation issue didn’t get ported to the RFC process so we’re unlikely to get it: https://github.com/withastro/astro/issues/1618

                                                        2. 1

I think writing the integration code ourselves can make debugging a lot easier. I have written a guide on only using vite to statically generate a website: https://github.com/taowen/vite-howto/tree/main/packages/SSR/generate-static-website

                                                        1. 10

                                                          I’ve spent the last four weeks banging my head against terraform and AWS and agree that this could be simpler, but also hop and skip to work with a smile every day because it doesn’t involve yaml anywhere.

                                                          1. 7

                                                            Last week I spent two days chasing why the terraform setup I was using crashed each time saying it was missing a file in the bucket the same off-the-shelf module was creating. Finally gave up and asked the authors what I was doing wrong.

                                                            Answer: not wrong, this is intended behaviour. I should run terraform until the crash, then manually go add the file it says is missing, then run terraform again.

                                                            Working around that, terraform provisioned an AWS Airflow cluster in a “failed” state. AWS error message says, paraphrasing, it “may be an IAM issue, or maybe something with networking, who knows!”

                                                            1. 1

                                                              this sounds like an absolutely delightful experience. thoughts and prayers.

tbh experiences like this are what cemented my belief that aws should be easy. in fact, it must be easy.

                                                            2. 1

                                                              the people have spoken, yaml all the things. i must agree with you though. a proper typed lang with editor completion is so much better.

                                                              use the go api and define your infraset as a struct instead of yaml and the cli!

                                                              hop and skip driven development.

                                                              1. 5

                                                                a proper typed lang with editor completion is so much better.

                                                                I wish Dhall was used more for this sort of thing, esp since it supports JSON & YAML outputs.

                                                                1. 2

                                                                  it’s not a bad idea. cuelang also looks promising. sadly none of these have really taken off yet.

                                                                  making things easier probably shouldn’t start by making them harder.

                                                                2. 1

                                                                  a proper typed lang with editor completion is so much better.

                                                                  Pulumi provides this, in Python, Typescript, and Go (off the top of my head - Typescript for sure)

                                                                  1. 1

true! this is a good idea. yet all the iac providers are just-another-cloudformation-tm. the value add from cf to any of the others is low, and any choice among them is fine. cdk has a pulumi inspired interface now as well.

                                                                3. 1

                                                                  i also think that data schema needs to be so simple that you can actually remember it. simpler types, repeated patterns, etc. show me your yaml-schema/go-structs and i won’t have to wade through your code.

                                                                1. 3

I like this. Using something like Jira it’s far too easy to end up in a situation where the top of the “ready” column/backlog is blocked by stuff that is still in progress.

                                                                  1. 13

                                                                    I’ve always found it weird that Jira has “blocked by” as a built-in concept but then doesn’t make much use of it. Mostly the only way to see if a ticket blocks or is blocked is to go into the full ticket view. The compact views (the ones you usually use in planning mode) don’t provide any indication. It especially stands out to me because the core of Jira is a user-definable state machine, and “blocked” should be fully-integrated into that, but it’s not. And I don’t think there’s any option to make it more visible.

                                                                    1. 3

                                                                      We even had to use the Automation features to email people when a blocking issue was resolved. This is definitely a place where Jira’s infinitely flexible workflow system is not as good as something that would be a bit more opinionated.

                                                                  1. 19

                                                                    Scrollbars on Linux and Windows 11 won’t take space by default.

                                                                    Can we please stop this (ever-continuing) trend? I originally thought auto-hiding scrollbars were a cool design trick until I realized just how much a scrollbar adds to UX: it’s a permanently visible representation of how big a document is, and how far along in it I am.

                                                                    Another release, yet more Firefox UI/UX changes seemingly just for the sake of change (which I guess is also the state of the modern web, in many ways, so it’s somewhat fitting).

                                                                    1. 11

                                                                      What I find interesting is they admit it harms accessibility - that’s why you can turn it back on under accessibility options.

Why do we find gratuitous inaccessibility by default acceptable, rather than the other way around?

                                                                      1. 1

                                                                        they admit it harms accessibility

                                                                        Here, “they” is Windows, not Firefox. It is Windows that categorized scrollbar visibility as an accessibility option.

                                                                        On Windows, Firefox follows the system setting (System Settings > Accessibility > Visual Effects > Always show scrollbars).

                                                                      2. 6

                                                                        I’m not sure about Windows/Linux, but on macOS you can just rest two fingers on the trackpad to make the scrollbar in the current app visible. You don’t have to scroll the app, just rest your fingers on the trackpad.

                                                                        And when you connect a non-Apple mouse to a Mac, the scrollbars become permanently visible by default.

                                                                        Because of this, I haven’t found auto-hiding scrollbars to be a usability issue at all.

                                                                        1. 5

                                                                          On Mac you can also toggle scroll bars back on for all apps in the system preferences, which is what I do

                                                                          1. 2

                                                                            You don’t have to scroll the app, just rest your fingers on the trackpad.

                                                                            Seems to be app specific, because this works in Firefox 100 but not in Chrome 101.

                                                                        1. 2

                                                                          Is Podman a complement to Kubernetes or ECS, or a replacement? Or both? I’ve been trying to wrap my head around the container ecosystem as the current jack-of-all-trades at a small company. Kubernetes intimidates me, and all the AWS alternatives (ECS, Beanstalk, Fargate) mean I have to read AWS documentation 😆

                                                                          1. 9

                                                                            It is a reimplementation of Docker, with a cleaner design. It is mostly developed and maintained by Red Hat rather than Docker Inc.

                                                                            1. 1

                                                                              Got it - thanks for explaining that

                                                                            2. 2

                                                                              @vosper you might also find the thread:

                                                                              https://www.mail-archive.com/users@dragonflybsd.org/msg05686.html

to be informative (although it touches on areas outside of the Linux-only ecosystem)

                                                                            1. 9

                                                                              The second I saw this, I thought: maybe this will be the kick the TS team has needed to try to make their error messages more like Rust’s and Elm’s. I’ve been helping a bunch of folks on my team learn TS lately as we are migrating all of LinkedIn’s internal JS infrastructure code and foundational parts of the flagship app to TypeScript and… the complaints about and confusion about the error messages have not been few!

                                                                              1. 3

                                                                                the complaints about and confusion about the error messages have not been few

                                                                                When writing some “gnarly” types in TypeScript I’ve generally relied more on my intuition and knowledge of how “types work” rather than the error messages from tsc. But no one on my team uses the “advanced” features of TypeScript’s type system, so they generally don’t encounter bad error messages.

                                                                                1. 3

                                                                                  Yeah, we’re an infra team working on converting a bunch of infra code… and infrastructure libraries tend to be the nastiest code for this kind of thing, for two reasons (likely apparent to you, perhaps less so to others reading):

1. It’s library code, and library code which tries to present a nice interface to end users often has to jump through some hoops to make that happen. This TypeScript Congress talk (not publicly available yet; it was an online conference) by Mark Erikson (Redux maintainer) covers a bunch of why that tends to be the case. The short version is: more generics, more unions, more other fancy type shenanigans (there’s a contrived sketch of what that does to error messages below).

                                                                                  2. It deals with backwards compatibility much more than app code does, which means it has to jump through many more hoops, which again leads to use of overloads, generics, conditional types, etc.

                                                                                  App code can usually be more or less “write types for function inputs and outputs and object types and everything else just works”, so I am fairly confident folks working in the app itself will hit a lot less of this.
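
                                                                                   A contrived example of the kind of thing I mean (not from any real library): once a conditional type is in a signature, a small mistake at the call site produces an error about the type machinery rather than about what the caller actually got wrong.

                                                                                   // Contrived example (not from a real library): the kind of conditional type
                                                                                   // library code uses to present a "nice" interface, and the kind of error it
                                                                                   // produces when a caller gets something slightly wrong.
                                                                                   type UnwrapResponse<T> = T extends { data: infer D }
                                                                                     ? D extends Array<infer Item>
                                                                                       ? Item
                                                                                       : D
                                                                                     : never;
                                                                                   function handleResponse<T>(response: T, onValue: (value: UnwrapResponse<T>) => void): void {
                                                                                     // Runtime behaviour is irrelevant here; the example is about the types.
                                                                                     const data = (response as { data?: unknown }).data;
                                                                                     const values = Array.isArray(data) ? data : [data];
                                                                                     for (const value of values) onValue(value as UnwrapResponse<T>);
                                                                                   }
                                                                                   // Fine: `value` is inferred as `number`.
                                                                                   handleResponse({ data: [1, 2, 3] }, (value) => console.log(value.toFixed(2)));
                                                                                   // Misspell the property and the error talks about the conditional type's
                                                                                   // fallthrough (`never`) rather than "you probably meant `data`":
                                                                                   //   handleResponse({ payload: [1, 2, 3] }, (value) => console.log(value.toFixed(2)));
                                                                                   //   // error: Property 'toFixed' does not exist on type 'never'.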

                                                                                  1. 2

                                                                                    It’s library code, and library code which tries to present a nice interface to end users often has to jump through some hoops to make that happen.

                                                                                    This is definitely where I’ve seen the most complexity with TS. It’d be a big win if the errors could be improved, because oftentimes the alternative of looking at library type definitions can be overwhelming (at least for me, a novice Typescripter). I always think “there’s a learning opportunity here” but then I also just want to fix my error and move on, so it can be hard to balance.

                                                                              1. 2

                                                                                The only thing I’ve learned to help me understand the giant and impenetrable TS errors is to read the last part first. Often there’s a straightforward clue there. I’d love to see the errors improved.

                                                                                1. 8

The article has some points. Does anyone have good examples of software projects that ship their own SELinux policies?

                                                                                  1. 7

Very few, because selinux is more of a system-wide approach, so app-on-Fedora may have different settings than app-on-Ubuntu, etc. Even docker doesn’t ship its own, and that would be the closest qualifying one, I think.

AppArmor policies are more commonly included.

                                                                                    1. 2

                                                                                      I think MySQL does

                                                                                    1. 2

                                                                                      The test stuff reminds me a lot of Mocha. Is the hope that people building applications with Node will standardize on this? Or is it mostly there for testing Node itself?

                                                                                      1. 33

A title describing the same problem from a different angle would be “The mess we’ve gotten ourselves into with single-page-applications”

                                                                                        1. 6

How about “The proliferation of JavaScript and our failure to prevent servers from acquiring vast stockpiles of such code”

                                                                                          1. 4

                                                                                            Can you elaborate? Classic SPAs don’t have this problem because all their functions are “client colored” (to borrow the terminology of the post).

                                                                                            1. 7

I guess the answer is that classic SPAs are good until you need some SEO, which is probably very common. Hence SSR. Although technically speaking SPAs per se don’t need SSR (maybe for performance, but that shouldn’t be an issue if things were developed correctly by default, I’d say).

                                                                                              1. 15

                                                                                                I was thinking the same thing. The title could easily be “The mess spawned by organizing the web economy around a search monopoly”.

                                                                                                1. 9

IMO, this is the wrong intuition. Categorically, pages that need to be SEO-optimized are those that are informational. You don’t need SEO for a desktop app, nor would you need it for a web app, because a web app is the same thing but distributed through the browser (but sandboxed, not requiring users to download executables, and on a highly portable platform available on almost every OS and architecture). These two concepts are not the same thing despite both being delivered through the browser; you shouldn’t use an SPA tech stack for a basic page, because information-category pages don’t require the same shared state management and device feature APIs that an application might. I can use Wikipedia from a TUI browser because it’s 95% information. It was the exact same issue in the Flash days of not using the right tech, and society has permanently lost some content from its internet archive.

So it’s not “when you need SEO”; SEO should be a requirement from the get-go, helping you choose between a static site and a dynamic, multi-page application where the server always does the rendering.

The problem is the tooling. Instead of the NPM community having an intuition about the right tool for the job and stating “do not use this tool for your static content”, we have tools that try to solve everything and hide the mountains of complexity that should have scared devs away from the complex solution and into the simple one. It should be hard to make a complex pipeline like that of server-side rendering for a SPA. And that easy tooling is riddled with bugs, megabytes of node_modules, and might invite you to start involving more complexity with tech such as ‘cloud workers’, but people don’t find out until they are way too deep in the Kool-Aid. Many don’t seem to see this issue because influencers are pushing this stuff to get GitHub stars and have, ironically, gotten all of the top ranks when searching for a solution (or people were asking the wrong questions without knowing).

                                                                                                2. 3

                                                                                                  Not the poster you’re responding to but it might be because SSR is a fairly natural leap from SPA-style apps. They might also be implying that it’s my fault, which would be nice, but unfortunately isn’t the case.