1. 2

    Well no, he’s not using any algorithms to paint, shadertoy-style. What a letdown.

      1. -1

        FYI, I am flagging this, as the full drewdevault.com domain was banned from lobste.rs. I expect the admins to remove your post or ban you for posting those blog posts.

        1. 5

          Flagging it as what? None of the categories is appropriate. I find it appalling that you want a post deleted or a person banned for posting something relevant.

          1. 4

            I wouldn’t expect a ban if someone didn’t know. It’s probably confusing to post something from someone who can’t respond, however.

            1. 3

              Why was it banned? I don’t see a mention of the ban anywhere in the rules…

              1. 3

                It’s among the top 3 results here:

                https://lobste.rs/moderations?moderator=pushcx&what%5Bdomains%5D=domains

                However, the ban only covers top-level submissions; links in comments are fine.

              2. 1

                Sorry.

                I knew this domain was banned for submissions, but I thought links to specific (on-topic) posts were OK.

                Btw: I just upvoted your explanation because I appreciate it :)

                1. 0

                  Just to be clear, I am being annoying on purpose. I don’t like that the admin bans an entire domain on a power trip, for no reason other than a personal problem with the author (who isn’t even the one posting the links to that domain on lobste.rs). I thought this website was different and community moderated, but sadly it isn’t, and the moderation suffers from all the same problems as everywhere else. I don’t actually see anything wrong with the domain, its content, or your comment. I just don’t like that the admin is being petty; if he wants to act like this, he should manage his own RSS subscriptions and not this website.

            1. 7

              Software is eating the world but a lot of it is, in the end, completely inconsequential. I’m willing to bet people would lead far more fulfilling and less stressful lives if they deleted most of the apps they had on their phones.

              If most people lead more stressful lives because of the apps on their phones, then software is not “completely inconsequential”; quite the opposite.

              1. 2

                It’s “inconsequential” from the “collapse of civilization” perspective; no civilization is going to collapse if King stops being able to maintain Candy Crush.

              1. 4

                Bitcoin was always meant to be private; it’s just that the project has barely evolved since ~2016 (that’s the last time miners voted on a BIP, afaik), and now we’re in this sad state where we have to pay $10 to send one transaction.

                There’s no point responding about the negatives of proof of work schemes - that has been discussed countless times now. Hopefully some alternative with a decent user base will actually work this year (looking at you, eth 2.0).

                1. 8

                  the negatives of proof of work schemes

                  I think proof-of-wasting-energy is pretty bad too, but if it is used, it should at least be like Monero’s RandomX, which tries to discourage people from creating dedicated mining hardware.

                  1. 6

                    BTC was never meant to be private… what are you talking about? The whole point is a public ledger that can be audited. Despite its other issues, this is one area it excels in. Everyone knows who has what at any given time. That’s the point.

                  1. 5

                    This is how I noticed that 1and1, a huge company, sold my email address to spammers. Or worse, their database was leaked. Needless to say, they acted as if I had reused that unique address somewhere else. You’ll be glad to be able to blacklist that address when it happens.

                    1. 14

                      TL;DW: Avoid putting if blocks in many places inside a function. Instead:

                      1. Use preconditions at the start of the function (if(!thing) return ...).
                      2. Use switch blocks to handle all possible cases, assuming your language doesn’t compile when you’re missing cases.

                      I lied about the second point. The presenter instead argues for polymorphism, which is more bug-prone than anything (the third point exists to avoid polymorphism-specific bugs).
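
                      To make the two points above concrete, here is a minimal TypeScript sketch (types and names are made up for illustration): an early-return precondition plus a switch that the compiler checks for exhaustiveness via a `never` assignment.

                      ```typescript
                      type Shape =
                        | { kind: "circle"; radius: number }
                        | { kind: "square"; side: number };

                      // Precondition at the top: bail out early instead of nesting ifs below.
                      function area(shape: Shape | null): number {
                        if (!shape) return 0;

                        // Exhaustive switch: adding a new Shape variant without a matching
                        // case turns the `never` assignment in `default` into a compile error.
                        switch (shape.kind) {
                          case "circle":
                            return Math.PI * shape.radius ** 2;
                          case "square":
                            return shape.side ** 2;
                          default: {
                            const unreachable: never = shape;
                            throw new Error(`unhandled shape: ${JSON.stringify(unreachable)}`);
                          }
                        }
                      }
                      ```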

                      1. 2

                        The same effect can be had using polymorphism and abstract methods, where the language can detect unimplemented abstract methods at compile time. That works better than switch in many cases IMO, but not all. Most importantly, polymorphism and abstract methods group implementations by category in the source, while switch groups them by action.

                        I just looked at some client code that issues nine types of server requests. Each request can fail in two ways, can succeed, and can become unnecessary. Using abstract methods means that each of the nine request subclasses has to implement both kinds of failure (or else the compiler will stop everything), and that the implementations for the two kinds sit together.
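
                        A minimal sketch of that layout (class and method names are hypothetical, not taken from the actual client code): every subclass is forced to implement both failure handlers, and the two implementations end up next to each other.

                        ```typescript
                        // The compiler rejects any subclass that forgets one of these handlers.
                        abstract class ServerRequest {
                          abstract onNetworkFailure(error: Error): void;
                          abstract onRejected(reason: string): void;
                          abstract onSuccess(body: string): void;
                          abstract onUnnecessary(): void;
                        }

                        // One of the nine hypothetical request types; both failure kinds are
                        // implemented together, grouped by category rather than by action.
                        class FetchProfileRequest extends ServerRequest {
                          onNetworkFailure(error: Error): void { /* schedule a retry */ }
                          onRejected(reason: string): void { /* surface the reason to the user */ }
                          onSuccess(body: string): void { /* update the local profile */ }
                          onUnnecessary(): void { /* drop silently */ }
                        }
                        ```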

                        1. 2

                          Strange, for me the page works but not the archive. I have made a new snapshot on the web archive, just in case: https://web.archive.org/web/20201218113458/https://www.reuters.com/article/us-usa-cyber-breach-idUSKBN28R2ZJ

                        1. 23

                          I feel Hitchens’s razor applies here: “what can be asserted without evidence can also be dismissed without evidence.” There is almost no information in this thing. Hell, it doesn’t even say what this “keystone” thing is; it’s just a vague anecdote from an unknown person.

                          It doesn’t really pass a sniff test: some process is running but Activity Monitor doesn’t show anything? And removing Chrome doesn’t fix it, and there’s still a mysterious undetectable process siphoning CPU to do … what exactly? From what I can gather it’s just some autoupdater tool, why would this intentionally use a lot of CPU? It just doesn’t make any sense.

                          But yeah, “Google bad”, right? I wish folks would stop jumping the gun on every vague conspiratorial story about these sorts of companies. Yeah, I don’t care much for Google either, but that doesn’t mean they’re secretly installing rootkits to make your Macs slow for dark mysterious reasons.

                          I just flagged it as off-topic, because there is no actual content about computing there.

                          1. 3

                            From what I can gather it’s just some autoupdater tool, why would this intentionally use a lot of CPU?

                            If the autoupdater were Windows Update, you wouldn’t have asked this question ;)

                          1. 7

                            For fairness, we should find some way to include Dream’s perspective.

                            My perspective on his perspective is that he goes through a lot of handwaving and psychological arguments to explain his situation. The speedrun team’s paper has a basic statistical argument which convinces me that something is unexplained, but I don’t feel like Dream has an explanation. But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                            In a relative rarity for commonly-run games, the Minecraft speedrunning community allows many modifications to clients. It complicates affairs that Dream and many other runners routinely use these community-approved modifications.

                            1. 5

                              But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                              This is the argument that always confuses me. At the end of the day, Minecraft is just some code running on someone else’s computer. Recorded behavior of this code is extremely different from what it should be. There are about a billion ways he could have modified the RNG, even live on stream with logfiles to show for it.

                              1. 1

                                I like to take a scientific stance when these sorts of controversies arise. When we don’t know how somebody cheated, but strongly suspect that their runs are not legitimate, then we should not immediately pass judgement, but work to find a deeper understanding of both the runner and the game. In the two most infamous cheating controversies in the wider speedrunning community, part of the resolution involved gaining deeper knowledge about how the games in question operated.

                              2. 3

                                But without a clear mechanism for how cheating was accomplished

                                Are you asking for a proof of concept of how to patch a Minecraft executable or mod to get as lucky as Dream did?

                                1. 3

                                  Here’s one:

                                  • open the minecraft 1.16.4.jar in your choice of archive program
                                  • go to /data/minecraft/loot_tables/gameplay/piglin_bartering.json
                                  • increase the weight of the ender pearl trade
                                  • delete META-INF like in the good old days (it contains checksums)
                                  • save the archive

                                  Anyone as familiar with Minecraft as Dream would know how to do this.
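
                                  For a rough idea of what that edit looks like, here is a simplified sketch of the loot table’s shape, written as a TypeScript literal. This is illustrative only, not the verbatim vanilla file; field names and values are approximate.

                                  ```typescript
                                  // Simplified sketch of piglin_bartering.json: each entry's `weight`
                                  // sets its relative probability, so inflating the pearl entry's
                                  // weight skews the "RNG" without touching any code.
                                  const piglinBartering = {
                                    pools: [{
                                      rolls: 1,
                                      entries: [
                                        // weight inflated purely for illustration
                                        { type: "minecraft:item", name: "minecraft:ender_pearl", weight: 500 },
                                        { type: "minecraft:item", name: "minecraft:iron_nugget", weight: 10 },
                                        // ...the other trades...
                                      ],
                                    }],
                                  };
                                  ```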

                                2. 2

                                  But without a clear mechanism for how cheating was accomplished, it’s premature to conclude anything.

                                  We have a clear mechanism: he modded his game. We know because, when he was asked for game logs, he deleted them. Just from the odds alone, he is 100.00000000% guilty.

                                  1. 3

                                    As the original paper and video explain, Minecraft’s speedrunning community does not consider modified game clients to be automatically cheating. Rather, the nature of the precise modifications used is what determines cheating.

                                    While Dream did admit to destroying logs, he also submitted supporting files for his run. Looking at the community verification standards for Minecraft speedruns, it does not seem like he failed to follow community expectations. It is common for speedrunning communities to know about high-reliability verification techniques, like input captures, but not to require them. Verification is just as much about social expectations as about technical choices.

                                    From the odds alone, Dream’s runs are probably illegitimate, sure, but we must refuse to be 100% certain, due to Cromwell’s Rule; if we are completely certain, then there’s no point in investigating or learning more. From the paper, the correct probability to take away is 13 nines, which is relatively high. And crucially, this is the probability that our understanding of the situation is incomplete, not the probability that he cheated.

                                    1. 4

                                      But you said there’s no clear mechanism for how cheating was accomplished. Changing the probability tables through mods is a fairly clear and simple mechanism, isn’t it?

                                1. 2

                                  This just moves the risk from Cloudflare to Cloudflare’s partners. Will they sell query logs?

                                  1. 2

                                    I mean, that depends.

                                    Cloudflare isn’t the only DoH player in the game (https://dnscrypt.info/public-servers/ lists DoH- and DNSCrypt-supporting servers), and the source to their proxy is released. You could set up a community DoH proxy that forwards to Quad9 and offer it to a bunch of folks. Your queries wouldn’t touch Cloudflare in this case.
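
                                    As a toy sketch of that idea (not the released proxy mentioned above; this assumes Node 18+ and that Quad9’s DoH endpoint is https://dns.quad9.net/dns-query, and it skips TLS, caching, and rate limiting):

                                    ```typescript
                                    import http from "node:http";

                                    // Assumed upstream; swap in any resolver from dnscrypt.info's list.
                                    const UPSTREAM = "https://dns.quad9.net/dns-query";

                                    // Minimal RFC 8484 forwarding proxy: clients send DoH POSTs here,
                                    // and only this box ever talks to the upstream resolver.
                                    http.createServer(async (req, res) => {
                                      if (req.method !== "POST" ||
                                          req.headers["content-type"] !== "application/dns-message") {
                                        res.writeHead(400).end();
                                        return;
                                      }
                                      const chunks: Buffer[] = [];
                                      for await (const chunk of req) chunks.push(chunk as Buffer);
                                      const upstream = await fetch(UPSTREAM, {
                                        method: "POST",
                                        headers: { "content-type": "application/dns-message" },
                                        body: Buffer.concat(chunks),
                                      });
                                      res.writeHead(upstream.status, { "content-type": "application/dns-message" });
                                      res.end(Buffer.from(await upstream.arrayBuffer()));
                                    }).listen(8053);
                                    ```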

                                  1. 13

                                    I didn’t write the blog post and I didn’t write the code, but I’m in that same team. Happy to answer (or redirect) any questions you might have :)

                                    1. 3

                                      To confirm, this replaces the need to use HTTPS Everywhere?

                                      1. 3

                                        It works a bit differently. HTTPS Everywhere in its default mode is less aggressive and only upgrades requests for a list of predefined websites. It’s similar to HTTPS Everywhere’s stricter mode, though.

                                      2. 3

                                        When will this be enabled by default, and when will http: requests be completely denied? I’m asking not because I want to see that, but because I’m afraid of that happening and shutting me out of certain websites entirely.

                                        1. 9

                                          I can’t see that happening at all. Too many requests are still HTTP. You are not alone :)

                                          1. 8

                                            For the small number of websites that don’t yet support HTTPS, Firefox will display an error message that explains the security risk and asks you whether or not you want to connect to the website using HTTP.

                                            We can bypass improper HTTPS (expired, self-signed, wrong domain name) errors. Why do you think we would not be able to bypass this new error?

                                            1. 4

                                              Based on Mozilla’s history, in a future version, this setting will be enabled by default and only accessible through about:config, like JavaScript.

                                              1. 3

                                                Unfortunately, the programmers behind web browsers are known for pulling stuff like this. The last time I tried to use WebSockets and my webcam over an insecure connection for testing purposes, I quickly realized that they really hate their users: disabling some of the security options is just not possible.

                                              2. 1

                                                Thank you for adding this mode. I’m not sure whether we can turn it on at work but it’s great to have it available.

                                              1. 1

                                                TL;DR: The conflict resolution algorithm seems to be app-specific code that runs in the database. I didn’t get more details; the documentation is unclear.

                                                Pricing is ridiculous.

                                                1. 3

                                                  What would you price this at? It looks high for my company’s current scale [and at this point we want to own the whole stack anyways], but an earlier Notion might have found this offering attractive.

                                                  1. 2

                                                    I realize you’re being derisive, but in a sense, yeah:

                                                    The fact that conflict resolution is handled by running normal, arbitrary functions serially against the database on client and server is the point. Other systems either restrict you to specialized data structures that can always merge (e.g., Realm), or force you to write out-of-band conflict resolution code that is difficult to reason about (e.g., Couchbase). In Replicache you use a normal transactional database on the server, and a plain old key/value store on the client. You modify these stores by writing code that feels basically the same as what you’d write if you weren’t in an offline-first system. It’s a feature.

                                                    ===

                                                    TL;DR: Replicache is a versioned cache you embed on the client side. Conflict resolution happens by forking the cache and replaying transactions against newer versions.

                                                    When a transaction commits on the client, Replicache adds a new entry to the history, and the entry is annotated with the name of the mutation and its arguments (as JSON).

                                                    During sync, Replicache forks the cache and sends pending requests to your server, where they are handled basically like normal REST requests by your backend. You have to defensively handle mutations server-side (but you were probably already doing that!). Replicache then fetches the latest canonical state of the data from your server, computes a delta from the fork point, applies the delta, and replays any still pending mutations atop the new canonical state. Then the fork is atomically revealed, the UI re-renders, and the no-longer needed data is collected.
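
                                                    Here is a schematic sketch of that rebase step (illustrative TypeScript, not Replicache’s actual API; names are made up):

                                                    ```typescript
                                                    // App-defined mutators: ordinary code against a key/value store.
                                                    type KV = Map<string, unknown>;
                                                    type Mutation = { name: string; args: unknown };

                                                    const mutators: Record<string, (db: KV, args: unknown) => void> = {
                                                      incrementCounter(db, args) {
                                                        const { key, delta } = args as { key: string; delta: number };
                                                        const current = (db.get(key) as number | undefined) ?? 0;
                                                        db.set(key, current + delta);
                                                      },
                                                    };

                                                    // On sync: fork from the new canonical state pulled from the server,
                                                    // then replay mutations the server hasn't acknowledged yet.
                                                    function rebase(canonical: KV, pending: Mutation[]): KV {
                                                      const fork = new Map(canonical);              // fork the cache
                                                      for (const m of pending) mutators[m.name]?.(fork, m.args);
                                                      return fork;                                  // atomically revealed to the UI
                                                    }
                                                    ```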

                                                    It is not rocket science, but it is a very practical approach to this problem informed by years of experience.

                                                    As for the price, it’s weird. Teams that have struggled with this have basically no problem at all with the price; if anything, they seem to think it’s too low.

                                                  1. 5

                                                    The link that should have been posted: https://ziglang.org/download/0.7.0/release-notes.html

                                                    1. 4

                                                      It’s worth pointing out that these release notes are incomplete (compared to the previous ones that were exhaustive).

                                                      The upcoming 0.7.1 should fix that: https://ziglang.org/download/0.7.0/release-notes.html#These-Release-Notes-are-Incomplete

                                                    1. 2

                                                      Idle thought: I would like a “web reboot” that is data-driven … Basically the client could measure how well the page is performing, and the servers would get feedback on it, and (in some fantasy world) the page authors would adjust to that.

                                                      For some color on that, I actually like images and videos on the web. And mathematical formulas, and SVG diagrams like Richard Hipp’s recent pikchr:

                                                      https://pikchr.org/home/doc/trunk/doc/examples.md

                                                      But I think it would be cool if there was a somewhat compatible but stripped-down browser that enforces a network transfer time limit and a rendering time limit of, say, one second.

                                                      And then it would send back a “reverse error” like an “HTTP 600” if that time is exceeded. It would just stop downloading or stop rendering.
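
                                                      A toy sketch of the client side of that (the x-client-status report is entirely hypothetical; no real browser or server speaks it):

                                                      ```typescript
                                                      // Fetch a page under a hard 1-second budget; if the budget is blown,
                                                      // abort and report a made-up "reverse error" back to the origin.
                                                      async function fetchWithBudget(url: string, budgetMs = 1000) {
                                                        const controller = new AbortController();
                                                        const timer = setTimeout(() => controller.abort(), budgetMs);
                                                        try {
                                                          const res = await fetch(url, { signal: controller.signal });
                                                          return await res.text();
                                                        } catch {
                                                          // Hypothetical report so the site owner sees "600" in their logs.
                                                          await fetch(url, {
                                                            method: "HEAD",
                                                            headers: { "x-client-status": "600" },
                                                          }).catch(() => {});
                                                          return null;
                                                        } finally {
                                                          clearTimeout(timer);
                                                        }
                                                      }
                                                      ```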


                                                      The obvious problem is the incentives… Most people would just wait longer than 1 second. They want the free content, and get addicted to the free content, hence suffering through all the ads.

                                                      But I guess the end result is that you could browse https://www.oilshell.org/ with such a browser and I wouldn’t have to change my content :)

                                                      And for sure I would look in the logs to see how many people were sending back the “data transfer aborted” and “rendering aborted” codes …


                                                      I guess a different variation on this idea is if the client sends the codes to some kind of reputation service. (There is an obvious privacy problem there, but probably some mitigations too.)

                                                      And you could have a search engine that uses that as a signal … serving up pages with better latency.

                                                      In fact I thought Google at one point used latency as a signal for ranking, but I find that hard to square with the state of the web right now … it must be a very weak signal. I guess the problem is that sometimes high quality content is on a terrible page. That seems to be how the economics have worked out.

                                                      So again this is basically a “soft migration” rather than a “web reboot”.

                                                      1. 1

                                                        Basically the client could measure how well the page is performing, and the servers would get feedback on it

                                                        That’s what Google claims to do when ranking websites. But it would downrank websites that typically bring in a lot of ad revenue, so they don’t actually do it :(

                                                        In a fantasy world where we had competition in the search engine space, website authors would probably adjust their pages, just like they do currently by creating lots of spam filler content for SEO.

                                                      1. 5

                                                        Fun fact: some websites (IGN, off the top of my head) only show ads in readability mode. And just like websites with cookie popups, it’s a good reminder that you should go someplace else.

                                                        1. 5

                                                          I have mostly used second-hand hardware, but mostly Intel stuff is available, and the Spectre vulnerability made used computers far less worthwhile. I’ll buy brand-new AMD products when my computer feels unbearably slow for work (it’s getting there gradually with the ever-coming “fixes” for the Intel shortcuts; Visual Studio startup has become unbearably slow on the same machine in the last 3 years).

                                                          1. 2

                                                            I look forward to the current crop of T14s ThinkPads with AMD CPUs becoming available on the refurbished market. That will be my upgrade.

                                                            1. 2

                                                              A friend got a T14 as a work notebook, and is pretty dissatisfied with it.

                                                              • bad build quality
                                                              • bad drivers/software support
                                                              • terrible light detection logic for the adaptive screen backlight (the screen lights up in a dark room, the sensor detects the screen’s own light, thinks the environment is brighter, so it increases the backlight… until it is at max)
                                                              1. 3

                                                                terrible light detection logic for the adaptive screen backlight (the screen lights up in a dark room, the sensor detects the screen’s own light, thinks the environment is brighter, so it increases the backlight… until it is at max)

                                                                I had an ancient Sony laptop that suffered from this exact problem! It comes down to the position of the light sensor – which means it isn’t even easily fixable. That cycle is brutal and consistent.

                                                                1. 1

                                                                  That’s good to know. Is it an AMD model? I’m also aiming at the T14s, the slimmer version.

                                                                  That’s also one thing I like and outlined in the article: waiting for refurbished models gives Linux distributions and kernel devs time to fix these issues.

                                                                  1. 4

                                                                    Friend here. This is the most boring laptop I’ve ever had. Except when the charger said it was charging but the battery actually discharged. Of course you can’t remove the battery without voiding the warranty, so you have to “disconnect” it in the BIOS menu. Just buy a Latitude. I can answer specific questions if you’re interested.

                                                                    1. 3

                                                                      Uh, I hate my work-provided Latitude. It gets all puffed up and spinny if I even look like I’m gonna open IntelliJ. Meanwhile my old bucket T420 doesn’t break a sweat.

                                                                      … though it is Windows vs. Linux, so maybe it’s about that.

                                                                    2. 3

                                                                      ThinkPad X395 here, which is more or less the same board as the T14s but with the prior generation CPU (the T495s and X395 were the same).

                                                                      I’m very satisfied with my laptop, which I’m using full time. I don’t use Windows much (mainly Arch Linux), but when I do, I haven’t experienced the bad drivers mentioned by the parent.

                                                                      No auto brightness adjustment on this model (that I am aware of).

                                                                2. 2

                                                                  re Spectre: Have you considered running multiple machines? It takes pretty sensitive equipment to pull information off powerline disturbances.

                                                                  1. 2

                                                                    On the work machine I need to access webpages (unless I want to re-type long passages of text sometimes), and the mitigations are needed when untrusted code is executed (that is, webpages nowadays).

                                                                    1. 1

                                                                      I’m not sure I’d actually recommend it, but RDP / remote X11 let you run a web browser on one machine and display the output on another, with copy-and-paste working between the browser and the other stuff.

                                                                      1. 1

                                                                        Buying a brand-new machine and licenses is cheaper than the lost productivity.

                                                                        1. 1

                                                                          I think Synergy might be a better solution for this since it only shares the pointer and clipboard instead of having to send the entire browser window across the wire.

                                                                    2. 2

                                                                      Visual Studio’s startup getting slow is not because of Spectre, but because of the software itself. A “fix” that worked for me is unplugging all hard drives, including internal ones, and only keeping SSD/NVMe drives connected. For some reason, Windows likes to wait on all drives even if they’re not at all related to the task at hand. Another fix is disabling Windows Defender, at every boot. It routinely uses 90% CPU.

                                                                      1. 2

                                                                        Thanks for the info, but meh… I’d rather not do that :( Defender is already disabled (mostly), as it is mostly just a resource hog, or rather an additional attack vector (as much AV software is).

                                                                        This is pathetic, Microsoft should get their act together.

                                                                    1. 1

                                                                      I found this website from the following lobsters post: https://lobste.rs/s/5gcrxh/case_study_on_vanilla_web_development

                                                                      I always had trouble organizing my CSS, and I think it’s a pretty simple and concise way to do it.

                                                                          1. 2

                                                                            Making your own search engine comes with a lot of challenges:

                                                                            • There exists no open source web search engine. The best shot you have is using Lucene, diving deep to make it scale, adding PageRank support, etc.
                                                                            • Crawling is impossible: Cloudflare blocks all (non-bigtech) crawlers. https://commoncrawl.org/ is a tiny dataset, and just using Wikipedia’s dump and crawling its external links would give you better results.
                                                                            • It’s still too costly if you just want to use it yourself - you’d have to make it a business, and at that point you really need to worry about scaling; and remember, no open source solutions currently exist.
                                                                            1. 1

                                                                              I thought about that a few months ago. I came to the conclusion that the only way was to do a hybrid SE: meta search + collaborative.

                                                                              The meta part uses the API (or any privileged access) of some reference websites (Wikipedia, SO, official websites, …)

                                                                              The collaborative part is a web browser plug-in that reads any page you visit, builds the inverted index, and sends it to the SE pipelines. The advantage is that you bypass any Cloudflare/captcha because you are a real human. The human is the crawler.

                                                                              Problem to be solved: privacy. How do you anonymize data that reveals your browsing history?
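
                                                                              A toy sketch of what such a plug-in could build and ship (names and URLs are made up; it ignores stemming, stop words, and the privacy question above):

                                                                              ```typescript
                                                                              // Toy inverted index: term -> set of URLs where it appears.
                                                                              type InvertedIndex = Map<string, Set<string>>;

                                                                              function indexPage(
                                                                                index: InvertedIndex,
                                                                                url: string,
                                                                                text: string,
                                                                              ): void {
                                                                                const terms = text.toLowerCase().match(/[a-z0-9]+/g) ?? [];
                                                                                for (const term of terms) {
                                                                                  if (!index.has(term)) index.set(term, new Set());
                                                                                  index.get(term)!.add(url);
                                                                                }
                                                                              }

                                                                              // What one visit would contribute before being sent to the pipeline.
                                                                              const index: InvertedIndex = new Map();
                                                                              indexPage(index, "https://example.com/a", "Python socket hangs on recv");
                                                                              console.log(index.get("socket")); // Set { "https://example.com/a" }
                                                                              ```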

                                                                              About the PageRank algorithm: let users decide which pages are relevant (through the plug-in) by voting. The plug-in may ask: “Is this page relevant to your terms: ‘Python’ ‘socket’ ‘hang’?”

                                                                              I have no idea what the result would be. However, I’m sure it’d be pretty fun to run that.

                                                                              1. 1
                                                                                • That is why I am working on one.

                                                                                • Cloudflare cannot block every IP, and people can spoof user agents; my project is to help people host their own search engine.

                                                                                • Too costly at scale, but not necessarily for a personal search engine.