1. 2

    Nice work.

    However, I don’t like the bar graphs on page 70. I thought: “Nice, over half the ROP gadgets are gone! But wait… the code size doubled? That can’t be right…” Only then did I realize that the y-axis doesn’t start at zero, so I can’t estimate the relative change by just visually comparing the bars.

    1. 18

      I agree that Linus is sometimes overly rude and unnecessarily personal. On the other hand, one immediately knows how strongly Linus feels about which issues. This has value and is probably why many people are ready to defend this tone. But maybe there is a middle ground.

      So while I like Gary’s proposed version in general, I think it was toned down too much and could use more assertiveness/confidence. Small example:

      Original:

      I’m not talking about the changes themselves - I can live with them. But the rationale is pure and utter garbage, and dangerously so.

      Gary’s version:

      These changes look OK, but I’m not sure about the rationale.

      My attempt:

      I can accept the changes themselves, but I absolutely disagree with the rationale.

      1. 13

        I like your version better as well. (I wrote the post.)

      1. 4

        I suggest the AI tag for this.

        It looks like what happened at some point in Idiocracy.

        1. 3

          After your suggestion I added the AI tag. Later user suggestions led to its automatic removal again.

          I think a case can be made for either variant: On one hand the story itself does not involve AI. On the other hand AI surely is a field where blind trust in the results can throw people into similar or worse situations.

          1. 0

            You can see that the AI tag is appropriate here if you know what an expert system is.

            The software terminated his employment without human intervention, and then proceeded to block his badge, send mails to get him escorted out of the building, and so on…

            Since no reason was given in the comments, I can only speculate about their reasoning.

            Maybe any criticism of an AI application faces strong opposition from interested developers.
            The same removal occurred to an article about facial recognition (an application of computer vision) that I submitted recently: despite being evidently on-topic, the AI tag was removed at users’ suggestion.

            Probably a poor man’s SEO to minimize ethical and/or political reflection about the topic.

            Or maybe they are just “last-minute AI experts” who are genuinely unaware of all the past research…

            Who knows?

        1. 4

          I wasn’t sure if practices is the best tag for this. Overall I think it’s relevant to remind us not to over-automate things and always leave enough options for manual intervention. Perhaps https://lobste.rs/s/5frjuu/youtube_blocks_blender_videos_worldwide also was a result of “too much” automation?

          1. 3

            Manual intervention, yes, but above all observability. It’s ridiculous that no one could figure out what was happening.

            1. 2

              It’s ridiculous that no one could figure out what was happening.

              It’s not ridiculous, it’s a well-known phenomenon dismissively called “the paradox of automation” (not a paradox at all, just a fundamental UI law: users are humans).

          1. 1

            I don’t get the difference between ‘Publish/Subscribe’ and ‘Custom Events’ in the article. To me it just looks like slightly different syntax for the same thing. What am I missing?

            1. 1

              Just found a good explanation of the difference between a dispatcher and pub-sub:

              This is different from generic pub-sub systems in two ways: Callbacks are not subscribed to particular events. Every payload is dispatched to every registered callback. Callbacks can be deferred in whole or part until other callbacks have been executed.

              https://facebook.github.io/flux/docs/dispatcher.html
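
              In other words, a Flux dispatcher broadcasts every payload to every registered callback (nothing subscribes to a named event), and a callback can defer part of itself until others have run. A minimal sketch of that idea in TypeScript (illustrative only, not the real Dispatcher implementation):

              type Payload = { type: string; [key: string]: unknown };

              class MiniDispatcher {
                private callbacks = new Map<string, (p: Payload) => void>();
                private handled = new Set<string>();
                private pending: Payload | null = null;
                private nextId = 0;

                // Note: no event name -- every callback sees every payload.
                register(cb: (p: Payload) => void): string {
                  const id = `cb_${this.nextId++}`;
                  this.callbacks.set(id, cb);
                  return id;
                }

                dispatch(payload: Payload): void {
                  this.pending = payload;
                  this.handled.clear();
                  for (const id of this.callbacks.keys()) this.invoke(id);
                  this.pending = null;
                }

                // Defer the rest of a callback until the listed callbacks have run.
                waitFor(ids: string[]): void {
                  for (const id of ids) this.invoke(id);
                }

                private invoke(id: string): void {
                  if (this.handled.has(id) || this.pending === null) return;
                  this.handled.add(id);
                  this.callbacks.get(id)!(this.pending);
                }
              }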

              1. 1

                The difference is that with Custom Events you are working with the Document object and each event is attached to Document, so if you have thousands of them it may cause performance problems. In the Publish/Subscribe approach you are working with a plain object, and it is easier to unsubscribe everything in one place. Conceptually, both approaches are the same.
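
                For anyone who wants to see the two patterns side by side, here is a minimal sketch (browser environment assumed; all names are made up):

                // Custom Events: document acts as the shared event bus.
                document.addEventListener("user:login", (e) => {
                  console.log("custom event payload:", (e as CustomEvent).detail);
                });
                document.dispatchEvent(new CustomEvent("user:login", { detail: { id: 42 } }));

                // Publish/Subscribe: a plain object keeps its own topic -> callbacks
                // map, so all subscriptions live (and can be cleared) in one place.
                const pubsub = {
                  topics: new Map<string, Array<(data: unknown) => void>>(),
                  subscribe(topic: string, cb: (data: unknown) => void) {
                    const cbs = this.topics.get(topic) ?? [];
                    cbs.push(cb);
                    this.topics.set(topic, cbs);
                  },
                  publish(topic: string, data: unknown) {
                    this.topics.get(topic)?.forEach((cb) => cb(data));
                  },
                  unsubscribeAll() {
                    this.topics.clear();
                  },
                };

                pubsub.subscribe("user:login", (data) => console.log("pub-sub payload:", data));
                pubsub.publish("user:login", { id: 42 });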

              1. 4

                Frequency deviations in Continental Europe including impact on electric clocks steered by frequency

                Continental European Power System has been experiencing, since mid-January, continuous significant power deviations due to shortage in supply from one transmission system operator of the interconnected system. All actions are taken by the TSOs of Continental Europe and by ENTSO-E to resolve the situation.

                The power deviations have led to a slight drop in the electric frequency. This in turn has also affected those electric clocks that are steered by the frequency of the power system and not by a quartz crystal: they show currently a delay of 5 minutes. TSOs will set up a compensation program to correct the time in the future.

                (source: https://www.entsoe.eu/news-events/announcements/announcements-archive/Pages/News/Frequency-deviations-in-Continental-Europe-including-impact-on-electric-clocks-steered-by-frequency.aspx)

                So this is probably not about syncing absolute time, but about using the power grid’s frequency instead of a quartz crystal for the internal clock of a device. It’s probably cheaper to build devices that way.
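
                As a back-of-the-envelope check (the numbers here are illustrative assumptions, not official figures), a clock that counts mains cycles runs slow in proportion to the average frequency deficit:

                // Assumed numbers, for illustration only: a nominal 50 Hz grid
                // and an average of 49.996 Hz since mid-January (~45 days).
                const nominalHz = 50;
                const averageHz = 49.996;
                const elapsedSeconds = 45 * 24 * 3600;

                // A mains-synchronous clock advances with the cycles it counts,
                // i.e. at averageHz / nominalHz of real time.
                const driftSeconds = elapsedSeconds * (1 - averageHz / nominalHz);
                console.log(`${(driftSeconds / 60).toFixed(1)} minutes behind`); // ~5.2

                That lands right around the 5-minute delay from the announcement.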

                Allegedly one or two power companies aren’t properly fulfilling their contracts. However, official sources don’t want to name the guilty parties.

                For a live view of the grid frequency and the current time drift see https://www.swissgrid.ch/swissgrid/en/home/experts/topics/frequency.html

                1. 1

                  Update: Continuing frequency deviation in the Continental European Power System originating in Serbia/Kosovo: Political solution urgently needed in addition to technical

                  Quote:

                  The missing energy amounts currently to 113 GWh. The question of who will compensate for this loss has to be answered.

                1. 1

                  This seems like a niche begging for a product.

                  1. 2

                    You mean like Ubiquiti’s AmpliFi Teleport?

                    1. 1

                      Ubiquiti’s AmpliFi Teleport

                      I hadn’t seen that before. Looks very useful.

                  1. 1

                    I’ve read that some/most(?) Atom CPUs don’t have speculative execution or out-of-order execution. Is there a comprehensive list of x86_64 CPUs that have / don’t have those features?

                    1. 2

                      I believe the only remotely recent Intel chips that completely lack speculative/OoO features are the Atoms based on the first-gen Bonnell microarchitecture. That started off 32-bit-only, but some of them towards the end of the run do have x86-64 support, e.g. the Atom D5xx and S12xx.

                      1. 3

                        Do I understand this right? A website can include code that asks the mobile provider for subscriber data? The mobile provider sends this data back to the phone and the website can send the data back to somewhere else? So any app on the phone can access this API as well? So ads in websites/apps access it, too?

                        Some comments on HN paint a pretty unsettling picture, e.g. sales people calling you after visiting a site without entering any data (https://news.ycombinator.com/item?id=15477469)

                        1. 2

                          Apparently the data submitted to open.oneplus.net also contains geolocation (also suspected in the blog post, but I missed it the first time): https://forum.xda-developers.com/showpost.php?p=64497485&postcount=62

                          1. 2

                            Different venue, similar talk + Q&A and extra stories:

                            https://www.youtube.com/watch?v=e9ZWQ1nNLHk

                            1. 10

                              If you’re sensitive to latency and run Linux, try hitting Ctrl-Alt-F1, and do a little work in console mode at the terminal. (Ctrl-Alt-F7 to get back.)

                              For me this is a great illustration of how much latency there is in the GUI. Not sure if everyone can feel it, but to me console mode is more immediate and less “stuffy”.

                              (copy of HN comment)

                              1. 6

                                I notice this as well - the Linux console feels better, in the same way that playing CS:GO without the overhead of a desktop compositor feels more immediate.

                                I’ve also noticed that tmux adds a lot of latency to vim in the terminal, so I’ve been running gvim or nvim-qt recently.

                                1. 8

                                  Tmux adds a lot of latency? Especially after pressing ESC in Neovim? Try set -g escape-time 10 in your ~/.tmux.conf

                                  https://github.com/neovim/neovim/wiki/FAQ#esc-in-tmux-or-gnu-screen-is-delayed

                                  1. 3

                                    Oh, man, I’ve seen that but didn’t quite put it together (or didn’t think to look for a setting to adjust). The delay to get an ESC to “stick” was crazy long, with the result that I couldn’t get out of insert mode without resorting to: press escape, sit on hands, count to six, resume typing.

                                2. 5

                                  There are a whole lot more calculations to do in a graphical environment: align the characters on the grid, pick the font, compute font substitution (missing glyphs get rendered with other fonts), render every glyph from its vector format, compute antialiasing and hinting… and all of this on top of a framework, while it is either non-existent or built-in for text interfaces.

                                  A good compromise may be a bitmap terminal (blit, acme, sam…).

                                  1. 8

                                    Happily using bitmap fonts with xterm.

                                    Drawing vector fonts is so darn slow that most things that do it at all will cache the rendered glyphs.
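
                                    The shape of such a cache is simple; a sketch (the names and the rasterize callback are made up):

                                    type GlyphBitmap = Uint8Array;

                                    const glyphCache = new Map<string, GlyphBitmap>();

                                    function getGlyph(
                                      font: string,
                                      sizePx: number,
                                      codepoint: number,
                                      rasterize: (f: string, s: number, c: number) => GlyphBitmap,
                                    ): GlyphBitmap {
                                      const key = `${font}:${sizePx}:${codepoint}`;
                                      let bitmap = glyphCache.get(key);
                                      if (bitmap === undefined) {
                                        // Slow path, taken once per glyph: outlines, AA, hinting.
                                        bitmap = rasterize(font, sizePx, codepoint);
                                        glyphCache.set(key, bitmap);
                                      }
                                      return bitmap; // Fast path: a plain map lookup.
                                    }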

                                    1. 3

                                      I nominate st, the suckless terminal - http://st.suckless.org/ - It might not always be the absolute fastest terminal (I’ve not tested it), and it might not have every feature anyone could ever (not) want, like SIXEL and ReGIS Tektronix 4014 graphics, configurable logging, URL launching, user-tweakable selection behaviors and all that jazz that exists in xterm, but it is refreshingly simple, lightweight, and fast.

                                      1. 3

                                        And I like this terminal for this reason. The only two OSes I could not compile it on so far are Windows and Android.

                                        1. 2

                                          st becomes a whole lot less simple and lightweight once you configure your shell to always spawn a new tmux session for every terminal just to get scrollback. I can appreciate simplicity, but there comes a point where the system becomes a lot more complex because one tool is a bit too simple.

                                          1. 2

                                            Since we’re discussing efficiency, tmux scrolling is also inefficient. All the data in the back buffer needs to be resent to xterm. If I’m on a bad network link, I’ll sometimes start another connection, sans tmux, to run a command so that all the output gets saved in my local buffer and I can scroll it without latency.

                                            1. 1

                                              This is a valid point which 5 years of mosh use has made me forget about - for those who are not aware, with mosh, you aren’t sending data to the terminal as a stream of bytes, but instead synchronizing an “image” of the current terminal state and display, so there is never any “past” data to scroll back on.
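
                                              The core idea looks roughly like this (a sketch of state synchronization, not mosh’s actual protocol):

                                              type Screen = string[]; // one string per visible row

                                              // The server diffs the current screen against what the client
                                              // last acknowledged and sends only the changed rows. Rows that
                                              // scroll off the top simply stop existing -- there is no backlog.
                                              function diffScreens(acked: Screen, current: Screen) {
                                                const updates: Array<{ row: number; text: string }> = [];
                                                for (let row = 0; row < current.length; row++) {
                                                  if (current[row] !== acked[row]) {
                                                    updates.push({ row, text: current[row] });
                                                  }
                                                }
                                                return updates;
                                              }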

                                            2. 1

                                              I am sort of shocked that anyone uses the terminal scrollback in 2017 - I’ve been using tmux for almost 10 years now, but I was using xterm in combination with screen since the early-1990s X11R4 days and have had my systems configured this way for almost 30 years. I find using a terminal multiplexer actually removes complexity and massively increases productivity, and I have no idea how I’d operate without one.

                                              1. 3

                                                If I was using some floating window manager, having tmux for tiling would make sense. I use i3, so my window manager handles tiling in a much more flexible way than tmux ever could (not through any fault of tmux, it just cannot tile graphical applications), so tmux is mostly unnecessary unless I need some of the features related to multiplexing or persistent sessions.

                                          2. 2

                                            When I tried “alternative” OSes in VMs, they usually opened apps or responded to my typing instantly. They were that much faster in VMs than the bare-metal Linux system they ran on. My Windows systems were more responsive, too, back when I used them. I think it’s just the implementation of these GUIs slowing things down that much.

                                          1. 2

                                            The site has a bad rel=canonical and Lobsters followed it. The author has no contact info or links to other online presence so there’s nothing to be done.

                                            1. 3

                                              Modify Lobsters to not follow rel=canonical unless it can resolve the page with a 2xx status?
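
                                              Something like this, sketched in TypeScript (the idea only - Lobsters itself is a Rails app, and the helper name is made up):

                                              // Only honor rel=canonical if the canonical URL actually resolves.
                                              async function resolveCanonical(submitted: string, canonical: string): Promise<string> {
                                                try {
                                                  const res = await fetch(canonical, { method: "HEAD", redirect: "follow" });
                                                  if (res.status >= 200 && res.status < 300) {
                                                    return canonical; // canonical page exists, safe to follow
                                                  }
                                                } catch {
                                                  // DNS failure, timeout, etc. -- treat as unresolvable
                                                }
                                                return submitted; // fall back to the URL that was submitted
                                              }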

                                              1. 1

                                                Good idea, posted an issue.

                                          1. 9

                                            The next step could be advertising networks that aggregate data across stores, as in “customer f5d9ad in front of screen 3, seen earlier today looking at shop window of $erotic_store for 23 seconds, walking by $liquor_store, buying $foo at $bar, …”

                                            Or is this already happening, too?

                                            1. 5

                                              It probably is, yeah. The data is the product for a lot of companies.

                                              1. 2

                                                There’s some research into this; I remember one paper about using free, open store wifi (which many devices connect to automatically) to track where people walk when they enter a store.

                                                Here’s an article that’s similar but not quite what I’m talking about: https://www.theguardian.com/technology/datablog/2014/jan/10/how-tracking-customers-in-store-will-soon-be-the-norm

                                                1. 1

                                                  They’re doing it on the web; might as well do it in meatspace too.

                                                1. 7

                                                  Earlier examples of this problem:

                                                  1. 1

                                                    Does anybody know of a free recording of a good software engineering lecture/course with in-depth real-world examples?

                                                    1. 3

                                                      Analysis of the malware: https://objective-see.com/blog/blog_0x1D.html

                                                      The author also notes that the detection rate for the infected .dmg file was 0/55 on VirusTotal (2017-05-06 20:12:15 UTC) and 0/56 for the contained OSX/Proton malware.

                                                      VirusTotal links: .dmg file, malware’s persistent component

                                                      1. 8

                                                        To test it, try this:

                                                        printf 'GET /index.html HTTP/1.0\r\nAuthorization: Digest username="admin", realm="Digest:FFFF0000000000000000000000000000",  nonce="abcdefghijklmnopqrstuvxyzABCDEFG", uri="/index.html", response="", qop=auth, nc=00000001, cnonce="12345678"\r\n\r\n' | nc -v 192.168.0.42 16992
                                                        

                                                        Replace 192.168.0.42 with your target IP; this request will result in a 401. Look at the server’s “WWW-Authenticate:” header, adapt the values for realm and nonce, and try again.
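
                                                        If you’d rather script the realm/nonce round-trip, here’s a sketch (assuming Node 18+ run as an ES module, for fetch and top-level await; the values mirror the request above):

                                                        const target = "http://192.168.0.42:16992/index.html";

                                                        // First request: collect realm and nonce from the 401 challenge.
                                                        const first = await fetch(target);
                                                        const challenge = first.headers.get("www-authenticate") ?? "";
                                                        const realm = /realm="([^"]*)"/.exec(challenge)?.[1] ?? "";
                                                        const nonce = /nonce="([^"]*)"/.exec(challenge)?.[1] ?? "";

                                                        // Second request: the empty response="" is the whole point of the test.
                                                        const second = await fetch(target, {
                                                          headers: {
                                                            Authorization: `Digest username="admin", realm="${realm}", ` +
                                                              `nonce="${nonce}", uri="/index.html", response="", ` +
                                                              `qop=auth, nc=00000001, cnonce="12345678"`,
                                                          },
                                                        });
                                                        console.log(second.status); // anything but 401 means trouble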

                                                        1. [Comment removed by author]

                                                          1. 9

                                                            Rebase permanently destroys information, right?

                                                            Well, not really – it creates new commits that are altered versions of existing ones, but doesn’t delete the originals. (If the originals remain unreferenced by a tag or branch for long enough they’ll eventually get GCed, but if you do want to retain all the original information unmodified it’s easy enough to do so.)
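
                                                            A toy model of why, sketched in TypeScript (standing in for git’s object store; in practice, a branch or tag created before the rebase keeps the originals reachable):

                                                            interface Commit { id: string; parent: string | null; msg: string }

                                                            const store = new Map<string, Commit>(); // git's object database
                                                            const refs = new Map<string, string>();  // branch name -> commit id

                                                            function commit(parent: string | null, msg: string): string {
                                                              const id = `c${store.size}`; // stand-in for a SHA
                                                              store.set(id, { id, parent, msg });
                                                              return id;
                                                            }

                                                            // A feature branch with commits A <- B:
                                                            const a = commit(null, "A");
                                                            const b = commit(a, "B");
                                                            refs.set("feature", b);

                                                            // "Rebase": write a *new* commit B' on a new base, move the ref.
                                                            const base = commit(null, "new base");
                                                            const bPrime = commit(base, "B");
                                                            refs.set("feature", bPrime);

                                                            // The original B is still in the store, merely unreferenced:
                                                            console.log(store.has(b)); // true (until GC; reachable via reflog)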

                                                            1. 2

                                                              It’s only easy if you already know the hashes of the original commits. Git does not surface this information to you easily.

                                                            2. 5

                                                              As @1amzave said, the commits are around, see git reflog.

                                                              As for being lies, they cease being lies when merged to master. The ability to craft a clear log (better word than history) of commits before merging is a superpower.

                                                              1. 4

                                                                It also helps promote committing often, because it doesn’t matter if you have a “whoops forgot to initialize foo” commit in your feature branch, you can clean that up before merging to master.

                                                                1. 2

                                                                  Unfortunately this needs to be enforced at the code review level, which pull request workflows actively work against. Very often pull requests contain commits leaving the repo in an unbuildable state. I tend to think of VCS as an attempt to tell a story about how something was built rather than a reflection of what actually happened. Unfortunately this takes a lot of effort to pull off well…