1. 2

    any crustaceans heading there? (assuming tickets…)

    1. 2

      yup, I will.

      1. 2

        yes! It will be my 11th congress

        1. 2

          If I manage to get a ticket, yes.

          I really want to reiterate: even though it’s a much longer drive for me, the Leipzig congress center is way, way better than Hamburg’s. I hope it stays in Leipzig forever.

          1. 1

            I was sloppy and missed the ticket window last year; going to try harder this year (and have pre-booked a hotel). I was pretty satisfied with Hamburg (much less so with Berlin - the grumpy tone in some of the various queues during the final year there was rather disconcerting), so I’m slightly hyped that Leipzig is even nicer.

            1. 1

              I dunno, Leipzig is a bit too large for my taste. I liked the fact that I could easily run into people I know in Berlin, and to some extent in Hamburg too, but it is impossible in Leipzig. It lost a bit of the “family gathering” vibe for me.

              1.  

                according to Wikipedia, the Hamburg location had 12,000 visitors and Leipzig 15,000. That does not sound like a big difference.

                1.  

                  a venue at capacity with 12,000 people is something else than an entire fairground with even more halls to use. The size difference between Hamburg and Leipzig is quite substantial.

            2. 2

              If I can get a ticket, yes.

              1. 1

                yes…

            1. 4

              Obviously, if we intend to make Wayland a replacement for X, we need to duplicate this functionality.

              Perhaps a less than popular opinion, but: No, you don’t. If you want to replace A with B, you don’t need to replicate every mistake A made. Then B wouldn’t be much more than A’, with old bugs and new.

              Don’t get me wrong, X’s network transparency might have been useful at some point - it isn’t now.

              1. 8

                Practice says otherwise; many people use it daily.

                1. 1

                  That a lot of people use something daily doesn’t mean it is good, or needs to be replicated exactly. Running GUI programs remotely, and displaying them locally IS useful. It does not require network transparency, though.

                  1. 1

                    Require? Perhaps not. It makes things easier in some ways, though.

                2. 6

                  X’s network transparency might have been useful at some point - it isn’t now.

                  I use it 5+ days a week - it is still highly useful to me.

                  You’re right that fewer and fewer people know about it and use it - e.g. GTK has had a bug for many years that makes it necessary to stop Emacs after having opened a window remotely over X, and it’s not getting fixed, probably because X over network is not fashionable any more, so it isn’t prioritized.

                  1. 2

                    What is the advantage of X remoting over VNC / Remote Desktop?

                    I remember using it in the past and being confused that File -> Open wasn’t finding my local files, because it looks exactly like a local application.

                    I also remember that there were some bandwidth performance reasons. I don’t know if that is still applicable if applications use more of OpenGL and behave more like frame-buffers.

                    1. 7

                      Functional window management? If I resize a window to half screen, I don’t want to see only half of some remote window.

                      1. 2

                        Over a fast enough network, there’s no visible or functional difference between a local and remote X client. They get managed by the same wm, share the same copy/paste buffers, inherit the same X settings, and so on. Network transparency means just that: there’s no difference between local and remote X servers.

                        1. 1

                          It is faster, and you get just the window(s) of the application you start, integrated seamlessly in your desktop. You don’t have to worry about the other machine having a reasonable window manager setup, a similar resolution etc. etc.

                          In the old days, people making browsers, e.g. Netscape, took care to make the application X-networking friendly. That has changed, and using a browser over a VDSL connection is only useful in a pinch - but something remote like (graphical) Emacs I still prefer to run over X.

                      2. 1

                        I’d like to see something in-between X and RDP. Window-awareness built-in, rather than assuming a single physical monitor, and window-size (and DPI) coming from the viewer would by themselves be a big start.

                        Edit: Ideally, pairing this with a standard format for streaming compressed textures, transforms, and compositor commands could solve a lot of problems, including the recent issue where we’re running up against physical bandwidth limitations trying to push pixels to multiple hi-def displays.

                        1. 2

                          FWIW I agree with you. It also so happens that something is coming soon enough: https://github.com/letoram/arcan/wiki/Networking

                      1. 5

                        Grabbed myself a new microscope for the electronics lab that is slowly replacing what once was my kitchen. First target for testing its wings will be a WPC-89 MPU board from a pinball machine that refuses to boot.

                        If I manage to gather enough strength of will, there is this rare, “ugly to reproduce” bug I have that somehow boils down to glibc’s named semaphores: in some edge conditions they seem to get the Linux kernel futexes to break horribly in all processes sharing the semaphore, and I am anything but sure who is to blame here.
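
                        For context, the pattern involved is roughly the one below - a minimal cross-process named-semaphore sketch (the semaphore name and structure are made up for illustration; this is not the actual repro):

                          /* Minimal named-semaphore sketch (illustrative only).
                           * Build: cc -o semdemo semdemo.c -pthread */
                          #include <fcntl.h>
                          #include <semaphore.h>
                          #include <stdio.h>
                          #include <sys/types.h>
                          #include <sys/wait.h>
                          #include <unistd.h>

                          int main(void)
                          {
                              /* A named semaphore is a small shared mapping whose counter acts as a
                               * futex word shared by every process that sem_open()s the same name. */
                              sem_t *sem = sem_open("/demo_sem", O_CREAT, 0600, 1);
                              if (sem == SEM_FAILED) { perror("sem_open"); return 1; }

                              pid_t pid = fork();
                              if (pid == 0) {                 /* child: contend on the same semaphore */
                                  sem_wait(sem);
                                  printf("child %d holds the semaphore\n", (int)getpid());
                                  sem_post(sem);
                                  _exit(0);
                              }

                              sem_wait(sem);                  /* parent: same contention path */
                              printf("parent %d holds the semaphore\n", (int)getpid());
                              sem_post(sem);

                              waitpid(pid, NULL, 0);
                              sem_close(sem);
                              sem_unlink("/demo_sem");        /* remove the name when done */
                              return 0;
                          }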

                        1. 4

                          Feels wrong to have a PE article not referencing Ange Albertini somewhere, so take this in-depth dive: https://github.com/corkami/docs/blob/master/PE/PE.md or one of his many talks on the subject: https://www.youtube.com/watch?v=3duSgr5b1yc

                          1. 6

                            The weekend pre-release work mostly went as planned, with a few tests mandating additional work. As a lobste.rs teaser of one of the features: (youtube video) My WM is a file-system.

                            In the research department, I’m:

                            • looking into the specifics of compositing HDR and SDR formats, particularly metadata retrieval, colour formats, tone mapping etc.
                            • playing around with ‘cuckooing’ Android SurfaceFlinger, ripping out its internals, keeping its system interface.
                            • testing out packing formats for text that support mixing shaping, BiDi, 24-bit color, fixed and variable width.

                            Also stumbled across an eye tracker that will get some play-time.

                            1. 4

                              Soul-crushing release management stuff: testing packaging, doing write-ups and recording videos of key features, etc. Will likely need to offset it with about a gallon of gin and tonics.

                              1. 3

                                When I was in university, I remember our server room in the CS department had those massive/thick books on X11 programming.

                                It’d be interesting to see a minimal X11 window manager vs. a minimal Wayland compositor. Although the compositor is a lot more, correct? It’s not only the WM, but drawing/buffering to the screen too, right?

                                1. 4

                                  One out of many problems is that you can’t abstract away the window management details into a comparable set of mechanisms in the way shown in the article - coarse window management policies are encoded into the protocol (that’s the “shell”), and the high-level semantics of popup windows alone reach into hundreds of lines of code.

                                  1. 3

                                    Someone reposted the link on HN, and there seems to have been a conversation exactly about Wayland/X11: https://news.ycombinator.com/item?id=17765851

                                  1. 65

                                      This blogpost is a good example of fragmented, hobbyist security maximalism (sprinkled with some personal grudges, judging by the tone).

                                    Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                    Talking about threat models, it’s important to start from them and that explains most of the misconceptions in the post.

                                      • Usable security for the most people possible. The vast majority of people on the planet use iOS and Android phones, so while it is theoretically true that Google or Apple could be forced to subvert their OSs, that’s outside the threat model, and something like that would be highly visible - a nuclear option, so to speak.
                                      • Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, so providing an APK is indeed correctly viewed as harm reduction.
                                      • Centralization is a feature. Moxie created a protocol and a service, used by billions and millions of people respectively, that provide real, measurable security for a lot of people. The fact is that doing all this in a decentralized way is something we don’t yet know how to do, or doing so invites tradeoffs that we shouldn’t make. Federation at the moment either leads to insecurity or to the ossification of the ecosystem, which in turn leads to a useless system for real users. We’ve had IRC since the 1990s - ever wonder why Slack became a thing? Ossification of a decentralized protocol. Ever wonder why OpenPGP isn’t more widespread? No one cares about security in a system where usability is low and design is fragile. Ever tried to do key rotation in GPG? Even cryptographers gave up on that. Signal has that built into the protocol.

                                    Were tradeoffs made? Yes. Have they been carefully considered? Yes. Signal isn’t perfect, but it’s usable, high-level security for a lot of people. I don’t say I fully trust Signal, but I trust everything else less. Turns out things are complicated when it’s about real systems and not fantasy escapism and wishes.

                                    1. 34

                                      Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                        In this article, resistance to governments constantly comes up as a theme of his work. He also pushed for his tech to be used to help resist police states, as with the Arab Spring example. Although he mainly increased the baseline, the tool has been pushed for resisting governments, and articles like that could increase the perception that it was secure against governments.

                                        This nation-state angle didn’t come out of thin air from paranoid security people: it’s the kind of thing Moxie talks about. In one talk, he even started with a picture of two activist friends jailed in Iran, in part to show the evils that motivate him. Stuff like that only made the things Drew complains about - centralization, control, and dependence on cooperating with a surveillance organization - stand out even more due to the inconsistency. I’d have thought he’d have made signed packages for things like F-Droid sooner if he’s so worried about that stuff.

                                      1. 5

                                          A problem with the “nation-state” rhetoric that might be useful to dispel is the idea that it is somehow a God-tier where suddenly all other rules become defunct. The Five Eyes are indeed “nation state” and have capabilities that are profound; like the DJB talk speculating about how many RSA-1024 keys they’d likely be able to factor in a year given such-and-such developments, and what you can do with that capability. That’s scary stuff. On the other hand, this is not the “nation state” that is Iceland or Syria. Just looking at the leaks from the “Hacking Team” thing, there are a lot of “nation states” forced to rely on some really low-quality stuff.

                                          I think Greg Conti in his “On Cyber” setup depicts it rather well (sorry, don’t have a copy of the section in question), and that a more reasonable threat model of capable actors you do need to care about is that of Organized Crime Syndicates - which seems more approachable. Nation State is something you are afraid of if you are a political actor or in conflict with your government, where the “we can also waterboard you to compliance” part factors into your threat model; Organized Crime hits much more broadly. That’s Ivan with his botnet of internet-facing XBMC^H Kodi installations.

                                          I’d say the “Hobbyist, Fragmented Maximalist” line is pretty spot on - with a dash of “Confused”. The ‘threats’ of the Google Play Store (test it: write some malware and see how long it survives - they are doing things there…) - the odds of any other app store (F-Droid, the ones from Samsung, HTC, Sony et al.) being completely owned by much less capable actors are way, way higher. Signal (perhaps a Signal-To-Threat ratio?) performs a good enough job of making reasonable threat actors much less potent. Perhaps not worthy of “trust”, but worthy of day-to-day business.

                                      2. 18

                                        Expecting Signal to protect anyone specifically targeted by a nation-state is a huge misunderstanding of the threat models involved.

                                        And yet, Signal is advertising with the faces of Snowden and Laura Poitras, and quotes from them recommending it.

                                        What kind of impression of the threat models involved do you think does this create?

                                        1. 5

                                          Who should be the faces recommending signal that people will recognize and listen to?

                                          1. 7

                                            Whichever ones are normally in the media for information security saying the least amount of bullshit. We can start with Schneier, given he already does a lot of interviews and writes books laypeople buy.

                                            1. 3

                                              What does Schneier say about signal?

                                              1. 10

                                                He encourages use of stuff like that to raise the baseline, but not for stopping nation states. He has also constantly blogged about the attacks and legal methods they used to bypass technical measures. So, his reporting was mostly accurate.

                                                We counterpoint him here or there, but his incentives and rep are tied to delivering accurate info. Moxie’s incentives would, if he’s selfish, lead to lock-in to questionable platforms.

                                        2. 18

                                          We’ve had IRC from the 1990s, ever wonder why Slack ever became a thing? Ossification of a decentralized protocol.

                                          I’m sorry, but this is plain incorrect. There have been many extensions to IRC over the years, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

                                          If you actually look at the protocols? Slack is a clear case of Not Invented Here syndrome. Slack’s interface is not only slower, but does some downright crazy things (Such as transliterating a subset of emojis to plain-text – which results in batshit crazy edge-cases).

                                          If you have a free month, try writing a slack client. Enlightenment will follow :P

                                          1. 9

                                            I’m sorry, but this is plain incorrect. There have been many extensions to IRC over the years, including the most recent effort, IRCv3: a collection of extensions to IRC to add notifications, etc. Not to mention the killer point: “All of the IRCv3 extensions are backwards-compatible with older IRC clients, and older IRC servers.”

                                            Per IRCv3 people I’ve talked to, IRCv3 blew up massively on the runway, and will never take off due to infighting.

                                            1. 12

                                              And yet everyone is using Slack.

                                              1. 14

                                                There are swathes of people still using Windows XP.

                                                The primary complaint of people who use Electron-based programs is that they take up half a gigabyte of RAM to idle, and yet they are in common usage.

                                                The fact that people are using something tells you nothing about how Good that thing is.

                                                At the end of the day, if you slap a pretty interface on something, of course it’s going to sell. Then you add in that sweet, sweet Enterprise Support, and the Hip and Cool factors of using Something New, and most people will be fooled into using it.

                                                At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on: https://ircv3.net/specs/extensions/batch/chathistory-3.3.html)

                                                1. 9

                                                  At the end of the day, Slack works just well enough Not To Suck, is Hip and Cool, and has persistent history (Something that the IRCv3 group are working on […])

                                                  The time for the IRC group to be working on a solution to persistent history was a decade ago. It strikes me as willful ignorance to disregard the success of Slack et al over open alternatives as mere fashion in the face of many meaningful functionality differences. For business use-cases, Slack is a better product than IRC full-stop. That’s not to say it’s perfect or that I think it’s better than IRC on all axes.

                                                  To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool? But imagine being a UX designer and wanting to help make some native open-source IRC client fun and easy to use for a novice. “Sisyphean” is the word that comes to mind.

                                                  If we want open solutions to succeed we have to start thinking of them as products for non-savvy end users and start being honest about the cases where closed products have superior usability.

                                                  1. 5

                                                    IRC isn’t hip and cool because people can’t make money off of it. Technologies don’t get investment because they are good, they get good because of investment. The reason that Slack is hip/cool and popular and not IRC is because the investment class decided that.

                                                      It also shows that our industry is just a pop culture and couldn’t give a shit about good tech.

                                                    1. 4

                                                        There were companies making money off chat and IRC. They just didn’t create something like Slack. We can’t just blame the investors when they were backing companies making chat solutions whose management stuck with what didn’t work in the long term or for a huge audience.

                                                      1. 1

                                                          IRC happened before the privatization of the internet, so the standard didn’t lend itself well to companies making good money off of it. Things like Slack are designed for investor optimization, vs. things like IRC being designed for use and openness.

                                                        1. 2

                                                            My point was there were companies selling chat software, including IRC clients. None pulled off what Slack did. Even those doing IRC with money, or making money off it, didn’t accomplish what Slack did, for some reason. It would help to understand why that happened. Then the IRC-based alternative can try to address that, from features to business model. I don’t see anything like that when most people who like FOSS talk about Slack alternatives. And they’re not Slack alternatives if they lack what Slack customers demand.

                                                          1. 1

                                                              Thanks for clarifying. My point can be restated as: there is no business model for federated and decentralized software (until recently - see cryptocurrencies). Note that most open and decentralized tech of the past was government-funded and therefore didn’t face business pressures. This freed designers to optimise for other concerns instead of the business ones that Slack does.

                                                    2. 4

                                                      To the extent that Slack did succeed because it was hip and cool, why is that a negative? Why can’t IRC be hip and cool?

                                                      The argument being made is that the vast majority of Slack’s appeal is the “hip-and-cool” factor, not any meaningful additions to functionality.

                                                      1. 6

                                                        Right, as I said I think it’s important for proponents of open tech to look at successful products like Slack and try to understand why they succeeded. If you really think there is no meaningful difference then I think you’re totally disconnected from the needs/context of the average organization or computer user.

                                                        1. 3

                                                          That’s all well and good, I just don’t see why we can’t build those systems on top of existing open protocols like IRC. I mean: of course I understand, it’s about the money. My opinion is that it doesn’t make much sense to insist that opaque, closed ecosystems are the way to go. We can have the “hip-and-cool” factor, and all the amenities provided by services like Slack, without abandoning the important precedent we’ve set for ourselves with protocols like IRC and XMPP. I’m just disappointed that everyone’s seeing this as an “either-or” situation.

                                                          1. 2

                                                            I definitely don’t see it as an either-or situation, I just think that the open source community typically has the wrong mindset for competing with closed products and that most projects are unapproachable by UX or design-minded people.

                                                    3. 3

                                                      Open, standard chat tech has had persistent history and much more for decades in the form of XMPP. Comparing to the older IRC on features isn’t really fair.

                                                      1. 2

                                                        The fact that people are using something tells you nothing about how Good that thing is.

                                                        I have to disagree here. It shows that it is good enough to solve a problem for them.

                                                        1. 1

                                                          I don’t see how Good and “good enough to solve a problem” are related here. The first is a metric of quality, the second is the literal bare minimum of that metric.

                                                  2. 1

                                                    Alternative distribution mechanisms are not used by 99%+ of the existing phone userbases, providing an APK is indeed correctly viewed as harm reduction.

                                                    I’d dispute that. People who become interested in Signal seem much more likely to be using F-Droid than, say, WhatsApp users. Signal tries to be an app accessible to the common person, but few people really use it or see the need… and those who do are often free software enthusiasts or people who are fed up with Google and surveillance.

                                                    1. 1

                                                      More likely, sure, but that doesn’t mean that many of them reach the threshold of effort that it takes.

                                                    2. 0

                                                      Ossification of a decentralized protocol.

                                                      IRC isn’t decentralised… it’s not even federated

                                                      1. 3

                                                        Sure it is, it’s just that there are multiple federations.

                                                    1. 2

                                                        In general, I agree with the idea and setup of unveil(), though I haven’t had much time to experiment with it yet. Something that irks me a bit, though, is that there doesn’t seem to be a way to hide a previously unveiled path - either that or I am greatly misreading the man page.

                                                        The case I have in mind is that I have a set of namespaces (so, paths) that are conditionally accessible (on path and rwx mode) based on where the call originates from (a function in a user-provided scripting interface), and these may need to route through third-party libraries, thus I routinely want to mask/unmask paths - so where’s reveil()? :-)

                                                      1. 15

                                                        The idea is you structure your program so it isn’t bouncing between privilege levels. It should not be possible to ever climb back out of a position of limited access. Sometimes this means using something like privilege separation where different processes work together, passing file descriptors, etc.
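
                                                          For illustration, a minimal sketch of that one-way ratchet with OpenBSD’s unveil(2) (the paths below are hypothetical): paths can only be added to the visible set until it is locked, and a restriction can never be lifted afterwards, which is why there is no reveil().

                                                            /* Illustrative sketch only - hypothetical paths, not a real program. */
                                                            #include <err.h>
                                                            #include <unistd.h>

                                                            int main(void)
                                                            {
                                                                /* Expose the few paths this process will ever need... */
                                                                if (unveil("/etc/fonts", "r") == -1)
                                                                    err(1, "unveil");
                                                                if (unveil("/tmp/scratch", "rwc") == -1)
                                                                    err(1, "unveil");

                                                                /* ...then lock the list: later unveil() calls fail, and everything
                                                                 * outside the unveiled paths stays invisible for the process lifetime. */
                                                                if (unveil(NULL, NULL) == -1)
                                                                    err(1, "unveil");

                                                                /* Scripting-interface callbacks run with this same fixed view; getting
                                                                 * different views per execution context means separate processes. */
                                                                return 0;
                                                            }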

                                                        1. 2

                                                          I understand that and to the largest extent possible given other constraints I do privsep and juggle descriptors around, but in those cases I can typically pledge without rpath/wpath so the value of unveil there is rather limited.

                                                            Think of trying to unveil-harden something like Wireshark, as it has a similar pattern to what I’m describing - there is some ‘light’ privsep in the form of the Lua interpreter/JIT: when the process is in that execution context, few file operations should really be exposed, possibly some temp store. The other execution context, the “engine”, needs to be able to load/save from a much wider set of paths. Sure, it can be resectioned into better process privsep etc., but the amount of work gets substantial and would lead to the same situation as above, where rpath/wpath likely won’t be needed.

                                                          1. 2

                                                            I would just use something like capnproto across processes.

                                                            1. 1

                                                                Yeah, the microkernels like OKL4 used an IDL with tools like CAmkES to get components to work with each other. As far as Cap’n Proto goes, I actually recommended that same thing to people wanting to build stuff on separation kernels after talking to its author, Kenton Varda. He was definitely well-read on capability-security research and projects. Inspires confidence. The other cool thing is you don’t give up performance to get security with Cap’n Proto. I love it when that happens.

                                                      1. 4

                                                                Seems fair, I guess. They probably made thousands of easy ad dollars off Nintendo’s property, so it’s understandable that Nintendo has a problem with this.

                                                        1. 4

                                                                  However, is Nintendo actually making a profit off the original Zelda, for example? I mean, is there a way for me as a player to get to play the original Zelda without having to search for a second-hand NES and fish for the original cartridge in flea markets? I get that it is their intellectual property, but still, it’s not like they still sell those games.

                                                          1. 18

                                                            The current philosophy of the law is that Nintendo has an eternal right to tax Zelda. It was never meant to go into the public domain, will never go into the public domain, and if legislators have funny ideas about this stuff then they’ll use their billions of previous culture tax revenue to bribe (er… “lobby”) them to have the right ideas again.

                                                            Anyone who gripes about this state of affairs is obviously a commie trying to steal from them.

                                                            1. 2

                                                                      In my understanding, in France and probably other countries, works (not sure exactly which, but writings and music are included for example, and probably programs/video games?) enter the public domain 70 years after the creator’s death.

                                                              How can this apply to a living company?

                                                              1. 2

                                                                        The original author(s) license rights to the work (indirectly via an employment contract or directly via a specific one). The ‘death’ clause becomes really gnarly when the actual work of art is an aggregate of many copyright holders.

                                                                        This becomes more complicated as the licensing gets split up into infinitely small pieces, like “time-limited distribution within country XYZ on the medium of floppy discs”. Such time-limit clauses are a probable cause when parts of, or whole, games suddenly disappear - typically sublicensed content like music.

                                                                        This, in turn, gets even more complicated by the notion of ‘derivative’ work: fan art or those “HD remakes”, as even abstract nuances have to be considered. The stories about Sherlock Holmes are in the public domain, but certain aesthetics, like the deerstalker/pipe/… figure, are still(?) copyrighted. Defining ‘derivative’ work is complex in and of itself. For instance, Blizzard have successfully defended copyright of the linked and loaded process of the World of Warcraft client as such, in the case against certain cheat-bots - and used similar shenanigans to take down open-source / reverse-engineered StarCraft servers.

                                                                        Then a few years pass, nobody knows who owns what or when or where, and copyright trolls dive in and threaten extortion fees based on rights they don’t have. Copyright in its current form has nothing to do with the ‘artist’ and is complete, depressing, utter bullshit - it has turned into this bizarre form of mass hypnosis where everyone gets completely and thoroughly screwed.

                                                                        These aspects, when combined, are part of the reason why the “sanctioned ROM stores” that the Virtual Console and so on hold have very limited catalogs: the rightsholders are nowhere to be found and the works can’t be safely licensed.

                                                            2. 10

                                                              Yep, Nintendo do still sell these games, and it is possible for you to buy them. I bought one of these last week.

                                                              https://www.nintendo.com/nes-classic/

                                                              1. 2

                                                                They also still sell them on the Wii U and 3DS Virtual Consoles.

                                                                1. 1

                                                                  Oh sure, I totally forgot about those new editions, you’re right

                                                                  1. 2

                                                                    I just got a NES Classic and SNES Classic. They are pretty dope! I think that they are starting to care a lot more now that these are a thing :)

                                                                    This does, however, have the unfortunate side effect of players not being able to play their favorites unless they are one of the ~60 games on these two classic editions. So, that’s sad. :(

                                                            1. 10

                                                                          Seems like the hunt for these things is regressing to the ’90s days of organisations like the IDSA ( https://cs.stanford.edu/people/eroberts/cs201/projects/copyright-infringement/emulationanti.html ), where such takedowns were a common thing.

                                                                          Now, “disneyright” seems to have zero chance of returning to anything sensible, i.e. a time-limited monopoly to give the author a fair chance to profit from his work before it is forcibly elevated into the public domain for the benefit of current and future culture.

                                                                          Meanwhile, the “corporation-sanctioned alternatives” (various ‘virtual console’ stores, including the one by Nintendo) have proven to be extremely volatile and customer-unfriendly.

                                                                          I have a faint hope that developments like this instill enough ‘disobedience’ to foster new piracy tools for discovering, sharing and curating emulation and related assets (including derivative work like gameplay streaming) - without ad parasites or the unreliability and user-unfriendliness of torrents.

                                                              1. 14

                                                                            To me, the ironic thing here is that old ROMs were abandonware and not commercially available for quite a while before Nintendo discovered how popular they were. Only then did they start their Virtual Console store. (Note: most of these NES titles were never owned by Nintendo, and in many cases the entities that did hold the copyrights are long defunct. But let’s just keep talking about NES Zelda, as available on the current 3DS VC store…)

                                                                And that’s why their “NES Classic” console is essentially a Raspberry Pi running an emulator. Very nice of the community to do all that development work for the brand owner to profit from!

                                                                1. 11

                                                                              Even their earlier effort, the Virtual Console, had iNES headers in the ROMs they were using in their “inspired” emulators. What are the chances that they had preserved dumps and maintained their in-house emulators (they did have those) themselves, vs. outsourcing the job to some firm that took whatever dumps they could find and repurposed an open-source emulator?

                                                                              Sidenote: I’ve been on the preservation side of emulation since about the mid-’90s; I restore pinball and arcade machines as a hobby and have quite a big emotional attachment to the ‘culture’ from that era. As such, I happily throw both money and code at projects like MAME and ‘the dumping union’ (procuring rare and dying arcade PCBs and dumping them for the MAME devs to take over). It pains me dearly that there is no valid documentation (3D models, …) of now dead and dying arcade cabinets and other artifacts of that era.

                                                              1. 5

                                                                              The current state of (dis)repair is depressing when compared to our past prime. I’d highly recommend skimming through the history in https://archive.org/details/manuals for old device manuals, in terms of repairability and consumer education. Should we raise the bar today, there should be mastering files included with the purchase, whether for circuits, plastic, …

                                                                              I read the article just after finishing up a pinball repair session (highly recommended hobby). On top of the machines themselves being some kind of interesting marriage between mechanics, electronics, embedded software, art and culture - these things come with detailed schematics of the parts, the electronics, design explanations, replacement part lists, the works. Not that they are without fault: the software+DRM mentality from the mid-’80s and onwards is there - albeit, against the tools of today, a nuisance rather than a threat.

                                                                              Maybe a limiting move, but my personal principle these days is that if it’s embedded electronics and I don’t have open access to tinker with it, I’m not buying.

                                                                1. 4

                                                                                I just had a skim of that archive.org page and it’s incredible (arcade machines were before my time). The current method of repairing stuff is just finding the module that’s broken and swapping the entire thing; this is even worse on the horror devices coming from Apple.

                                                                  The basic flaw is that these ultra-thin keys are easily paralyzed by particulate matter. Dust can block the keycap from pressing the switch

                                                                  The keyboard itself can’t simply be swapped out. You can’t even swap out the upper case containing the keyboard on its own. You also have to replace the glued-in battery, trackpad, and speakers at the same time.

                                                                  (https://ifixit.org/blog/10229/macbook-pro-keyboard/)

                                                                                I really don’t know what can be done about this issue; few care, because the cost of electronics is so low that I could buy a new phone every few months and not care.

                                                                  There is some push for open hardware and firmware but I have doubts if it will ever take off outside of hobby groups.

                                                                1. 1

                                                                                So constant-time comparison is an old classic for authentication primitives - there are even humorous examples (not to mention tons of websites), like passwords on some JTAG interfaces on old consoles where a return on the first mismatch made timing-based extraction of the password trivial.

                                                                  According to the webpage, this was developed for post-quantum cryptography - but what other areas are there where data-dependent sorting times would be a notable risk?
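
                                                                                For anyone who hasn’t seen the classic, the contrast looks roughly like this (a generic sketch, not any particular console’s code):

                                                                                  /* Leaky: returns at the first mismatching byte, so the response time
                                                                                   * reveals how many leading bytes of the guess were correct. */
                                                                                  int leaky_compare(const unsigned char *a, const unsigned char *b, unsigned n)
                                                                                  {
                                                                                      for (unsigned i = 0; i < n; i++)
                                                                                          if (a[i] != b[i])
                                                                                              return 0;
                                                                                      return 1;
                                                                                  }

                                                                                  /* Constant-time: always touches every byte and only accumulates the
                                                                                   * differences, so timing is independent of where a mismatch occurs. */
                                                                                  int ct_compare(const unsigned char *a, const unsigned char *b, unsigned n)
                                                                                  {
                                                                                      unsigned char diff = 0;
                                                                                      for (unsigned i = 0; i < n; i++)
                                                                                          diff |= a[i] ^ b[i];
                                                                                      return diff == 0;
                                                                                  }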

                                                                  1. 2

                                                                    I think it’s just for crypto, as elaborated on page 48 of this paper. source

                                                                  1. 5

                                                                    One of my absolute favourite tools, but one that is in desperate need of

                                                                    1. a ‘non-web’ version (good TUI candidate),
                                                                    2. (the much more difficult) extension that jacks into cmake/whatever-build-system, runs multiple gcc/clang versions side by side and gives “for my chosen function(s)” feedback of assembly, IR, post-macro expansion and SSA forms, with tags on inlining and UB assumptions.
                                                                    1. 1

                                                                      Well, the first part’s done already; somebody in a Rust Discord server I hang out in posted it: https://github.com/ethanhs/cce

                                                                    1. 25

                                                                      What I found most interesting in this piece (so in other words, it resonates with my personal motivation and bias to a large degree) is down in the comments (Mathias Hasselmann):

                                                                      “They simply don’t know their target audience. They design with that toxic idea in mind that “Grandma and Grandpa must be able to use this”, entirely ignoring the world outside their ivory towers, where casual users happily do all their computing needs on mobile devices, not desktops. The desktop has shrunk. It’s not mainstream anymore. It’s a tool for information workers again, and making the desktop useless for information workers will not bring a single mobile user back, but it will scare away more and more professionals, for Linux at least.”

                                                                      I have a much longer diatribe in the out-queue, as it strongly relates to my projects and frustration with how user integration with computing is developing, but really - the concessions everyone (from OSX to Windows and onwards) seems to make race away from “silent/passive by default, configurable mechanisms to your desires” towards “preset hidden policies to match our perception of what you want - it just works”, rather than advancing the former to be more ergonomic, discoverable etc.

                                                                      1. 2

                                                                        For someone who knows absolutely nothing about gaming, World of Warcraft, or this thing in particular… what is this? Can someone explain?

                                                                        1. 4

                                                                          World of Warcraft is a massively popular game, and maybe one of the most popular games ever. Since its launch in 2004 it’s been changing and evolving into what it is today, which is something completely different from what it was at its inception. Given that a large number of people would like to play vanilla WoW, that is, the first version of the game before any expansion was released, Blizzard has decided to roll out a “classic” version with all the content prior to the first expansion. That expansion was released in late 2006, and since then there have been many more. Before the company’s official announcement that they would be releasing this classic version, many requests were made for it by fans, but they were turned down by Blizzard citing several arguments such as: “vanilla WoW doesn’t exist anymore since the codebase has continued to evolve” and “vanilla WoW would be looking back and we want to move forward”. However, a vanilla WoW private server named Nostalrius, maintained by fans for fans, gained such popularity that at its peak it had more than 100k players. Sadly, it had to be closed in 2016 after Blizzard sent them a cease-and-desist order. It would seem that from the whole episode the company realized that there was actually a market for classic WoW, and they eventually changed their mind.

                                                                          1. 6

                                                                            World of Warcraft is a popular commercial subscription-based cloud-hosted enterprise legacy app featuring a low grade CRM system married to a highly complex logistics system in a standard 3 tier architecture deployed in a fully sharded configuration. Like many legacy systems, it has undergone significant schema mutation over the course of its deployed lifecycle in response to customer demand. Notably, it started out with a mostly-denormalized schema and, with the advent of improved database performance, a better understanding of the customer base’s requirement envelope, and feature creep, it has moved towards Codd’s 3rd normal form.

                                                                            As with many legacy apps, some customers’ business needs mandate that they stay pinned to older versions of the app. Interestingly, customers have here asked that an earlier version of a cloud-provided app be made available 12 years later, which poses some interesting issues having to do with incompatible schema migration. Given that the app is also written in a mix of obscure legacy languages, the traditional approach of simply migrating the queries and schema together is technically formidable.

                                                                            One established practice here is to create a proxy facade layer. In this pattern, you keep the interface to the legacy client application exactly the way it is, but create an intermediate layer which translates the db calls to and from the normalized format. This incurs round trip cost and bugs are common in edge cases, especially in frequently-undocumented minor shifts in API and field meaning, and especially given the expected low coverage of unit and functional tests in a 12 year old codebase. This technique is frequently overused owing to underestimation of the cost and time complexity of ferreting out the edge cases.

                                                                            The other established practice is to perform a one-time wholesale schema migration, normally done either through an ETL toolchain like Informatica, or more commonly through hand-written scripts. This approach frequently takes more developer time than the facade approach, owing to needing to “get it right” essentially all-at-once, and having a very long development loop.

                                                                            Whatever the technique used, schema migration programs of this scope need a crisp definition of what success looks like that’s clearly understood by all the involved developers, project managers, data specialists, and product leaders. Too frequently, these types of programs fail owing to incomplete specification and lack of clearly defined ownership boundaries and deliverable dependencies. The industry sector in which this legacy app resides is at greater than average risk for failure of high-scope projects due to fundamental and persistent organizational immaturity and improperly managed program scopes.

                                                                            Also, they better not nerf fear, because rogues were super OP in vanilla and getting the full 40 down the chain to rag with portals was tough enough.

                                                                            1. 2

                                                                              As someone who levelled through Stranglethorn Vale via painstaking underwater + Unending Breath grinds in order to escape OP rogue stunlock love, I say to you: Bravo, Sir! Also, f**k the debuff cap.

                                                                          1. 5

                                                                            Last week

                                                                            • Fiddled with different ways of attaching to processes and viewing their states.
                                                                            • Some other technical stuff that went well

                                                                            This was for the low level debugger I’m trying to make.

                                                                             So, from what I’ve read and seen, tools that attach to and inspect other processes tend to just use gdb under the hood. I was hoping for a more minimal debugger to read and copy.

                                                                            lldb almost does what I need because of its existing external Python interface but documentation for writing a stand-alone tool (started from outside the debugger rather than inside) is scattered. I haven’t managed to make it single step.

                                                                            Using raw ptrace and trying to read the right memory locations seems difficult because of things like address randomization. And getting more information involves working with even more memory mapping and other conventions.
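
                                                                             For what it’s worth, the bare attach-and-step loop itself is small; a minimal Linux/x86-64 sketch (error handling and any symbol/DWARF lookup omitted, PID taken from the command line):

                                                                               /* Attach to a PID and single-step a few instructions, printing the
                                                                                * instruction pointer each time. Illustrative only. */
                                                                               #include <stdio.h>
                                                                               #include <stdlib.h>
                                                                               #include <sys/ptrace.h>
                                                                               #include <sys/types.h>
                                                                               #include <sys/user.h>
                                                                               #include <sys/wait.h>

                                                                               int main(int argc, char **argv)
                                                                               {
                                                                                   if (argc != 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
                                                                                   pid_t pid = (pid_t)atoi(argv[1]);

                                                                                   ptrace(PTRACE_ATTACH, pid, NULL, NULL);          /* target stops with a signal */
                                                                                   waitpid(pid, NULL, 0);

                                                                                   for (int i = 0; i < 10; i++) {
                                                                                       struct user_regs_struct regs;
                                                                                       ptrace(PTRACE_GETREGS, pid, NULL, &regs);
                                                                                       printf("rip = %llx\n", (unsigned long long)regs.rip);

                                                                                       ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL);  /* execute one instruction */
                                                                                       waitpid(pid, NULL, 0);                       /* wait for the resulting trap */
                                                                                   }

                                                                                   ptrace(PTRACE_DETACH, pid, NULL, NULL);          /* let the target run again */
                                                                                   return 0;
                                                                               }

                                                                             Everything past that - breakpoints, symbols, source lines - is where the ELF/DWARF and /proc conventions come in.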

                                                                            I wish all these conventions were written in some machine readable language agnostic way so I don’t have to human-read each one and try to implement it. Right now this is all implicit in the source code of something like gdb. This is a lot of extra complexity which has nothing to do with what I’m actually trying to accomplish.

                                                                             The raw ptrace approach would also likely only work on Linux, and possibly be strongly tied to C or assembly.

                                                                            The problem with the latter is that eventually I will want to do this to interpreters written in C or even interpreters written in interpreters written in C. Seems like even more incidental complexity in that way.

                                                                            An alternative is to log everything and have a much fancier log viewer after the fact. This way the debugged program only need to emit the right things to a file or stdout. But this loses the possibility for any interactivity.

                                                                            Plus, all of this would only be worth it if I can get some state visualization customizable to that specific program (because usually it will be an interpreter).

                                                                            Other questions: How to avoid duplicating the work when performing operations from “inside the program” and from “outside” through the eventual debugger?

                                                                            Other ideas: Try to do this with a simpler toy language/system to get an idea of how well using such a workflow would work in the first place.

                                                                            Some references

                                                                            This week

                                                                            • Well, now that I have a better idea of how deep this rabbit hole is, I need to decide what to do. Deciding is much harder than programming…
                                                                            • Or maybe I should do one of the other thousand things I want to and have this bit of indecision linger some more.
                                                                            1. 5

                                                                              I wrote a very simple PoC debugger in Rust if you are interested in the very basics: https://github.com/levex/debugger-talk

                                                                              It uses ptrace(2) under the hood, as you would expect.

                                                                              1. 1

                                                                                 Thanks! I’ve had a look at your slides and skimmed some of your code (I don’t have Rust installed, or running it would be the first thing I’d do).

                                                                                I see that you’re setting breakpoints by address. How do you figure out the address at which you want to set a breakpoint though?

                                                                                How long did it take to make this? And can you comment on how hard it would be to continue from this point on? For example reading C variables and arrays? Or getting line numbers from the call stack?

                                                                                1. 2

                                                                                  Hey, sorry for the late reply!

                                                                                   In the talk I was setting breakpoints by address indeed. This is because the talk focused on the lower-level parts. To translate line numbers into addresses and vice versa you need access to the “debug information”. This is usually stored in the executable (as described by the DWARF file format). There are libraries that can help you with this (just as the disassembly is done by an excellent library instead of my own code).

                                                                                   This project took about a week of preparation and work. I was familiar with the underlying concepts; however, Rust and its ecosystem were a new frontier for me.

                                                                                  Reading C variables is already done :-), reading arrays is just a matter of a new command and reading variables sequentially.

                                                                                  1. 1

                                                                                    Thanks for coming back to answer! Thanks to examples from yourself and others I did get some stuff working (at least on the examples I tried) like breakpoint setting/clearing, variable read/write and simple function calls.

                                                                                     Some things from the standards/formats are still unclear, like why I only need to add the start of the memory region extracted from /proc/pid/maps if it’s not 0x400000.

                                                                                     This project took about a week of preparation and work. I was familiar with the underlying concepts; however, Rust and its ecosystem were a new frontier for me.

                                                                                    A week doesn’t sound too bad. Unfortunately, I’m in the opposite situation using a familiar system to do something unfamiliar.

                                                                                    1. 2

                                                                                      I think that may have to do with whether the executable you are “tracing” is a PIE (Position-Independent Executable) or not.
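
                                                                                       In other words (a sketch of the usual convention, not a guarantee for every toolchain): a non-PIE x86-64 binary is linked at 0x400000, so the addresses nm prints are already absolute, while for a PIE you add the load base from /proc/<pid>/maps to the symbol’s offset from nm. Something along these lines:

                                                                                         /* Hypothetical helper: base address of the traced executable, taken
                                                                                          * from the first mapping listed in /proc/<pid>/maps. */
                                                                                         #include <stdio.h>
                                                                                         #include <stdlib.h>
                                                                                         #include <sys/types.h>

                                                                                         unsigned long load_base(pid_t pid)
                                                                                         {
                                                                                             char path[64], line[512] = "";
                                                                                             snprintf(path, sizeof(path), "/proc/%d/maps", (int)pid);
                                                                                             FILE *f = fopen(path, "r");
                                                                                             if (!f)
                                                                                                 return 0;
                                                                                             fgets(line, sizeof(line), f);   /* first line: the executable's mapping */
                                                                                             fclose(f);
                                                                                             return strtoul(line, NULL, 16); /* "55f3c2a4d000-..." -> load base */
                                                                                         }

                                                                                         /* PIE:     runtime_addr = load_base(pid) + offset printed by nm
                                                                                          * non-PIE: runtime_addr = the nm value itself (already absolute) */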

                                                                                      Good luck with your project, learning how debuggers work by writing a simple one teaches you a lot.

                                                                                  2. 2

                                                                                    For C/assembly (and I’ll assume a modern Unix system) you’ll need to read up on ELF (object and executable formats) and DWARF (debugging records in an ELF file) that contain all that information. You might also want to look into the GDB remote serial protocol (I know it exists, but I haven’t looked much into it).
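
To give a taste of the ELF side (the addresses nm prints are just st_value fields in the symbol table; DWARF is the much hairier part and best left to a library), here is a rough sketch that walks the symbol table with nothing but <elf.h>, assuming a 64-bit little-endian binary and skipping most error handling:

    #include <elf.h>
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) { fprintf(stderr, "use: %s <elf-file>\n", argv[0]); return 1; }
        int fd = open(argv[1], O_RDONLY);
        struct stat st;
        if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }
        uint8_t *img = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

        Elf64_Ehdr *eh = (Elf64_Ehdr *)img;
        Elf64_Shdr *sh = (Elf64_Shdr *)(img + eh->e_shoff);

        /* dump function symbols from .symtab and .dynsym, nm-style */
        for (int i = 0; i < eh->e_shnum; i++) {
            if (sh[i].sh_type != SHT_SYMTAB && sh[i].sh_type != SHT_DYNSYM)
                continue;
            Elf64_Sym *sym = (Elf64_Sym *)(img + sh[i].sh_offset);
            const char *str = (const char *)(img + sh[sh[i].sh_link].sh_offset);
            for (size_t j = 0; j < sh[i].sh_size / sizeof(Elf64_Sym); j++)
                if (ELF64_ST_TYPE(sym[j].st_info) == STT_FUNC && sym[j].st_name)
                    printf("%016lx %s\n", (unsigned long)sym[j].st_value,
                           str + sym[j].st_name);
        }
        return 0;
    }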

                                                                                    1. 1

                                                                                      Well, I got some addresses out of nm ./name-of-executable but can’t peek at those directly. Probably need an offset of some sort?

                                                                                      There’s also dwarfdump I haven’t tried yet. I’ll worry about how to get this info from inside my tracer a bit later.

                                                                                      Edit: Nevermind, it might have just been the library I’m using. Seems like I don’t need an offset at all.

                                                                                      1. 2

                                                                                        I might have missed some other post, but is there a bigger writeup on this project of yours? As to the specifics of digging up such information, take a look at ECFS - https://github.com/elfmaster/ecfs

                                                                                        1. 1

                                                                                          I might have missed some other post, but is there a bigger writeup on this project of yours?

                                                                                          I’m afraid not, at least for the debugger subproject. This is the context. The debugger would fit in two ways:

                                                                                          • Since I have a GUI maker, I can try to use it to make a graphical debugger. (Ideally, allowing custom visualizations created for each new debugging task.)
• A debugger/editor would be useful for making and editing [Flpc](https://github.com/asrp/flpc) or similar. I want to be able to quickly customize the debugger to also be usable as an external Flpc debugger (instead of just a C debugger). In fact, it’d be nice if I could evolve the debugger and target (=interpreter) simultaneously.

                                                                                          Although I’m mostly thinking of using it for the earlier stages of development. Even though I should already be past that stage, if I can (re)make that quickly, I’ll be more inclined to try out major architectural changes. And also add more functionality in C more easily.

Ideally, the debugger would also be an editor (write a few instructions, set SIGTRAP, run, write a few more instructions, etc; write some other values to memory here and there). But maybe this is much more trouble than it’s worth.

Your senseye program might be relevant depending on how customizable (or live customizable) the UI is. The stack on which it’s built is completely unknown to me. Do you have videos/posts where you use it to debug and/or find some particular piece of information?

                                                                                          As to the specifics of digging up such information, take a look at ECFS - https://github.com/elfmaster/ecfs

                                                                                          I have to say, this looks really cool. Although in my case, I’m expecting cooperation from the target being debugged.

                                                                                          Hopefully I will remember this link if I need something like that later on.

                                                                                          1. 2

                                                                                            I have to say, this looks really cool. Although in my case, I’m expecting cooperation from the target being debugged.

                                                                                            My recommendation, coolness aside, for the ECFS part is that Ryan is pretty darn good with the ugly details of ELF and his code and texts are valuable sources of information on otherwise undocumented quirks.

                                                                                            Your senseye program might be relevant depending on how customizable (or live customizable) the UI is. The stack on which its built is completely unknown to me. Do you have videos/posts where you use it to debug and/or find some particular piece of information?

                                                                                            I think the only public trace of that is https://arcan-fe.com/2015/05/24/digging-for-pixels/ but it only uses a fraction of the features. The cases I use it for on about a weekly basis touch upon materials that are NDAd.

I have a blogpost coming up on how the full stack itself maps into debugging and what the full stack is building towards, but the short short (yet long, sorry for that, the best I could do at the moment) version:

                                                                                            Ingredients:

Arcan is a display server - a poor word for an output control, rendering and desktop IPC subsystem. The IPC subsystem is referred to as SHMIF. It also comes with a mid-level client API, TUI, which roughly corresponds to ncurses but with a more desktop-y feature set, and sidesteps terminal protocols for better window manager integration.

The SHMIF IPC part that is similar to a ‘Window’ in X is referred to as a segment. It is a typed container composed of one big block (the video frame), a number of small chunked blocks (audio frames), and two ring buffers as input/output queues that carry events and file descriptors.

Durden acts as the window manager (meta-UI). This mostly means input mapping, configuration tracking, interactive data routing and window layouting.

Senseye comes in three parts. The data providers, sensors, have some means of sampling with basic statistics (memory, file, ..), which gets forwarded over SHMIF to Durden. The second part is analysis and visualization scripts built on the scripting API in Arcan. Lastly there are translators: one-off parsers that take some incoming data from SHMIF, parse it and render some human-useful, human-level output, optionally annotated with parsing-state metadata.

                                                                                            Recipe:

A client gets a segment on connection, and can request additional ones. But the more interesting scenario is that the WM (Durden in this case) can push a segment as a means of saying ‘take this, I want you to do something with it’, and the type maps to whatever UI policy the WM cares about.

One such type is Debug. If a client maps this segment, it is expected to populate it with whatever debugging/troubleshooting information the developer deemed relevant. This is the cooperative stage: it can be activated and deactivated at runtime without messing with STDERR, and we can stop with the printf() crap.

The thing that ties it all together - if a client doesn’t map a segment that was pushed on it, because it doesn’t want to or already has one, the shmif-api library can sneakily map it and do something with it instead. Like providing a default debug interface that prepares the process for attaching a debugger, or activating one of those senseye sensors, or …

                                                                                            Hierarchical dynamic debugging, both cooperative and non-cooperative, bootstrapped by the display server connection - retaining chain of trust without a sudo ptrace side channel.

                                                                                            Here’s a quick PoC recording: https://youtu.be/yBWeQRMvsPc where a terminal emulator (written using TUI) exposes state machine and parsing errors when it receives a “pushed” debug window.

                                                                                            So what I’m looking into right now is writing the “fallback” debug interface, with some nice basics, like stderr redirect, file descriptor interception and buffer editing, and a TUI for lldb to go with it ;-)
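
(The stderr redirect bit, at least, is pleasantly small - in-process it is essentially just swapping fd 2 for a pipe that the debug window drains; a sketch, not the actual shmif code:)

    #include <unistd.h>

    /* Swap stderr for the write end of a pipe; whatever renders the
     * debug window reads/draws from the returned descriptor. */
    static int redirect_stderr(void)
    {
        int p[2];
        if (pipe(p) == -1)
            return -1;
        dup2(p[1], STDERR_FILENO);  /* fprintf(stderr, ...) now lands in the pipe */
        close(p[1]);
        return p[0];
    }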

                                                                                            The long term goal for all this is “every byte explained”, be able to take something large (web browser or so) and have the tools to sample, analyse, visualise and intercept everything - show that the executing runtime is much more interesting than trivial artefacts like source code.

                                                                                            1. 1

Thanks! After reading this reply, I’ve skimmed your latest post submitted here and on HN. I’ve added it to my reading list to consider more carefully later.

                                                                                              I don’t fully understand everything yet but get the gist of it for a number of pieces.

                                                                                              I think the only public trace of that is https://arcan-fe.com/2015/05/24/digging-for-pixels/ but it only uses a fraction of the features.

Thanks, this gives me a better understanding. I wouldn’t mind seeing more examples like this, even if contrived.

In my case I’m not (usually) manipulating (literal) images or video/audio streams though. Do you think your project would be very helpful for program state and execution visualization? I’m thinking of something like Online Python Tutor. (Its source is available but unfortunately everything is mixed together and it’s not easy to just extract the visualization portion. Plus, I need it to be more extensible.)

For example, could you make it so that you could manually view the result for a given user-input width, then display the edges found (either overlaid or separately) and finally, after playing around with it a bit (and possibly with objective functions other than edges), automatically find the best width as shown in the video? (And would this be something that’s easy to do?) Basically, a more interactive workflow.

                                                                                              The thing that ties it all together - if a client doesn’t map a segment that was pushed on it, because it doesn’t want to or already have one, the shmif-api library can sneakily map it and do something with it instead.

Maybe this is what you already meant here and by your “fallback debug interface”, but how about having a separate process for “sneaky mapping”? So SHMIF remains a “purer” IPC, but you can add an extra process in the pipeline to do this kind of mapping. (And some separate default/automation can be toggled to have it happen automatically.)

                                                                                              Hierarchical dynamic debugging, both cooperative and non-cooperative, bootstrapped by the display server connection - retaining chain of trust without a sudo ptrace side channel.

                                                                                              Here’s a quick PoC recording: https://youtu.be/yBWeQRMvsPc where a terminal emulator (written using TUI) exposes state machine and parsing errors when it receives a “pushed” debug window.

Very nice! Assuming I understood correctly, this takes care of the extraction (or, in your architecture, push) portion of the debugging.

                                                                                              1. 3

                                                                                                Just poke me if you need further clarification.

For example, could you make it so that you could manually view the result for a given user-input width, then display the edges found (either overlaid or separately) and finally, after playing around with it a bit (and possibly with objective functions other than edges), automatically find the best width as shown in the video? (And would this be something that’s easy to do?) Basically, a more interactive workflow.

The real tool is highly interactive - that’s the basic mode of operation; it’s just the UI that sucks, and that’s why it’s being replaced with Durden, which has been my desktop for a while now. This video shows a more interactive side: https://www.youtube.com/watch?v=WBsv9IJpkDw including live sampling of memory pages (somewhere around 3 minutes in).

Maybe this is what you already meant here and by your “fallback debug interface”, but how about having a separate process for “sneaky mapping”? So SHMIF remains a “purer” IPC, but you can add an extra process in the pipeline to do this kind of mapping. (And some separate default/automation can be toggled to have it happen automatically.)

                                                                                                It needs both, I have a big bag of tricks for the ‘in process’ part, and with YAMA and other restrictions on ptrace these days the process needs some massage to be ‘external debugger’ ready. Though some default of “immediately do this” will likely be possible.
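
(With Yama’s default ptrace_scope=1, the “massage” is essentially the target opting in from inside the process before the external debugger attaches - something like:)

    #include <sys/prctl.h>
    #include <sys/types.h>

    /* Allow a specific debugger pid (or any, via PR_SET_PTRACER_ANY)
     * to ptrace this process despite Yama's ptrace_scope=1. Sketch only. */
    int allow_debugger(pid_t debugger_pid)
    {
        return prctl(PR_SET_PTRACER, debugger_pid, 0, 0, 0);
    }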

I’ve so far just thought about it interactively, with the sort-of goal that it should be, at most, 2-3 keypresses from having a window selected to digging around inside its related process, no matter what you want to measure or observe. https://github.com/letoram/arcan/blob/master/src/shmif/arcan_shmif_debugif.c (not finished by any stretch) binds the debug window to the TUI API and will present a menu.

                                                                                                Assuming I understood correctly, this takes care of the extraction (or in your architecture, push) portion of the debugging

                                                                                                Exactly.

                                                                                                1. 2

                                                                                                  Thanks. So I looked a bit more into this.

                                                                                                  I think the most interesting part for me at the moment is the disassembly.

                                                                                                  I tried to build it just to see. I eventually followed these instructions but can’t find any Senseye related commands in any menu in Durden (global or target).

                                                                                                  I think I managed to build senseye/senses correctly.

                                                                                                  Nothing obvious stands out in tools. I tried both symlinks

                                                                                                  /path/to/durden/durden/tools/senseye/senseye
                                                                                                  /path/to/durden/durden/tools/senseye/senseye.lua
                                                                                                  

                                                                                                  and

                                                                                                  /path/to/durden/durden/tools/senseye
                                                                                                  /path/to/durden/durden/tools/senseye.lua
                                                                                                  

                                                                                                  Here are some other notes on the build process

                                                                                                  Libdrm

On my system, the include flag -I/usr/include/libdrm and the linker flag -ldrm are needed. I don’t know CMake, so I don’t know where to add them. (I manually edited and ran the commands make VERBOSE=1 was running to get around this.)

                                                                                                  I had to replace some CODEC_* with AV_CODEC_*

                                                                                                  Durden

Initially, Durden would not start without -p /path/to/resources, saying some things were broken. I can’t reproduce it anymore.

                                                                                                  Senseye
                                                                                                  cmake -DARCAN_SOURCE_DIR=/path/to/src ../senses
                                                                                                  

complains about ARCAN_TUI_INCLUDE_DIR and ARCAN_TUI_LIBRARY not being found:

                                                                                                  Make Error: The following variables are used in this project, but they are set to NOTFOUND.
                                                                                                  Please set them or make sure they are set and tested correctly in the CMake files:
                                                                                                  ARCAN_TUI_INCLUDE_DIR
                                                                                                  
                                                                                                  Capstone

                                                                                                  I eventually installed Arcan instead of just having it built and reached this error

                                                                                                  No rule to make target 'capstone/lib/libcapstone.a', needed by 'xlt_capstone'.
                                                                                                  

                                                                                                  I symlinked capstone/lib64 to capstone/lib to get around this.

                                                                                                  Odd crashes

Sometimes Durden crashed (or at least exited without notice), for example when I tried changing the resolution from inside it.

                                                                                                  Here’s an example:

                                                                                                  Improper API use from Lua script:
                                                                                                  	target_disphint(798, -2147483648), display dimensions must be >= 0
                                                                                                  stack traceback:
                                                                                                  	[C]: in function 'target_displayhint'
                                                                                                  	/path/to/durden/durden/menus/global/open.lua:80: in function </path/to/durden/durden/menus/global/open.lua:65>
                                                                                                  
                                                                                                  
                                                                                                  Handing over to recovery script (or shutdown if none present).
                                                                                                  Lua VM failed with no fallback defined, (see -b arg).
                                                                                                  
                                                                                                  Debug window

                                                                                                  I did get target->video->advanced->debug window to run though.

                                                                                                  1. 2

                                                                                                    I’d give it about two weeks before running senseye as a Durden extension is in a usable shape (with most, but not all features from the original demos).

                                                                                                    A CMake FYI - normally you can patch the CMakeCache.txt and just make. Weird that it doesn’t find the header though, src/platform/cmake/FindGBMKMS.cmake quite explicitly looks there, hmm…

                                                                                                    The old videos represent the state where senseye could run standalone and did its own window management. For running senseye in the state it was before I started breaking/refactoring things the setup is a bit different and you won’t need durden at all. Just tested this on OSX:

                                                                                                    1. Revert to an old arcan build ( 0.5.2 tag) and senseye to the tag in the readme.
                                                                                                    2. Build arcan with -DVIDEO_PLATFORM=sdl (so you can run inside your normal desktop) and -DNO_FSRV=On so the recent ffmpeg breakage doesn’t hit (the AV_CODEC stuff).
                                                                                                    3. Build the senseye senses like normal, then arcan /path/to/senseye/senseye

                                                                                                    Think I’ve found the scripting error, testing when I’m back home - thanks.

The default behavior on a scripting error is to shut down forcibly even if it could recover, in order to preserve state in the log output. The -b argument lets you set a new app (or the same one) to switch to and migrate any living clients to; arcan -b /path/to/durden /path/to/durden would recover “to itself”. Surprisingly enough, this can be so fast that you don’t notice it has happened :-)

                                                                                                    1. 1

                                                                                                      Thanks, with these instructions I got it compiled and running. I had read the warning in senseye’s readme but forgot about it after compiling the other parts.

                                                                                                      I’m still stumbling around a bit, though that’s what I intended to do.

                                                                                                      So it looks like the default for sense_mem is to not interrupt the process. I’m guessing the intended method is to use ECFS to snapshot the process and view later. But I’m actually trying to live view and edit a process.

                                                                                                      Is there a way to view/send things through the IPC?

                                                                                                      From the wiki:

                                                                                                      The delta distance feature is primarily useful for polling sources, like the mem-sense with a refresh clock. The screenshot below shows the alpha window picking up on a changing byte sequence that would be hard to spot with other settings.

                                                                                                      Didn’t quite understand this example. Mem diff seems interesting in general.

                                                                                                      For example, I have a program that changes a C variable’s value every second. Assuming we don’t go read the ELF header, how can senseye be used to find where that’s happening?
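
(The toy target I have in mind is nothing more elaborate than something like this:)

    #include <stdio.h>
    #include <unistd.h>

    /* Toy target: one C variable that changes once a second. */
    volatile unsigned counter;

    int main(void)
    {
        for (;;) {
            counter++;
            printf("counter = %u\n", counter);
            sleep(1);
        }
    }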

                                                                                                      From another part of the wiki

                                                                                                      and the distinct pattern in the point cloud hints that we are dealing with some ASCII text.

This could use some more explanation. How can you tell it’s ASCII from just a point cloud?

                                                                                                      Minor questions/remark

                                                                                                      Not urgent in any way

                                                                                                      • Is there a way to start the process as a child so ./sense_mem needs less permissions?
                                                                                                      • Is there a way to view registers?
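
(For comparison, the plain-ptrace way of peeking at registers is small once a tracer is attached - I’m mostly wondering whether senseye exposes an equivalent. A sketch, x86-64 only:)

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>

    /* Print a few registers of an already-attached, stopped tracee. */
    void show_regs(pid_t pid)
    {
        struct user_regs_struct regs;
        if (ptrace(PTRACE_GETREGS, pid, NULL, &regs) == -1) {
            perror("PTRACE_GETREGS");
            return;
        }
        printf("rip=%llx rsp=%llx rax=%llx\n", regs.rip, regs.rsp, regs.rax);
    }
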
                                                                                                      Compiling

                                                                                                      Compiling senseye without installing Arcan with cmake -DARCAN_SOURCE_DIR= still gives errors.

                                                                                                      I think the first error was about undefined symbols that were in platform/platform.h (arcan_aobj_id and arcan_vobj_id).

                                                                                                      I can try to get the actual error message again if that’s useful.

                                                                                                      1. 2

                                                                                                        Thanks, with these instructions I got it compiled and running. I had read the warning in senseye’s readme but forgot about it after compiling the other parts. I’m still stumbling around a bit, though that’s what I intended to do.

In the state you’re seeing it, it is very much a research project hacked together while waiting at airports :-) I’ve accumulated enough of an idea to distill it into something more practical and thought through - but it’s not quite there yet.

                                                                                                        Is there a way to view/send things through the IPC?

At the time it was written, I had just started to play with that (if you look at the presentation slides, that’s the fuzzing bit; the actual sending works very much like a clipboard paste operation). The features are in the IPC system now, though not mapped into the sensors.

                                                                                                        So it looks like the default for sense_mem is to not interrupt the process. I’m guessing the intended method is to use ECFS to snapshot the process and view later. But I’m actually trying to live view and edit a process.

Yeah, sense_mem was just about getting the whole “what does it take to sample/observe process memory without poking it with ptrace” part working. Those controls and some other techniques are intended to be bootstrapped via the whole IPC system in the way I talked about earlier. That should kill the privilege problem as well.
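
(For reference, that kind of ptrace-free sampling can be done with process_vm_readv(2) - a minimal sketch, not the actual sense_mem code, and still subject to the Yama/CAP_SYS_PTRACE checks:)

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    /* Usage: ./peek <pid> <hex-address> - dumps 64 bytes of the target. */
    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "use: %s <pid> <hexaddr>\n", argv[0]); return 1; }
        unsigned char buf[64];
        struct iovec local  = { .iov_base = buf, .iov_len = sizeof(buf) };
        struct iovec remote = { .iov_base = (void *)strtoull(argv[2], NULL, 16),
                                .iov_len  = sizeof(buf) };

        if (process_vm_readv((pid_t)atoi(argv[1]), &local, 1, &remote, 1, 0) < 0) {
            perror("process_vm_readv");
            return 1;
        }
        for (size_t i = 0; i < sizeof(buf); i++)
            printf("%02x%c", buf[i], (i + 1) % 16 ? ' ' : '\n');
        return 0;
    }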

                                                                                                        Didn’t quite understand this example. Mem diff seems interesting in general.

                                                                                                        The context menu for a data window should have a refresh clock option. If that’s activated, it will re-sample the current page and mark which bytes changed. Then the UI/shader for alpha window should show which bytes those are.

                                                                                                        For example, I have a program that changes a C variable’s value every second. Assuming we don’t go read the ELF header, how can senseye be used to find where that’s happening?

                                                                                                        The intended workflow was something like “dig around in memory, look at projections or use the other searching tools to find data of interest” -> attach translators -> get symbolic /metadata overview.

                                                                                                        and the distinct pattern in the point cloud hints that we are dealing with some ASCII text. This could use some more explanation. How can you tell its ASCII from just a point cloud??

See the linked videos on “voyage of the reverse” and the recon 2014 video of “cantor dust”, i.e. a feedback loop of projections + training + experimentation. The translators were the tool intended to make the latter stage easier.


                                                                                  3. 3

                                                                                    If you are looking for references on debuggers then the book How Debuggers Work may be helpful.

                                                                                  1. 2

                                                                                    missing (and own horn tooting): https://github.com/letoram/senseye/wiki

                                                                                    1. 6

Having gone through the process multiple times in my more formative years, I can only chime in and say that the value of this exercise can’t be overstated - a lot of computing unfolds the more you do it ‘raw’ (including reversing), and the deeper you dive (everything is buggy, the bugs need to be discovered and replicated, and timing is a bitch), the more you get out of it. It’s the perfect area for training reverse engineering, cracking, …

                                                                                      See also: https://patpend.net/articles/ar/aev021.txt

                                                                                      1. 4

                                                                                        FWIW - though I rarely agree with the decisions in libinput on this matter or others, who-t deserves praise for the work, rigour and analysis done here.

That said, there’s some kind of “facepalm” to be had in that more effort and real engineering is being put into assuring a physical mapping between mouse sample rate/sensor resolution and physical travel than into other parts of the display stack (big topic, but it ties into mixed-DPI output and Wayland’s quite frankly retarded solution of using buffer scale factors).

My personal opinion/experience is that acceleration is the wrong solution to the problem - there’s a draft in my article pile digging into ‘why’. The problem is that with big/multiple screens, it’s the travel time (= the effort of providing the input) for moving the mouse cursor between different targets (windows) that warrants breaking linearity.

                                                                                        An alternative is what I added in durden for keyboard-dominant window management schemes, where the WM remembers position per window and ‘warps’ when your keyboard bindings change the selected window.

For stacking/floating, get an eye tracker(!) and let the gaze region determine the cursor start position (biased with a Sobel filter and contrast within that region), with mouse motion setting a linear delta from there. Nobody has done that yet though ;-)