1. 7

    So hyped! This year I’m gonna attack each problem in three languages: Haskell (as always), Rust (new last year) and Racket (can’t go a year without Lisp!).

    1. 11

      This week I’m preparing to launch Octobox on the GitHub Marketplace and start building a sustainable business around the project.

      1. 3

        Just wanted to say: Octobox is fantastic!

        1. 3

          Very exciting! Good luck :)

          1. 1
          1. 6

            Working on the Rust compiler in my free time. There’s so much work to be done with regard to inline assembly. I’m also hoping to spend more time with Zig, Nim & Crystal.

            1. 1

              I always found it a little sketchy that companies pay you based on your location. Aren’t they supposed to pay for my time and skills? What does where I am have anything to do with that?

              1. 10

                Because a dollar in Omaha goes a lot further than a dollar in London? If the company can’t move, or you can’t move, how else are they going to bridge that gap?

                1. 2

                  how else are they going to bridge that gap?

                  With money. :)

                  1. 5

                    /taps nose

                2. 10

                  Supply and demand, innit? If they could get away with paying a midwest salary in california they would, but then they wouldn’t have anyone to hire.

                  1. 6

                    Because 100k in SF is not the same as 100k in Miami. Not only are taxes completely different, the cost of living is too. Your skills will be evaluated as well, but so will the cost of living and the income tax or lack of it.

                    1. 2

                      People often think that the cost of living is what drives the difference in pay across locations, but that’s merely a secondary influence. The primary driver is supply & demand. Consider NYC & London: the former both pays more and has a lower cost of living than the latter, even though both are tier 1 global cities.

                    2. 2

                      That might start to go away if remote work becomes more common. It makes no sense for every developer in the world to pile into one city when they could do the same thing from anywhere.

                      1. 2

                        Most salaries are a combination of what your skills are worth and the amount of disposable income you want. The first is independent of location, while the other is almost completely determined by it. Sure, you can live frugally and save a lot, but most people probably spend more when basic costs (housing, food) increase.

                        Salaries in San Francisco can be crazy high compared to other places, but cost of living is also substantially higher. Your net disposable income can go up if you move, even if your salary goes down as well. This is also why working remotely can be a big differentiator if you are working for a company located in a high-cost area (SF, New York, London, etc.) and you yourself live in a cheap place.

                        1. 2

                          How would you do it differently? Specifically I think the case they’re solving for is a worker on the west coast wanting to work remotely/from the remote office of a company from the Midwest, for example.

                          1. 1

                            Keep labor costs down primarily. They pay you as little as they can get away with. There’s exceptions but that’s the rule.

                          1. 7

                            I’m planning to wipe Windows (along with my Linux VM dev setup) on my Thinkpad P71 and then install NixOS as the only operating system.

                            1. 3

                              Sounds fun! Have you used Linux as a full desktop OS before? Curious to know why your choice has fallen on NixOS. (I, myself, have been thinking of switching from Arch to NixOS)

                              1. 4

                                I use it to develop web and mobile apps using Haskell. i3 is my preferred window manager (I don’t use desktop managers). Here’s a screenshot.

                                Curious to know why your choice has fallen on NixOS

                                I can declaratively configure everything (packages, services, system configuration, kernel version, display drivers, etc.) and use that to reproduce the exact instance anytime I want. Here’s the configuration I use for my Thinkpad P71.
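
                                For anyone who hasn’t seen NixOS before, a minimal configuration.nix sketch of what “declarative” means here (the option names come from the NixOS manual; the specific packages and driver are illustrative, not the parent’s actual config):

                                ```nix
                                { config, pkgs, ... }:
                                {
                                  # Pin a kernel version declaratively.
                                  boot.kernelPackages = pkgs.linuxPackages_latest;

                                  # X with i3 as the window manager, no desktop environment.
                                  services.xserver.enable = true;
                                  services.xserver.windowManager.i3.enable = true;

                                  # Display drivers are just another option (illustrative).
                                  services.xserver.videoDrivers = [ "nvidia" ];

                                  # System-wide packages.
                                  environment.systemPackages = with pkgs; [ git ghc ];
                                }
                                ```

                                Rebuilding from this file (`nixos-rebuild switch`) reproduces the same system state on any machine.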

                            1. 3
                              • Working on better inline assembly support in rustc.
                                • Already fixed some smaller issues, but we are getting ready for the Inline Assembly RFC to hit the rfcs repo soon.
                              • Hacking on my control group crate for Rust.
                              • Hopefully spending the Sunday working on my super secret project in Rust.
                              • I might finish some stuff that I didn’t finish over the week for $work. Generally to avoid burnout and the associated concerns I avoid working on $work stuff over the weekend, but I’m super excited for what we can accomplish so I might give it a few more hours.

                              Oh and correct a few mistakes in my Operating System Development for the HiFive-1 RISC-V board article!

                              1. 2

                                Interesting. I switched to MegaSync after they announced they were dropping support for filesystems other than “vanilla unencrypted” ext4, due to extended attributes. (Frankly, it didn’t make sense when I read it and it still doesn’t seem to make sense to me.)

                                Now that this might be a workaround, if it works seamlessly (I mostly store PDFs, so I guess transfer speed et al. are OK as long as it’s reasonable), I might give them another shot.

                                Cool project!

                                1. 2

                                  Unsure, to be honest; nothing specific planned. I guess I should get a head start on writing blog posts and some Rust stuff. But most importantly, rest. The past two weeks have been way too hectic and I’ve neglected important things.

                                  1. 6

                                    Bit banging Ethernet on a RISC-V board in Rust. Can I get more hipster than this?

                                    1. 1

                                      That’s pretty hipster. I think the only thing that tops it is building the Ethernet and RISC-V in your Novena using Qflow. ;)

                                    1. 7

                                      Finishing a Bare Bones wiki page for OSDev about the HiFive-1 RISC-V board!

                                      1. 4

                                        More Elixir learning / hacking! Anyone else doing this?

                                        I am thinking about a project that could map well to it. But I am very far away from even putting together a simple application. Learning Elixir, Phoenix and Ecto.

                                        Also, family time :-)

                                        1. 1

                                          I also occasionally spend time learning Elixir! Would love to collaborate on some hack!

                                        1. 2

                                          This is fantastic, so much great content. Appreciate the work and effort you put into this!

                                          P.S.: Is there an RSS feed for the Planet?

                                          1. 2

                                            There is - viewing the source will give the URL, but otherwise it’s https://crustaceans.hmmz.org/rss20.xml.

                                          1. 10

                                            The security researcher also recommended we consider using GPG signing for Homebrew/homebrew-core. The Homebrew project leadership committee took a vote on this and it was rejected non-unanimously due to workflow concerns.

                                            This is incredibly sad and makes me wonder what part of the workflow would have been impacted. Git automatically signs the commits I make for me once I have entered my password once, thanks to gpg-agent.

                                            1. 3

                                              They have a bot which commits hashes for updated binary artifacts. If all commits needed to be signed, it’d need an active key, and now you have a GPG key on the Jenkins server, leaving you no better off.

                                              1. 2

                                                But gpg cannot work with multiple smartcards at the same time, so maybe that’s a reason for some people. Either way, there are simpler ways to deal with signing than gpg.

                                                1. 1

                                                  GPG signing wouldn’t have fixed this vulnerability as such, since presumably the same people not thinking about the visibility of the bot’s token would have equally failed to think about the visibility of the bot’s hypothetical private key.

                                                1. 4

                                                  Plain old git log and git reflog, really. Sometimes I use the webui on GitHub or GitLab, but they are quite clunky.

                                                  1. 5

                                                    Finished my thesis and submitted it. Tomorrow’s the presentation, so that pretty much occupies me.

                                                    Once that is done, a break. Whether it’s an actual, proper break or just a break to do sideprojects, I am not sure yet.

                                                    1. 3

                                                      Mostly going to work on finishing up my thesis, and then I’ll be working on a new OS with extreme attention to scalability over multiple cores.

                                                      Other than this, I’ve been spending some time on the Crystal language. It’s quite niche, can’t wait for it to have support for parallelism.

                                                      1. 11

                                                        I reiterate the request to add OS tag, 1 2 3.

                                                        1. 4

                                                          I think it should be a systems tag, to clarify it’s not about anything OS-related (lest people tag, say, Windows posts) but rather systems development: not just kernel/OS stuff, but drivers and some embedded too.

                                                          1. 3

                                                            os-dev?

                                                          2. 2

                                                            Second. We need a tag!

                                                            1. 1

                                                              Actually, whenever I post something about Jehanne, I feel the same need.

                                                            1. 4

                                                              I’m trying to hack together a “git symlink as a service” product. Ideally, it’s a simple URL that should not change in the future, and resolves to a git repository hosted somewhere on the internet. In light of the GitHub acquisition, this could be handy in avoiding things like this in the future.

                                                              1. 5

                                                                Last week

                                                                • Fiddled with different ways of attaching to processes and viewing their states.
                                                                • Some other technical stuff that went well

                                                                This was for the low level debugger I’m trying to make.

                                                                So, from what I’ve read and seen, tools that attach to and inspect other processes tend to just use gdb under the hood. I was hoping for a more minimal debugger to read and copy.

                                                                lldb almost does what I need because of its existing external Python interface, but documentation for writing a stand-alone tool (started from outside the debugger rather than inside) is scattered. I haven’t managed to make it single-step.

                                                                Using raw ptrace and trying to read the right memory locations seems difficult because of things like address randomization. And getting more information involves working with even more memory mapping and other conventions.

                                                                I wish all these conventions were written down in some machine-readable, language-agnostic way so I don’t have to human-read each one and try to implement it. Right now this is all implicit in the source code of something like gdb. This is a lot of extra complexity which has nothing to do with what I’m actually trying to accomplish.

                                                                The raw ptrace approach would also likely only work for Linux, and would possibly be strongly tied to C or assembly.

                                                                The problem with the latter is that eventually I will want to do this to interpreters written in C or even interpreters written in interpreters written in C. Seems like even more incidental complexity in that way.

                                                                An alternative is to log everything and have a much fancier log viewer after the fact. This way the debugged program only needs to emit the right things to a file or stdout. But this loses the possibility of any interactivity.

                                                                Plus, all of this would only be worth it if I can get some state visualization customizable to that specific program (because usually it will be an interpreter).

                                                                Other questions: How to avoid duplicating the work when performing operations from “inside the program” and from “outside” through the eventual debugger?

                                                                Other ideas: Try to do this with a simpler toy language/system to get an idea of how well using such a workflow would work in the first place.

                                                                Some references

                                                                This week

                                                                • Well, now that I have a better idea of how deep this rabbit hole is, I need to decide what to do. Deciding is much harder than programming…
                                                                • Or maybe I should do one of the other thousand things I want to and have this bit of indecision linger some more.
                                                                1. 5

                                                                  I wrote a very simple PoC debugger in Rust if you are interested in the very basics: https://github.com/levex/debugger-talk

                                                                  It uses ptrace(2) under the hood, as you would expect.
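
                                                                  For anyone curious what that looks like, the core mechanism is small. A minimal sketch (my own illustration, not code from the linked project) that forks a child under PTRACE_TRACEME and single-steps it to completion:

                                                                  ```c
                                                                  #include <stdio.h>
                                                                  #include <sys/ptrace.h>
                                                                  #include <sys/types.h>
                                                                  #include <sys/wait.h>
                                                                  #include <unistd.h>

                                                                  int main(void) {
                                                                      pid_t child = fork();
                                                                      if (child == 0) {
                                                                          /* Child: ask to be traced, then exec a short-lived program. */
                                                                          ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                                                                          execl("/bin/true", "true", (char *)NULL);
                                                                          _exit(1);
                                                                      }
                                                                      int status;
                                                                      waitpid(child, &status, 0);   /* child stops on the execve */
                                                                      long steps = 0;
                                                                      while (WIFSTOPPED(status)) {
                                                                          if (ptrace(PTRACE_SINGLESTEP, child, NULL, NULL) < 0)
                                                                              break;
                                                                          waitpid(child, &status, 0);
                                                                          steps++;
                                                                      }
                                                                      printf("child ran for %ld single-stepped instructions\n", steps);
                                                                      return 0;
                                                                  }
                                                                  ```

                                                                  Breakpoints, register dumps and memory reads are all further ptrace requests layered on top of this attach/stop/resume loop.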

                                                                  1. 1

                                                                    Thanks! I’ve had a look at your slides and skimmed some of your code (I don’t have Rust installed, or running it would be the first thing I’d do).

                                                                    I see that you’re setting breakpoints by address. How do you figure out the address at which you want to set a breakpoint though?

                                                                    How long did it take to make this? And can you comment on how hard it would be to continue from this point on? For example reading C variables and arrays? Or getting line numbers from the call stack?

                                                                    1. 2

                                                                      Hey, sorry for the late reply!

                                                                      In the talk I was setting breakpoints by address indeed. This is because the talk focused on the lower-level parts. To translate line numbers into addresses and vice versa you need access to the “debug information”. This is usually stored in the executable (as described by the DWARF format). There are libraries that can help you with this (just as the disassembly is done by an excellent library instead of my own code).

                                                                      This project took about a week of preparation and work. I was familiar with the underlying concepts, however Rust and its ecosystem was a new frontier for me.

                                                                      Reading C variables is already done :-), reading arrays is just a matter of a new command and reading variables sequentially.
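
                                                                      At the ptrace level, “reading a C variable” is just a PTRACE_PEEKDATA at the right address. A hypothetical sketch (the variable name and value are made up; here fork() sharing the address layout stands in for resolving the address via DWARF):

                                                                      ```c
                                                                      #include <signal.h>
                                                                      #include <stdio.h>
                                                                      #include <sys/ptrace.h>
                                                                      #include <sys/types.h>
                                                                      #include <sys/wait.h>
                                                                      #include <unistd.h>

                                                                      /* A variable in the "debuggee"; name and value are invented. */
                                                                      long counter = 42;

                                                                      int main(void) {
                                                                          pid_t child = fork();
                                                                          if (child == 0) {
                                                                              ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                                                                              raise(SIGSTOP);          /* stop so the parent can inspect us */
                                                                              _exit(0);
                                                                          }
                                                                          int status;
                                                                          waitpid(child, &status, 0);  /* child is now stopped */
                                                                          /* fork() duplicates the address space, so &counter is also valid
                                                                           * in the child; a real debugger would get it from debug info. */
                                                                          long value = ptrace(PTRACE_PEEKDATA, child, &counter, NULL);
                                                                          printf("child's counter = %ld\n", value);
                                                                          /* An array would just be several sequential PEEKDATA reads. */
                                                                          ptrace(PTRACE_DETACH, child, NULL, NULL);
                                                                          return 0;
                                                                      }
                                                                      ```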

                                                                      1. 1

                                                                        Thanks for coming back to answer! Thanks to examples from yourself and others I did get some stuff working (at least on the examples I tried) like breakpoint setting/clearing, variable read/write and simple function calls.

                                                                        Some things from the standards/formats are still unclear, like why I only need to add the start of the memory region extracted from /proc/pid/maps if it’s not 0x400000.

                                                                        This project took about a week of preparation and work. I was familiar with the underlying concepts, however Rust and its ecosystem was a new frontier for me.

                                                                        A week doesn’t sound too bad. Unfortunately, I’m in the opposite situation using a familiar system to do something unfamiliar.

                                                                        1. 2

                                                                          I think that may have to do with whether the executable you are “tracing” is a PIE (Position-Independent Executable) or not.
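
                                                                          A quick way to see this, assuming Linux/x86-64: read the first mapping in /proc/pid/maps and compare it against the traditional non-PIE base (an illustrative sketch, inspecting its own process):

                                                                          ```c
                                                                          #include <inttypes.h>
                                                                          #include <stdio.h>

                                                                          int main(void) {
                                                                              /* The first line of /proc/self/maps is the lowest mapping,
                                                                               * normally the executable image itself. */
                                                                              FILE *f = fopen("/proc/self/maps", "r");
                                                                              if (!f) { perror("maps"); return 1; }
                                                                              uintptr_t base = 0;
                                                                              if (fscanf(f, "%" SCNxPTR, &base) == 1) {
                                                                                  /* Non-PIE x86-64 binaries traditionally load at 0x400000;
                                                                                   * a PIE gets a randomized base that must be added to the
                                                                                   * addresses printed by nm. */
                                                                                  printf("load base: 0x%" PRIxPTR "\n", base);
                                                                              }
                                                                              fclose(f);
                                                                              return 0;
                                                                          }
                                                                          ```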

                                                                          Good luck with your project, learning how debuggers work by writing a simple one teaches you a lot.

                                                                      2. 2

                                                                        For C/assembly (and I’ll assume a modern Unix system) you’ll need to read up on ELF (object and executable formats) and DWARF (debugging records in an ELF file) that contain all that information. You might also want to look into the GDB remote serial protocol (I know it exists, but I haven’t looked much into it).

                                                                        1. 1

                                                                          Well, I got some addresses out of nm ./name-of-executable but can’t peek at those directly. I probably need an offset of some sort?

                                                                          There’s also dwarfdump I haven’t tried yet. I’ll worry about how to get this info from inside my tracer a bit later.

                                                                          Edit: Nevermind, it might have just been the library I’m using. Seems like I don’t need an offset at all.

                                                                          1. 2

                                                                            I might have missed some other post, but is there a bigger writeup on this project of yours? As to the specifics of digging up such information, take a look at ECFS - https://github.com/elfmaster/ecfs

                                                                            1. 1

                                                                              I might have missed some other post, but is there a bigger writeup on this project of yours?

                                                                              I’m afraid not, at least for the debugger subproject. This is the context. The debugger would fit in two ways:

                                                                              • Since I have a GUI maker, I can try to use it to make a graphical debugger. (Ideally, allowing custom visualizations created for each new debugging task.)
                                                                              • A debugger/editor would be useful for making and editing [Flpc](github.com/asrp/flpc) or similar. I want to be able to quickly customize the debugger to also be usable as an external Flpc debugger (instead of just a C debugger). In fact, it’d be nice if I could evolve the debugger and target (= interpreter) simultaneously.

                                                                              Although I’m mostly thinking of using it for the earlier stages of development. Even though I should already be past that stage, if I can (re)make that quickly, I’ll be more inclined to try out major architectural changes. And also add more functionality in C more easily.

                                                                              Ideally, the debugger would also be an editor (write a few instructions, set SIGTRAP, run, write a few more instructions, etc.; write some other values to memory here and there). But maybe this is much more trouble than it’s worth.

                                                                              Your senseye program might be relevant depending on how customizable (or live-customizable) the UI is. The stack on which it’s built is completely unknown to me. Do you have videos/posts where you use it to debug and/or find some particular piece of information?

                                                                              As to the specifics of digging up such information, take a look at ECFS - https://github.com/elfmaster/ecfs

                                                                              I have to say, this looks really cool. Although in my case, I’m expecting cooperation from the target being debugged.

                                                                              Hopefully I will remember this link if I need something like that later on.

                                                                              1. 2

                                                                                I have to say, this looks really cool. Although in my case, I’m expecting cooperation from the target being debugged.

                                                                                My recommendation, coolness aside, for the ECFS part is that Ryan is pretty darn good with the ugly details of ELF and his code and texts are valuable sources of information on otherwise undocumented quirks.

                                                                                Your senseye program might be relevant depending on how customizable (or live customizable) the UI is. The stack on which its built is completely unknown to me. Do you have videos/posts where you use it to debug and/or find some particular piece of information?

                                                                                I think the only public trace of that is https://arcan-fe.com/2015/05/24/digging-for-pixels/ but it only uses a fraction of the features. The cases I use it for on about a weekly basis touch upon materials that are NDAd.

                                                                                I have a blogpost coming up on how the full stack itself map into debugging and what the full stack is building towards, but the short short (yet long, sorry for that, the best I could do at the moment) version:

                                                                                Ingredients:

                                                                                Arcan is a display server - a poor word for output control, rendering and desktop IPC subsystem. The IPC subsystem is referred to as SHMIF. It also comes with a mid-level client API, TUI, which roughly correlates to ncurses but with a more desktop-y feature set, and sidesteps terminal protocols for better window manager integration.

                                                                                The SHMIF IPC part that is similar to a ‘Window’ in X is referred to as a segment. It is a typed container comprised of one big block (video frame), a number of small chunked blocks (audio frames), and two ring buffers as input/output queues that carry events and file descriptors.
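
                                                                                If it helps to picture that, here is a purely hypothetical C sketch of a segment’s shape; every name below is invented for illustration and is not Arcan’s actual SHMIF API:

                                                                                ```c
                                                                                #include <stdint.h>
                                                                                #include <stdio.h>

                                                                                #define QUEUE_LEN 64

                                                                                /* Invented event record, standing in for whatever SHMIF carries. */
                                                                                struct event { uint32_t kind; uint32_t payload; };

                                                                                struct event_ring {                /* one ring buffer per direction */
                                                                                    struct event items[QUEUE_LEN];
                                                                                    uint32_t head, tail;
                                                                                };

                                                                                struct segment_sketch {
                                                                                    uint32_t type;                 /* typed container, e.g. debug */
                                                                                    uint8_t *video;                /* one big block: the video frame */
                                                                                    uint8_t *audio;                /* small chunked blocks: audio frames */
                                                                                    struct event_ring in, out;     /* input/output event queues */
                                                                                    int fds[4];                    /* file descriptors carried alongside */
                                                                                };

                                                                                int main(void) {
                                                                                    printf("sketch size: %zu bytes\n", sizeof(struct segment_sketch));
                                                                                    return 0;
                                                                                }
                                                                                ```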

                                                                                Durden acts as the window manager (meta-UI). This mostly means input mapping, configuration tracking, interactive data routing and window layouting.

                                                                                Senseye comes in three parts. The data providers, sensors, have some means of sampling with basic statistics (memory, file, ...), which gets forwarded over SHMIF to Durden. The second part is analysis and visualization scripts built on the scripting API in Arcan. Lastly there are translators: one-off parsers that take some incoming data from SHMIF, parse it and render some useful human-level output, optionally annotated with parsing-state metadata.

                                                                                Recipe:

                                                                                A client gets a segment on connection, and can request additional ones. But the more interesting scenario is that the WM (durden in this case) can push a segment as a means of saying ‘take this, I want you to do something with it’ and the type is a mapping to whatever UI policy that the WM cares about.

                                                                                One such type is Debug. If a client maps this segment, it is expected to populate it with whatever debugging/troubleshooting information the developer deemed relevant. This is the cooperative stage: it can be activated and deactivated at runtime without messing with STDERR, and we can stop with the printf() crap.

                                                                                The thing that ties it all together - if a client doesn’t map a segment that was pushed on it, because it doesn’t want to or already has one, the shmif-api library can sneakily map it and do something with it instead. Like provide a default debug interface preparing the process to attach a debugger, or activate one of those senseye sensors, or …

                                                                                Hierarchical dynamic debugging, both cooperative and non-cooperative, bootstrapped by the display server connection - retaining chain of trust without a sudo ptrace side channel.

                                                                                Here’s a quick PoC recording: https://youtu.be/yBWeQRMvsPc where a terminal emulator (written using TUI) exposes state machine and parsing errors when it receives a “pushed” debug window.

                                                                                So what I’m looking into right now is writing the “fallback” debug interface, with some nice basics, like stderr redirect, file descriptor interception and buffer editing, and a TUI for lldb to go with it ;-)

                                                                                The long term goal for all this is “every byte explained”, be able to take something large (web browser or so) and have the tools to sample, analyse, visualise and intercept everything - show that the executing runtime is much more interesting than trivial artefacts like source code.

                                                                                1. 1

                                                                                  Thanks! After reading this reply, I’ve skimmed your latest post submitted here and on HN. I’ve added it to my reading list to consider more carefully later.

                                                                                  I don’t fully understand everything yet but get the gist of it for a number of pieces.

                                                                                  I think the only public trace of that is https://arcan-fe.com/2015/05/24/digging-for-pixels/ but it only uses a fraction of the features.

                                                                                  Thanks, this gives me a better understanding. I wouldn’t mind seeing more examples like this, even if contrived.

                                                                                  In my case I’m not (usually) manipulating (literal) images or video/audio streams, though. Do you think your project would be very helpful for program state and execution visualization? I’m thinking of something like Online Python Tutor. (Its source is available, but unfortunately everything is mixed together and it’s not easy to just extract the visualization portion. Plus, I need it to be more extensible.)

                                                                                  For example, could you make it so that you could manually view the result for a given user-input width, then display the edges found (either overlaid or separately) and finally, after playing around with it a bit (and possibly with other objective functions than edges), automatically find the best width as shown in the video? (And would this be something that’s easy to do?) Basically, a more interactive workflow.

                                                                                  The thing that ties it all together - if a client doesn’t map a segment that was pushed on it, because it doesn’t want to or already have one, the shmif-api library can sneakily map it and do something with it instead.

                                                                                  Maybe this is what you already meant here and by your “fallback debug interface”, but how about having a separate process for “sneaky mapping”? So SHMIF remains a “purer” IPC, but you can add an extra process to the pipeline to do this kind of mapping. (And some separate default/automation can be toggled to have it happen automatically.)

                                                                                  Hierarchical dynamic debugging, both cooperative and non-cooperative, bootstrapped by the display server connection - retaining chain of trust without a sudo ptrace side channel.

                                                                                  Here’s a quick PoC recording: https://youtu.be/yBWeQRMvsPc where a terminal emulator (written using TUI) exposes state machine and parsing errors when it receives a “pushed” debug window.

                                                                                  Very nice! Assuming I understood correctly, this takes care of the extraction (or in your architecture, push) portion of the debugging.

                                                                                  1. 3

                                                                                    Just poke me if you need further clarification.

                                                                                    For example, could you make it so that you could manually view the result for a given user-input width, then display the edges found (either overlaid or separately) and finally, after playing around with it a bit (and possibly other objective functions than edges), automatically find the best width as shown in the video? (And would this be something that’s easy to do?) Basically, a more interactive workflow.

                                                                                    The real tool is highly interactive - that’s its basic mode of operation. It’s just the UI that sucks, which is why it’s being replaced with Durden, which has been my desktop for a while now. This video shows a more interactive side: https://www.youtube.com/watch?v=WBsv9IJpkDw including live sampling of memory pages (somewhere around 3 minutes in).

                                                                                    Maybe this is what you already meant here and by your “fallback debug interface”, but how about having a separate process for “sneaky mapping”? So SHMIF remains a “purer” IPC, but you can add an extra process in the pipeline to do this kind of mapping. (And some separate default/automation can be toggled to have it happen automatically.)

                                                                                    It needs both, I have a big bag of tricks for the ‘in process’ part, and with YAMA and other restrictions on ptrace these days the process needs some massage to be ‘external debugger’ ready. Though some default of “immediately do this” will likely be possible.

                                                                                    I’ve so far just thought about it interactively, with the sort-of goal that it should be, at most, 2-3 keypresses from having a window selected to digging around inside its related process, no matter what you want to measure or observe. https://github.com/letoram/arcan/blob/master/src/shmif/arcan_shmif_debugif.c (not finished by any stretch) binds the debug window to the TUI API and will present a menu.

                                                                                    Assuming I understood correctly, this takes care of the extraction (or in your architecture, push) portion of the debugging.

                                                                                    Exactly.

                                                                                    1. 2

                                                                                      Thanks. So I looked a bit more into this.

                                                                                      I think the most interesting part for me at the moment is the disassembly.

                                                                                      I tried to build it just to see. I eventually followed these instructions but can’t find any Senseye-related commands in any menu in Durden (global or target).

                                                                                      I think I managed to build senseye/senses correctly.

                                                                                      Nothing obvious stands out in tools. I tried both symlinks

                                                                                      /path/to/durden/durden/tools/senseye/senseye
                                                                                      /path/to/durden/durden/tools/senseye/senseye.lua
                                                                                      

                                                                                      and

                                                                                      /path/to/durden/durden/tools/senseye
                                                                                      /path/to/durden/durden/tools/senseye.lua
                                                                                      

                                                                                      Here are some other notes on the build process

                                                                                      Libdrm

                                                                                      On my system, the include flag -I/usr/include/libdrm and linker flag -ldrm are needed. I don’t know CMake, so I don’t know where to add them. (I manually edited and ran the commands make VERBOSE=1 was running to get around this.)
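                                                                                      (For what it’s worth, a hedged sketch of where such flags could go in a CMakeLists.txt - the target name some_sense here is made up; the real target names in senseye’s build files will differ:)

                                                                                      ```cmake
                                                                                      # Hypothetical sketch, not senseye's actual CMakeLists. Resolve libdrm
                                                                                      # via pkg-config instead of hardcoding -I/-l flags.
                                                                                      find_package(PkgConfig REQUIRED)
                                                                                      pkg_check_modules(DRM REQUIRED libdrm)

                                                                                      # Attach the discovered flags to the (made-up) target:
                                                                                      target_include_directories(some_sense PRIVATE ${DRM_INCLUDE_DIRS})  # e.g. -I/usr/include/libdrm
                                                                                      target_link_libraries(some_sense PRIVATE ${DRM_LIBRARIES})          # e.g. -ldrm
                                                                                      ```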

                                                                                      I had to replace some CODEC_* with AV_CODEC_*

                                                                                      Durden

                                                                                      Initially Durden without -p /path/to/resources would not start, saying some things are broken. I can’t reproduce it anymore.

                                                                                      Senseye
                                                                                      cmake -DARCAN_SOURCE_DIR=/path/to/src ../senses
                                                                                      

                                                                                      complains about ARCAN_TUI_INCLUDE_DIR and ARCAN_TUI_LIBRARY being not found:

                                                                                      CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
                                                                                      Please set them or make sure they are set and tested correctly in the CMake files:
                                                                                      ARCAN_TUI_INCLUDE_DIR
                                                                                      
                                                                                      Capstone

                                                                                      I eventually installed Arcan instead of just having it built and reached this error

                                                                                      No rule to make target 'capstone/lib/libcapstone.a', needed by 'xlt_capstone'.
                                                                                      

                                                                                      I symlinked capstone/lib64 to capstone/lib to get around this.

                                                                                      Odd crashes

                                                                                      Sometimes, Durden crashed (or at least exited without notice), e.g. when I tried changing the resolution from inside.

                                                                                      Here’s an example:

                                                                                      Improper API use from Lua script:
                                                                                      	target_disphint(798, -2147483648), display dimensions must be >= 0
                                                                                      stack traceback:
                                                                                      	[C]: in function 'target_displayhint'
                                                                                      	/path/to/durden/durden/menus/global/open.lua:80: in function </path/to/durden/durden/menus/global/open.lua:65>
                                                                                      
                                                                                      
                                                                                      Handing over to recovery script (or shutdown if none present).
                                                                                      Lua VM failed with no fallback defined, (see -b arg).
                                                                                      
                                                                                      Debug window

                                                                                      I did get target->video->advanced->debug window to run though.

                                                                                      1. 2

                                                                                        I’d give it about two weeks before running senseye as a Durden extension is in usable shape (with most, but not all, features from the original demos).

                                                                                        A CMake FYI - normally you can patch the CMakeCache.txt and just make. Weird that it doesn’t find the header though, src/platform/cmake/FindGBMKMS.cmake quite explicitly looks there, hmm…

                                                                                        The old videos represent the state where senseye could run standalone and did its own window management. For running senseye in the state it was in before I started breaking/refactoring things, the setup is a bit different and you won’t need durden at all. Just tested this on OSX:

                                                                                        1. Revert to an old arcan build ( 0.5.2 tag) and senseye to the tag in the readme.
                                                                                        2. Build arcan with -DVIDEO_PLATFORM=sdl (so you can run inside your normal desktop) and -DNO_FSRV=On so the recent ffmpeg breakage doesn’t hit (the AV_CODEC stuff).
                                                                                        3. Build the senseye senses like normal, then arcan /path/to/senseye/senseye

                                                                                        Think I’ve found the scripting error, testing when I’m back home - thanks.

                                                                                        The default behavior on scripting error is to shut down forcibly even if it could recover, in order to preserve state in the log output. The -b argument lets you set a new app (or the same one) to switch to and migrate any living clients to; arcan -b /path/to/durden /path/to/durden would recover “to itself”. Surprisingly enough, this can be so fast that you don’t notice it has happened :-)

                                                                                        1. 1

                                                                                          Thanks, with these instructions I got it compiled and running. I had read the warning in senseye’s readme but forgot about it after compiling the other parts.

                                                                                          I’m still stumbling around a bit, though that’s what I intended to do.

                                                                                          So it looks like the default for sense_mem is to not interrupt the process. I’m guessing the intended method is to use ECFS to snapshot the process and view later. But I’m actually trying to live view and edit a process.

                                                                                          Is there a way to view/send things through the IPC?

                                                                                          From the wiki:

                                                                                          The delta distance feature is primarily useful for polling sources, like the mem-sense with a refresh clock. The screenshot below shows the alpha window picking up on a changing byte sequence that would be hard to spot with other settings.

                                                                                          Didn’t quite understand this example. Mem diff seems interesting in general.

                                                                                          For example, I have a program that changes a C variable’s value every second. Assuming we don’t go read the ELF header, how can senseye be used to find where that’s happening?

                                                                                          From another part of the wiki

                                                                                          and the distinct pattern in the point cloud hints that we are dealing with some ASCII text.

                                                                                          This could use some more explanation. How can you tell it’s ASCII from just a point cloud??

                                                                                          Minor questions/remark

                                                                                          Not urgent in any way

                                                                                          • Is there a way to start the process as a child so ./sense_mem needs fewer permissions?
                                                                                          • Is there a way to view registers?
                                                                                          Compiling

                                                                                          Compiling senseye without installing Arcan with cmake -DARCAN_SOURCE_DIR= still gives errors.

                                                                                          I think the first error was about undefined symbols that were in platform/platform.h (arcan_aobj_id and arcan_vobj_id).

                                                                                          I can try to get the actual error message again if that’s useful.

                                                                                          1. 2

                                                                                            Thanks, with these instructions I got it compiled and running. I had read the warning in senseye’s readme but forgot about it after compiling the other parts. I’m still stumbling around a bit, though that’s what I intended to do.

                                                                                            From the state you’re seeing it in, it is very much a research project hacked together while waiting at airports :-) I’ve accumulated enough of an idea to distill it into something more practically put together - but not quite there yet.

                                                                                            Is there a way to view/send things through the IPC?

                                                                                            At the time it was written, I had just started to play with that (if you see the presentation slides, that’s the fuzzing bit; the actual sending works very much like a clipboard paste operation). The features are in the IPC system now, though not mapped into the sensors.

                                                                                            So it looks like the default for sense_mem is to not interrupt the process. I’m guessing the intended method is to use ECFS to snapshot the process and view later. But I’m actually trying to live view and edit a process.

                                                                                            Yeah, sense_mem was just about covering the whole “what does it take to sample/observe process memory without poking it with ptrace” question. Those controls and some other techniques are intended to be bootstrapped via the whole IPC system in the way I talked about earlier. That should kill the privilege problem as well.

                                                                                            Didn’t quite understand this example. Mem diff seems interesting in general.

                                                                                            The context menu for a data window should have a refresh clock option. If that’s activated, it will re-sample the current page and mark which bytes changed. Then the UI/shader for alpha window should show which bytes those are.
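                                                                                            (Just to illustrate the idea - this is a toy sketch, not senseye’s actual code: re-sample, then mark which byte offsets changed between the previous and current snapshot.)

                                                                                            ```python
                                                                                            # Toy sketch of the refresh-clock diff: compare two samples of the same
                                                                                            # page and report the offsets of bytes that changed.
                                                                                            def diff_mask(prev: bytes, cur: bytes) -> list[int]:
                                                                                                """Offsets where the two samples disagree."""
                                                                                                return [i for i, (a, b) in enumerate(zip(prev, cur)) if a != b]

                                                                                            prev = bytes([0x00, 0x41, 0x42, 0x10, 0xFF])
                                                                                            cur  = bytes([0x00, 0x41, 0x43, 0x10, 0x7F])
                                                                                            print(diff_mask(prev, cur))  # → [2, 4]
                                                                                            ```

                                                                                            The UI/shader then only has to highlight those offsets in the alpha window.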

                                                                                            For example, I have a program that changes a C variable’s value every second. Assuming we don’t go read the ELF header, how can senseye be used to find where that’s happening?

                                                                                            The intended workflow was something like “dig around in memory, look at projections or use the other searching tools to find data of interest” -> attach translators -> get symbolic /metadata overview.

                                                                                            and the distinct pattern in the point cloud hints that we are dealing with some ASCII text. This could use some more explanation. How can you tell it’s ASCII from just a point cloud??

                                                                                            See the linked videos on “voyage of the reverser” and the REcon 2014 video of “cantor dust”, i.e. a feedback loop of projections + training + experimentation. The translators were the tool intended to make the latter stage easier.
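                                                                                            (A toy illustration of the underlying intuition, hypothetical code and not part of senseye: printable ASCII occupies only a narrow band of byte values, so in a byte-value projection text clusters where random data spreads across the whole range.)

                                                                                            ```python
                                                                                            # Printable ASCII (0x20-0x7E) covers only 95 of 256 possible byte values,
                                                                                            # so text data clusters in a narrow band of any byte-value point cloud.
                                                                                            def printable_fraction(data: bytes) -> float:
                                                                                                return sum(0x20 <= b <= 0x7E for b in data) / len(data)

                                                                                            print(printable_fraction(b"the quick brown fox"))       # → 1.0
                                                                                            print(round(printable_fraction(bytes(range(256))), 2))  # → 0.37
                                                                                            ```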


                                                                      3. 3

                                                                        If you are looking for references on debuggers then the book How Debuggers Work may be helpful.

                                                                      1. 5

                                                                        Finally a redesign! Looks great, too.

                                                                        1. 5

                                                                          Looks an awful lot like Slack with a different colour scheme.

                                                                          1. 1

                                                                            A different color scheme and a governing body that isn’t hell-bent on destroying anything standing in their way, I guess.

                                                                            1. 1

                                                                              Sure, I’m not exactly in love with Slack either. My point still stands: calling it a redesign is a bit strong.