1. 27

This is the weekly thread to discuss what you have done recently and are working on this week.

Please be descriptive and don’t hesitate to champion your accomplishments or ask for help, advice or other guidance.

  1.  

  2. 14

    Making the best of my paternity leave and starting a two-month bike trip with the whole family.

    See you all in August :-)

    1. 5

      Paternity leave is great.

    2. 10

      Continuing to work hard every day on my newsletter, Morning Cup of Coding. I thought that a newsletter would only involve reading and curating articles, but it’s almost a full-time job, what with replying to emails, fixing and automating workflows, taking action on feedback, etc. The very positive reviews make it worth it though <3

      Plus, I’m very happy that I am in talks with three authors whose work I really admire about collaborating with me. If anybody knows someone who would be interested in a collaboration, I would love to get in contact with them.

      1. 3

        Thanks for posting! Morning Cup of Coding looks awesome - I just subscribed.

        1. 1

          Thank you for subscribing!

      2. 9

        I just uploaded the complete first draft of Practical TLA+.

        Going to bike for a while. Then back to work on revisions.

        1. 3

          For those vaguely interested like me: hwayne seems to be the author of the great website https://learntla.com/ and is currently in the process of writing a book, Practical TLA+, with Apress. https://twitter.com/Hillelogram/status/950874673994137600

          Thanks for your work!

        2. 7

          BASIC programming and 6502 assembly, possibly with some COBOL.

          I’m nearing the point where I can start writing my next debugging book review which is themed around old-school debugging. I’ve read four books and have another couple in the pipeline. The advice in the books is, unsurprisingly, not totally applicable (although I love the hint for adding debugging code to your card stack: use different coloured cards for your debugging instructions!). So to try to understand the advice a bit better, I’ve been doing some old-school programming. BASIC is the easiest to work with since mainframe emulation is not a great time.

          That said, if anyone knows a decent mainframe emulator for 70s architectures, I’d be happy to check it out.

          1. 5

            SIMH and Hercules are the two main options I’m aware of for mainframe and older architecture emulators. I’ve run Tops-20 on simh before.

          2. 6

            I’m working with my supervisor and two of her other students on a paper for the upcoming MODRE 2018 workshop, comparing TLA+, B/Event-B, and Dash for declarative modelling and model checking.

            Here’s the abstract from the Dash paper I linked above:

            We present DASH, a new language for describing formal behavioural models of requirements. DASH combines the ability to write abstract, declarative transitions (as in Z or Alloy) with a labelled control state hierarchy (as in the Statecharts family of languages). The key contribution of DASH is the combination of explicit support for user-level abstractions that create and factor sets of transitions, such as state hierarchy, and the use of full first-order logic to describe the transitions.

            I’ve had past experience with TLA+ and Event-B, and am focused mainly on writing about TLA+ in the paper. I can say that the lack of a proper type system in TLA+ is very noticeable and at times very annoying. But Event-B’s simple type system has its own limitations as well. Regardless, both are very useful tools for modelling and writing specifications.

            1. 3

              That sounds neat. I look forward to reading the paper.

              I’m curious why Spin isn’t in there, though. It has had more industrial success than any other model-checker of its type, at least based on what I’ve read. It was used especially on protocols and hardware. One of the few comparisons between TLA+ and Spin had Spin’s analysis reach several times the depth in states that TLA+ did, in that case at that time. Having lots of users and positive results, it seems like Spin should be the default that all the others are compared to. If anything is better, then Spin users seeing that might be early adopters of the new method. Spin itself might adopt new techniques if they’re general enough for model-checking.

              Far as types in TLA+, I did submit this previously that tried to address it. Is that the kind of thing you’re looking for or something else?

              1. 2

                Thanks for the comment and kind words nick.

                As for why Spin isn’t there, mainly because as far as I’m aware, none of us students has had much experience with it, and since we were short on time for this deadline, we each decided to write about the tool we’re most familiar with. But thanks for mentioning/reminding me of it. I’ll bring it up with the rest of the group as well.

                As for types in TLA+, thanks for the link. I’d seen that paper by Merz before, but unfortunately haven’t gotten a chance to read it yet. It’s pretty high on my reading list though, especially since I was thinking of trying to write a type system for TLA+ the other day.

                1. 1

                  That makes sense. I can dig up the comparison or some case studies on it for you if you want. You can find a lot of that in ACM and IEEE, too.

                  1. 2

                    Thanks, I’d very much appreciate that :)

            2. 6

              Personal - ILTA is a volunteer organization for legal IT staff. Most law firms use similar technology stacks and we face a lot of the same issues. Last week an ILTA member at another firm asked about adding an autocorrection for an attorney’s name to all of their software systems. I have used the Windows Spell Checking API before, so I wrote a command-line program for adding autocorrections to the Windows dictionary. I also cleaned up my GitHub profile by deleting obsolete forks.

              Work - I spent most of this week on user support requests.

              1. 5

                We had nearly 200 submissions for 44CON this year, and as I run the CFP I decided that I’d write back to each submitter individually with feedback. I’ve got about 50 emails to go, and it’s been gruelling but it’s worth it. Everyone who submits a talk deserves thanks, honest feedback and some constructive tips.

                I’ll also be at WarCON later this week, so if anyone in Warsaw wants to hang out, say hello.

                1. 1

                  Now that I know about WarCON, I may arrange my next visit to Warsaw (probably a couple years away) to align with it, if possible.

                2. 5

                  Last week

                  • Fiddled with different ways of attaching to processes and viewing their states.
                  • Some other technical stuff that went well

                  This was for the low level debugger I’m trying to make.

                  So, from what I’ve read and seen, tools that attach to and inspect other processes tend to just use gdb under the hood. I was hoping for a more minimal debugger to read and copy.

                  lldb almost does what I need because of its existing external Python interface, but documentation for writing a stand-alone tool (started from outside the debugger rather than inside) is scattered. I haven’t managed to make it single-step.

                  Using raw ptrace and trying to read the right memory locations seems difficult because of things like address randomization. And getting more information involves working with even more memory mapping and other conventions.

                  I wish all these conventions were written down in some machine-readable, language-agnostic way so I don’t have to human-read each one and try to implement it. Right now it’s all implicit in the source code of something like gdb. That’s a lot of extra complexity which has nothing to do with what I’m actually trying to accomplish.

                  The raw ptrace approach would also likely only work on Linux, and possibly be strongly tied to C or assembly.

                  The problem with the latter is that eventually I will want to do this for interpreters written in C, or even interpreters written in interpreters written in C. Seems like even more incidental complexity that way.

                  An alternative is to log everything and have a much fancier log viewer after the fact. This way the debugged program only needs to emit the right things to a file or stdout. But this loses the possibility of any interactivity.
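                  A minimal sketch of that log-everything idea (all names here are made up for illustration): the debugged program emits one self-describing JSON line per event, and a viewer reconstructs the history after the fact:

```python
import json

def emit(log, event, **state):
    # The debugged program appends one self-describing JSON line per event.
    log.append(json.dumps({"event": event, **state}))

def replay(log):
    # The "fancier log viewer" parses the lines back after the fact.
    return [json.loads(line) for line in log]

log = []
emit(log, "call", fn="fib", arg=3)
emit(log, "return", fn="fib", value=2)

events = replay(log)
print(events[1]["value"])  # -> 2
```

                  In a real setup the log would go to a file or stdout instead of a list, but the shape of the problem is the same: interactivity is lost, while the viewer side becomes arbitrarily rich.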

                  Plus, all of this would only be worth it if I can get some state visualization customizable to that specific program (because usually it will be an interpreter).

                  Other questions: How to avoid duplicating the work when performing operations from “inside the program” and from “outside” through the eventual debugger?

                  Other ideas: Try to do this with a simpler toy language/system to get an idea of how well using such a workflow would work in the first place.

                  Some references

                  This week

                  • Well, now that I have a better idea of how deep this rabbit hole is, I need to decide what to do. Deciding is much harder than programming…
                  • Or maybe I should do one of the other thousand things I want to and have this bit of indecision linger some more.
                  1. 5

                    I wrote a very simple PoC debugger in Rust if you are interested in the very basics: https://github.com/levex/debugger-talk

                    It uses ptrace(2) under the hood, as you would expect.
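                    For anyone following along, the core trick behind setting a breakpoint by address is tiny: save the original byte at the target address, overwrite it with 0xCC (the x86 INT3 opcode), and on hit restore the byte and rewind the instruction pointer. A toy sketch of just that bookkeeping, operating on a plain buffer instead of a live process (a real debugger would do the reads/writes via PTRACE_PEEKTEXT/PTRACE_POKETEXT):

```python
INT3 = 0xCC  # x86 breakpoint opcode

def set_breakpoint(text, addr, saved):
    saved[addr] = text[addr]   # remember the original byte
    text[addr] = INT3          # overwrite with INT3

def clear_breakpoint(text, addr, saved):
    text[addr] = saved.pop(addr)  # restore the original byte

# Pretend this bytearray is the traced process's .text segment.
text = bytearray(b"\x55\x48\x89\xe5\x5d\xc3")
saved = {}
set_breakpoint(text, 0, saved)
assert text[0] == INT3
clear_breakpoint(text, 0, saved)
print(text[0] == 0x55)  # -> True
```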

                    1. 1

                      Thanks! I’ve had a look at your slides and skimmed some of your code (I don’t have Rust installed, or running it would be the first thing I’d do).

                      I see that you’re setting breakpoints by address. How do you figure out the address at which you want to set a breakpoint though?

                      How long did it take to make this? And can you comment on how hard it would be to continue from this point on? For example reading C variables and arrays? Or getting line numbers from the call stack?

                      1. 2

                        Hey, sorry for the late reply!

                        In the talk I was setting breakpoints by address indeed. This is because the talk focused on the lower-level parts. To translate line numbers into addresses and vice versa you need access to the “debug information”. This is usually stored in the executable (as described by the DWARF format). There are libraries that can help you with this (just as the disassembly is done by an excellent library instead of my own code).

                        This project took about a week of preparation and work. I was familiar with the underlying concepts, however Rust and its ecosystem was a new frontier for me.

                        Reading C variables is already done :-); reading arrays is just a matter of a new command and reading variables sequentially.

                        1. 1

                          Thanks for coming back to answer! Thanks to examples from yourself and others, I did get some stuff working (at least on the examples I tried), like breakpoint setting/clearing, variable read/write and simple function calls.

                          Some things from the standards/formats are still unclear, like why I only need to add the start of the memory region extracted from /proc/pid/maps if it’s not 0x400000.

                          This project took about a week of preparation and work. I was familiar with the underlying concepts, however Rust and its ecosystem was a new frontier for me.

                          A week doesn’t sound too bad. Unfortunately, I’m in the opposite situation: using a familiar system to do something unfamiliar.

                          1. 2

                            I think that may have to do with whether the executable you are “tracing” is a PIE (Position-Independent Executable) or not.
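                            A sketch of how that plays out (parsing a hard-coded /proc/pid/maps excerpt rather than a live one): for a PIE, the executable’s first mapping lands at a randomized base that must be added to link-time addresses, while a traditional non-PIE x86-64 binary is linked to load at the fixed 0x400000, so no offset is needed:

```python
# Hard-coded excerpt in /proc/pid/maps format (paths are invented).
SAMPLE_MAPS = """\
555555554000-555555555000 r-xp 00000000 08:01 131 /home/user/a.out
7ffff7dd3000-7ffff7dfc000 r-xp 00000000 08:01 99  /lib/ld-2.27.so
"""

def load_base(maps_text, exe_path):
    # The first mapping belonging to the executable gives its load base.
    for line in maps_text.splitlines():
        fields = line.split()
        if fields and fields[-1] == exe_path:
            return int(fields[0].split("-")[0], 16)
    return None

base = load_base(SAMPLE_MAPS, "/home/user/a.out")
print(hex(base))  # -> 0x555555554000
```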

                            Good luck with your project, learning how debuggers work by writing a simple one teaches you a lot.

                        2. 2

                          For C/assembly (and I’ll assume a modern Unix system) you’ll need to read up on ELF (object and executable formats) and DWARF (debugging records in an ELF file), which together contain all that information. You might also want to look into the GDB remote serial protocol (I know it exists, but I haven’t looked much into it).

                          1. 1

                            Well, I got some addresses out of nm ./name-of-executable but can’t peek at those directly. Probably need an offset of some sort?

                            There’s also dwarfdump I haven’t tried yet. I’ll worry about how to get this info from inside my tracer a bit later.

                            Edit: Nevermind, it might have just been the library I’m using. Seems like I don’t need an offset at all.
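                            For what it’s worth, the offset bookkeeping reduces to one line of arithmetic: nm on a PIE prints link-time offsets (relative to a base of 0), which the tracer must add to the runtime load base from /proc/pid/maps, while nm on a non-PIE prints absolute addresses usable as-is. A sketch with made-up numbers:

```python
def runtime_addr(nm_addr, load_base, pie):
    # PIE: nm prints link-time offsets; add the randomized load base.
    # Non-PIE: nm already prints the absolute runtime address.
    return load_base + nm_addr if pie else nm_addr

# PIE example: nm says a symbol is at 0x1139, maps says base 0x555555554000.
print(hex(runtime_addr(0x1139, 0x555555554000, pie=True)))   # -> 0x555555555139
# Non-PIE example: the address nm prints is usable directly.
print(hex(runtime_addr(0x401136, 0x400000, pie=False)))      # -> 0x401136
```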

                            1. 2

                              I might have missed some other post, but is there a bigger writeup on this project of yours? As to the specifics of digging up such information, take a look at ECFS - https://github.com/elfmaster/ecfs

                              1. 1

                                I might have missed some other post, but is there a bigger writeup on this project of yours?

                                I’m afraid not, at least for the debugger subproject. This is the context. The debugger would fit in two ways:

                                • Since I have a GUI maker, I can try to use it to make a graphical debugger. (Ideally, allowing custom visualizations created for each new debugging task.)
                                • A debugger/editor would be useful for making and editing [Flpc](github.com/asrp/flpc) or similar. I want to be able to quickly customize the debugger to also be usable as an external Flpc debugger (instead of just a C debugger). In fact, it’d be nice if I could evolve the debugger and target (=interpreter) simultaneously.

                                Although I’m mostly thinking of using it for the earlier stages of development. Even though I should already be past that stage, if I can (re)make it quickly, I’ll be more inclined to try out major architectural changes, and also to add more functionality in C more easily.

                                Ideally, the debugger would also be an editor (write a few instructions, set SIGTRAP, run, write a few more instructions, etc.; write some other values to memory here and there). But maybe this is much more trouble than it’s worth.

                                Your senseye program might be relevant depending on how customizable (or live-customizable) the UI is. The stack it’s built on is completely unknown to me. Do you have videos/posts where you use it to debug and/or find some particular piece of information?

                                As to the specifics of digging up such information, take a look at ECFS - https://github.com/elfmaster/ecfs

                                I have to say, this looks really cool. Although in my case, I’m expecting cooperation from the target being debugged.

                                Hopefully I will remember this link if I need something like that later on.

                                1. 2

                                  I have to say, this looks really cool. Although in my case, I’m expecting cooperation from the target being debugged.

                                  My recommendation, coolness aside, for the ECFS part is that Ryan is pretty darn good with the ugly details of ELF, and his code and texts are valuable sources of information on otherwise undocumented quirks.

                                  Your senseye program might be relevant depending on how customizable (or live-customizable) the UI is. The stack it’s built on is completely unknown to me. Do you have videos/posts where you use it to debug and/or find some particular piece of information?

                                  I think the only public trace of that is https://arcan-fe.com/2015/05/24/digging-for-pixels/ but it only uses a fraction of the features. The cases I use it for on about a weekly basis touch upon materials that are NDAd.

                                  I have a blogpost coming up on how the full stack itself maps into debugging and what the full stack is building towards, but the short short (yet long, sorry for that, the best I could do at the moment) version:

                                  Ingredients:

                                  Arcan is a display server - a poor word for an output control, rendering and desktop IPC subsystem. The IPC subsystem is referred to as SHMIF. It also comes with a mid-level client API, TUI, which roughly correlates to ncurses but with a more desktop-y featureset, and sidesteps terminal protocols for better window manager integration.

                                  The SHMIF IPC part that is similar to a ‘Window’ in X is referred to as a segment. It is a typed container comprised of one big block (video frame), a number of small chunked blocks (audio frames), and two ring buffers as input/output queues that carry events and file descriptors.

                                  Durden acts as a window manager (meta-UI). This mostly means input mapping, configuration tracking, interactive data routing and window layouting.

                                  Senseye comes in three parts. The data providers, sensors, have some means of sampling with basic statistics (memory, file, …), which gets forwarded over SHMIF to Durden. The second part is analysis and visualization scripts built on the scripting API in Arcan. Lastly there are translators: one-off parsers that take some incoming data from SHMIF, parse it and render some useful, human-level output, optionally annotated with parsing-state metadata.

                                  Recipe:

                                  A client gets a segment on connection, and can request additional ones. But the more interesting scenario is that the WM (Durden in this case) can push a segment as a means of saying ‘take this, I want you to do something with it’, and the type is a mapping to whatever UI policy the WM cares about.

                                  One such type is Debug. If a client maps this segment, it is expected to populate it with whatever debugging/troubleshooting information the developer deemed relevant. This is the cooperative stage; it can be activated and deactivated at runtime without messing with STDERR, and we can stop with the printf() crap.

                                  The thing that ties it all together: if a client doesn’t map a segment that was pushed on it, because it doesn’t want to or already has one, the shmif-api library can sneakily map it and do something with it instead. Like provide a default debug interface preparing the process to attach a debugger, or activate one of those senseye sensors, or …

                                  Hierarchical dynamic debugging, both cooperative and non-cooperative, bootstrapped by the display server connection - retaining chain of trust without a sudo ptrace side channel.

                                  Here’s a quick PoC recording: https://youtu.be/yBWeQRMvsPc where a terminal emulator (written using TUI) exposes state machine and parsing errors when it receives a “pushed” debug window.

                                  So what I’m looking into right now is writing the “fallback” debug interface, with some nice basics, like stderr redirect, file descriptor interception and buffer editing, and a TUI for lldb to go with it ;-)

                                  The long term goal for all this is “every byte explained”, be able to take something large (web browser or so) and have the tools to sample, analyse, visualise and intercept everything - show that the executing runtime is much more interesting than trivial artefacts like source code.

                                  1. 1

                                    Thanks! After reading this reply, I skimmed your latest post submitted here and on HN. I’ve added it to my reading list to consider more carefully later.

                                    I don’t fully understand everything yet but get the gist of it for a number of pieces.

                                    I think the only public trace of that is https://arcan-fe.com/2015/05/24/digging-for-pixels/ but it only uses a fraction of the features.

                                    Thanks, this gives me a better understanding. I wouldn’t mind seeing more examples like this, even if contrived.

                                    In my case I’m not (usually) manipulating (literal) images or video/audio streams though. Do you think your project would be very helpful for program state and execution visualization? I’m thinking of something like Online Python Tutor. (Its source is available, but unfortunately everything is mixed together and it’s not easy to just extract the visualization portion. Plus, I need it to be more extensible.)

                                    For example, could you make it so that you could manually view the result for a given user-input width, then display the edges found (either overlaid or separately) and finally, after playing around with it a bit (and possibly with other objective functions than edges), automatically find the best width as shown in the video? (And would this be something that’s easy to do?) Basically, a more interactive workflow.

                                    The thing that ties it all together: if a client doesn’t map a segment that was pushed on it, because it doesn’t want to or already has one, the shmif-api library can sneakily map it and do something with it instead.

                                    Maybe this is what you already meant here and by your “fallback debug interface”, but how about having a separate process for the “sneaky mapping”? That way SHMIF remains a “purer” IPC, but you can add an extra process in the pipeline to do this kind of mapping. (And some separate default/automation can be toggled to have it happen automatically.)

                                    Hierarchical dynamic debugging, both cooperative and non-cooperative, bootstrapped by the display server connection - retaining chain of trust without a sudo ptrace side channel.

                                    Here’s a quick PoC recording: https://youtu.be/yBWeQRMvsPc where a terminal emulator (written using TUI) exposes state machine and parsing errors when it receives a “pushed” debug window.

                                    Very nice! Assuming I understood correctly, this takes care of the extraction (or in your architecture, push) portion of the debugging.

                                    1. 3

                                      Just poke me if you need further clarification.

                                      For example, could you make it so that you could manually view the result for a given user-input width, then display the edges found (either overlaid or separately) and finally, after playing around with it a bit (and possibly with other objective functions than edges), automatically find the best width as shown in the video? (And would this be something that’s easy to do?) Basically, a more interactive workflow.

                                      The real tool is highly interactive - that’s its basic mode of operation; it’s just the UI that sucks, which is why it’s being replaced with Durden, which has been my desktop for a while now. This video shows a more interactive side: https://www.youtube.com/watch?v=WBsv9IJpkDw including live sampling of memory pages (somewhere around 3 minutes in).

                                      Maybe this is what you already meant here and by your “fallback debug interface”, but how about having a separate process for the “sneaky mapping”? That way SHMIF remains a “purer” IPC, but you can add an extra process in the pipeline to do this kind of mapping. (And some separate default/automation can be toggled to have it happen automatically.)

                                      It needs both; I have a big bag of tricks for the ‘in process’ part, and with YAMA and other restrictions on ptrace these days the process needs some massage to be ‘external debugger’ ready. Though some default of “immediately do this” will likely be possible.

                                      I’ve so far just thought about it interactively, with the sort-of goal that it should be, at most, 2-3 keypresses from having a window selected to digging around inside its related process, no matter what you want to measure or observe. The code (https://github.com/letoram/arcan/blob/master/src/shmif/arcan_shmif_debugif.c - not finished by any stretch) binds the debug window to the TUI API and will present a menu.

                                      Assuming I understood correctly, this takes care of the extraction (or in your architecture, push) portion of the debugging

                                      Exactly.

                                      1. 2

                                        Thanks. So I looked a bit more into this.

                                        I think the most interesting part for me at the moment is the disassembly.

                                        I tried to build it just to see. I eventually followed these instructions but can’t find any Senseye-related commands in any menu in Durden (global or target).

                                        I think I managed to build senseye/senses correctly.

                                        Nothing obvious stands out in tools. I tried both symlinks

                                        /path/to/durden/durden/tools/senseye/senseye
                                        /path/to/durden/durden/tools/senseye/senseye.lua
                                        

                                        and

                                        /path/to/durden/durden/tools/senseye
                                        /path/to/durden/durden/tools/senseye.lua
                                        

                                        Here are some other notes on the build process

                                        Libdrm

                                        On my system, the include flag -I/usr/include/libdrm and linker flag -ldrm are needed. I don’t know CMake, so I don’t know where to add them. (I manually edited and ran the commands make VERBOSE=1 was running to get around this.)

                                        I had to replace some CODEC_* constants with AV_CODEC_*.

                                        Durden

                                        Initially, Durden would not start without -p /path/to/resources, saying some things were broken. I can’t reproduce it anymore.

                                        Senseye
                                        cmake -DARCAN_SOURCE_DIR=/path/to/src ../senses
                                        

                                        complains about ARCAN_TUI_INCLUDE_DIR and ARCAN_TUI_LIBRARY not being found:

                                        CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
                                        Please set them or make sure they are set and tested correctly in the CMake files:
                                        ARCAN_TUI_INCLUDE_DIR
                                        
                                        Capstone

                                        I eventually installed Arcan instead of just having it built and reached this error

                                        No rule to make target 'capstone/lib/libcapstone.a', needed by 'xlt_capstone'.
                                        

                                        I symlinked capstone/lib64 to capstone/lib to get around this.

                                        Odd crashes

                                        Sometimes Durden crashed (or at least exited without notice), for example when I tried changing the resolution from inside.

                                        Here’s an example:

                                        Improper API use from Lua script:
                                        	target_disphint(798, -2147483648), display dimensions must be >= 0
                                        stack traceback:
                                        	[C]: in function 'target_displayhint'
                                        	/path/to/durden/durden/menus/global/open.lua:80: in function </path/to/durden/durden/menus/global/open.lua:65>
                                        
                                        
                                        Handing over to recovery script (or shutdown if none present).
                                        Lua VM failed with no fallback defined, (see -b arg).
                                        
                                        Debug window

                                        I did get target->video->advanced->debug window to run though.

                                        1. 2

                                          I’d give it about two weeks before running senseye as a Durden extension is in a usable shape (with most, but not all features from the original demos).

                                          A CMake FYI - normally you can patch the CMakeCache.txt and just make. Weird that it doesn’t find the header though, src/platform/cmake/FindGBMKMS.cmake quite explicitly looks there, hmm…

                                          The old videos represent the state where senseye could run standalone and did its own window management. For running senseye in the state it was in before I started breaking/refactoring things, the setup is a bit different and you won’t need Durden at all. Just tested this on OSX:

                                          1. Revert to an old arcan build ( 0.5.2 tag) and senseye to the tag in the readme.
                                          2. Build arcan with -DVIDEO_PLATFORM=sdl (so you can run inside your normal desktop) and -DNO_FSRV=On so the recent ffmpeg breakage doesn’t hit (the AV_CODEC stuff).
                                          3. Build the senseye senses like normal, then arcan /path/to/senseye/senseye

                                          Think I’ve found the scripting error, testing when I’m back home - thanks.

                                          The default behavior on a scripting error is to shut down forcibly, even if it could recover, in order to preserve state in the log output. The -b argument lets you set a new app (or the same one) to switch to and migrate any living clients to; arcan -b /path/to/durden /path/to/durden would recover “to itself”. Surprisingly enough, this can be so fast that you don’t notice it has happened :-)

                                          1. 1

                                            Thanks, with these instructions I got it compiled and running. I had read the warning in senseye’s readme but forgot about it after compiling the other parts.

                                            I’m still stumbling around a bit, though that’s what I intended to do.

                                            So it looks like the default for sense_mem is to not interrupt the process. I’m guessing the intended method is to use ECFS to snapshot the process and view it later. But I’m actually trying to live-view and edit a process.

                                            Is there a way to view/send things through the IPC?

                                            From the wiki:

                                            The delta distance feature is primarily useful for polling sources, like the mem-sense with a refresh clock. The screenshot below shows the alpha window picking up on a changing byte sequence that would be hard to spot with other settings.

                                            Didn’t quite understand this example. Mem diff seems interesting in general.

                                            For example, I have a program that changes a C variable’s value every second. Assuming we don’t go read the ELF header, how can senseye be used to find where that’s happening?

                                            From another part of the wiki

                                            and the distinct pattern in the point cloud hints that we are dealing with some ASCII text.

                                            This could use some more explanation. How can you tell it’s ASCII from just a point cloud?

                                            Minor questions/remarks

                                            Not urgent in any way

                                            • Is there a way to start the process as a child so ./sense_mem needs less permissions?
                                            • Is there a way to view registers?
                                            Compiling

                                            Compiling senseye without installing Arcan with cmake -DARCAN_SOURCE_DIR= still gives errors.

                                            I think the first error was about undefined symbols that were in platform/platform.h (arcan_aobj_id and arcan_vobj_id).

                                            I can try to get the actual error message again if that’s useful.

                                            1. 2

                                              Thanks, with these instructions I got it compiled and running. I had read the warning in senseye’s readme but forgot about it after compiling the other parts. I’m still stumbling around a bit, though that’s what I intended to do.

                                              In the state you’re seeing it in, it is very much a research project hacked together while waiting at airports :-) I’ve accumulated enough of an idea to distill it into something more practical - but I’m not quite there yet.

                                              Is there a way to view/send things through the IPC?

                                              At the time it was written, I had just started to play with that (if you see the presentation slides, that’s the fuzzing bit; the actual sending works very much like a clipboard paste operation). The features are in the IPC system now, though not yet mapped into the sensors.

                                              So it looks like the default for sense_mem is to not interrupt the process. I’m guessing the intended method is to use ECFS to snapshot the process and view later. But I’m actually trying to live view and edit a process.

                                              Yeah, sense_mem was just exploring the whole “what does it take to sample/observe process memory without poking it with ptrace?” question. Those controls and some other techniques are intended to be bootstrapped via the IPC system in the way I talked about earlier. That should kill the privilege problem as well.

                                              Didn’t quite understand this example. Mem diff seems interesting in general.

                                              The context menu for a data window should have a refresh clock option. If that’s activated, it will re-sample the current page and mark which bytes changed. Then the UI/shader for alpha window should show which bytes those are.
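
That re-sample-and-mark idea is simple enough to sketch. Here is a rough Python illustration of just the diffing step (my own toy code for the concept, not senseye’s actual implementation):

```python
def diff_page(prev: bytes, curr: bytes) -> list[int]:
    """Return the offsets of bytes that changed between two samples of a page."""
    return [i for i, (a, b) in enumerate(zip(prev, curr)) if a != b]

# Two fake 8-byte samples of the same page: bytes 1 and 5 changed.
prev = bytes([0x00, 0x41, 0x42, 0x43, 0x00, 0x10, 0x00, 0x00])
curr = bytes([0x00, 0x7F, 0x42, 0x43, 0x00, 0x11, 0x00, 0x00])
print(diff_page(prev, curr))  # -> [1, 5]
```

The UI/shader part then only needs that offset list to highlight the changed bytes.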

                                              For example, I have a program that changes a C variable’s value every second. Assuming we don’t go read the ELF header, how can senseye be used to find where that’s happening?

                                              The intended workflow was something like “dig around in memory, look at projections or use the other searching tools to find data of interest” -> attach translators -> get symbolic /metadata overview.
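
One common way to do the “dig around in memory” step for a value that changes on a clock is successive snapshot filtering (the scanmem/Cheat Engine approach, rather than anything senseye-specific): intersect the set of changed offsets across snapshots until only a few candidates remain. A toy Python sketch over fake snapshots:

```python
def narrow(candidates: set[int], prev: bytes, curr: bytes) -> set[int]:
    """Keep only the offsets whose byte changed between two memory snapshots."""
    return {i for i in candidates if prev[i] != curr[i]}

# Three fake snapshots of a 6-byte region; only offset 2 changes every time.
snaps = [bytes([9, 5, 1, 7, 7, 0]),
         bytes([9, 5, 2, 7, 7, 0]),
         bytes([9, 5, 3, 7, 6, 0])]

cands = set(range(len(snaps[0])))
for prev, curr in zip(snaps, snaps[1:]):
    cands = narrow(cands, prev, curr)
print(cands)  # -> {2}
```

With a once-a-second refresh clock, a handful of iterations is usually enough to isolate the variable’s address.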

                                              and the distinct pattern in the point cloud hints that we are dealing with some ASCII text. This could use some more explanation. How can you tell it’s ASCII from just a point cloud?

                                              See the linked videos on “voyage of the reverse” and the REcon 2014 video of “cantor dust”, i.e. a feedback loop of projections + training + experimentation. The translators were the tool intended to make the latter stage easier.
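
To make the ASCII-in-a-point-cloud intuition concrete: one classic projection plots consecutive byte pairs as 2D points. Printable ASCII is confined to 0x20-0x7E, so text clusters into a small square of that plane while random data spreads everywhere. A rough Python illustration of that separation (my own toy metric, not a senseye feature):

```python
import random

def in_ascii_cluster(data: bytes) -> float:
    """Fraction of consecutive byte pairs landing in the printable-ASCII square."""
    pairs = zip(data, data[1:])
    hits = sum(1 for x, y in pairs if 0x20 <= x <= 0x7E and 0x20 <= y <= 0x7E)
    return hits / (len(data) - 1)

text = b"The quick brown fox jumps over the lazy dog" * 10
rand = bytes(random.randrange(256) for _ in range(430))
print(in_ascii_cluster(text))  # -> 1.0 for pure text
print(in_ascii_cluster(rand))  # typically around (95/256)**2, i.e. ~0.14
```

After enough time staring at such projections, the tight square reads as “probably text” at a glance, which is the trained-eye feedback loop the videos demonstrate.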

                                          2. 1

                                            I’d give it about two weeks before running senseye as a Durden extension is in a usable shape (with most, but not all features from the original demos).


                        3. 3

                          If you are looking for references on debuggers then the book How Debuggers Work may be helpful.

                        4. 5

                          work: In the process of ‘packing up’ because I am leaving after 1.5 years as a systems engineer. I haven’t decided on a new job yet and am going to enjoy a bit of unemployment during June :)

                          fun: Trying to get up to speed with Rust, because in my niche other languages with good type systems, like Haskell, aren’t popular enough. Go is really gaining traction here, but I don’t enjoy it quite as much.

                          The project itself is a server implementation of the Music Player Daemon (MPD) protocol which just forwards commands to mpv.

                          1. 2

                            Good luck on next move!

                          2. 4

                            I’m thinking I’ll make a start on some kind of federated reddit-a-like

                            1. 3

                              Debugging several rare bugs of open source cache server nuster reported by community

                              1. 3

                                Continuing to work on several video courses for Packt Publishing, as well as trying to get the 0.3 release of my cross-platform, minimal UI toolkit (based on libui) out the door.

                                1. 3

                                  For $CLIENT:

                                  • As of this past weekend, all environments’ DBs are completely migrated off of RDS 🎉, so I’m back to more ‘regular’ work on that front - mostly bug fixes, some planning for further HA improvements.

                                  For $COMPANY:

                                  • I didn’t quite get to an RC1 for Mallard last week, but it’s very close now, so that should land this week, if you’ll pardon the pun.

                                  For $HOME:

                                  • We’ve accepted that half the fish-pond is too shallow to sustain fish big enough not to be eaten by frogs/tadpoles, and it’s a PITA to maintain to boot. So now I need to ‘remove’ half a fish pond without destroying the other half, which still has fish in it.
                                  1. 3

                                    Continuing to look for a new gig. If you happen to need a full-stack dev who has a lot of experience (especially with client-side / JS over the years) and are remote or in Richmond VA, let me know ;) (https://www.linkedin.com/in/nickjurista/)

                                    At work, I’m going to be working to complete a content migration. Mostly aligning a ton of teams on testing their own content so we can get this out to production sooner than later.

                                    1. 3

                                      Working on integrating mining into our new Merit Wallet Electron app. I built a new library to mine Merit and a Node module to integrate it into our Electron app. This isn’t my first time integrating C++ into a scripting language; however, it is my first time integrating with Node.

                                      Does anyone know the best way to package the dynamic libraries with the Electron app? It isn’t clear from the docs where to put them.

                                      1. 3

                                        Continuing tinkering on my filesystem driver. It’s looking more and more like I’m going to have to copy generic_file_read/write_iter and mpage_read/writepages wholesale and modify them to correctly deal with file data not being block-aligned. I’m not looking forward to this, as it means keeping track of changes to these functions upstream and porting them back over.

                                        1. 3

                                          Last week I put together the bare bones of a page/feed that aggregates information about developer competitions/challenges somewhat related to “dApp” development and blockchain. Link

                                          If any of you know of more events that aren’t listed, or related sources of information, I would appreciate it.

                                          This week I should make a few improvements to this page in my free time, and will also do a bit of work on some open-source projects I contribute to. So it seems it’s “maintenance week” :P

                                          1. 3

                                            Trying to stay sane while working through a massive merge at work. The things I have seen…

                                            1. 2

                                              Just back from a holiday and I have a couple of weeks of C++ classes to catch up on. If I have time, I’d also like to stick my nose into this year’s BSides London challenges, as they’re always fun.

                                              At work I’ll be further building out our Kubernetes install. I’m also looking into getting involved in the documentation SIG so I can hopefully give a little back to the community.

                                              1. 2

                                                Not really something I’m working on but I am excited to attend GopherCon Iceland. So, if anybody wants to meet please leave me a message!

                                                1. 2

                                                  I’ve grown tired of having to wait for a long command in a deep SSH (==SSH in SSH in SSH) session, so I’m writing a webservice that helps me solve this:

                                                  ./longcmd && termnotify lkurusa-mbp "Done" will send a notification to my computer named lkurusa-mbp even if executing on a very far machine.

                                                  I chose Elixir and the Phoenix framework for this task.

                                                  1. 2

                                                    How does this command call your notebook? Does your notebook poll something, are you exposing an internet-reachable service or something else entirely? And is there some kind of authentication?

                                                    1. 2

                                                      Yes, it’s a publicly exposed service. I plan to make it available for everyone For As Long As Possible (tm). The client software on all the machines keeps polling (once every second) their respective UUID endpoints: /check/:uuid.

                                                      It’s easy to add a new machine, just GET /create and it sends you back a UUID, then on the client machine you can:

                                                      nohup termnotify listen $uuid

                                                      To send a notification to it:

                                                      termnotify new-machine "my-new-fancy-machine" $uuid && termnotify my-new-fancy-machine "hello there"

                                                      There is no form of authentication for now; as long as the UUID is kept secret it’s going to be hard to “hack”. Additionally, it’s easy to generate a new UUID.

                                                      The client side will use the respective notification-sending API of the platform it’s running on; for now I only support Linux with notify-send.
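
Based on the protocol described above, a minimal listening client could look something like this Python sketch (the base URL is hypothetical, and the real termnotify client may well work differently):

```python
import subprocess
import time
import urllib.request

BASE = "https://termnotify.example.com"  # hypothetical base URL


def notify_cmd(title: str, body: str) -> list[str]:
    """Build the notify-send argv used to show a desktop notification on Linux."""
    return ["notify-send", title, body]


def listen(uuid: str, interval: float = 1.0) -> None:
    """Poll /check/:uuid once per interval; fire a notification when a message arrives."""
    while True:
        with urllib.request.urlopen(f"{BASE}/check/{uuid}") as resp:
            msg = resp.read().decode().strip()
        if msg:
            subprocess.run(notify_cmd("termnotify", msg), check=True)
        time.sleep(interval)
```

Swapping notify_cmd for the platform’s native notification API would be the natural extension point for macOS/Windows support.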

                                                  2. 2

                                                    I’m going to be adding location services to power a map view on a mobile app built on React Native, and building out the corresponding backend support using PostGIS (which I haven’t used before, so it’ll be a nice learning experience).

                                                    In my personal projects I’m hoping to find time to make some progress on a website for a charity my parents are involved with. I’m using GatsbyJS and need to find a way to make it easy for non-technical people to edit, so I’m currently looking at Netlify CMS and Contentful as possible solutions.

                                                    1. 2

                                                      Starting work on a text classifier and named entity recognition.

                                                      1. 2

                                                        What tech stack?

                                                        I’ve used StanfordNLP’s NER on a project previously (we literally just needed NER and some date recognition, no sentiment/etc) and while we got it to work, the amount of work required to get it to a usable stage felt like overkill - it didn’t help that I had to delve back into Java to get a usable HTTP interface for it.

                                                        1. 2

                                                          If you’re looking for something better and non-Java (with a more permissive license), I recommend checking out spaCy - https://spacy.io

                                                          The API is a pleasure to work with, and lots of really good NER comes with the pretrained models.

                                                          1. 1

                                                            It was more of a general curiosity than a current requirement, but thanks for the reference.

                                                          2. 1

                                                            I am doing NER with spaCy and classification with TensorFlow. I am also experimenting with Prodigy (prodi.gy), a tool developed by the same people as spaCy that offers an easy interface to work with. For now I still have some issues with my own word vectors (4M words): I get some buffer overflows that I do not yet understand.

                                                        2. 2
                                                          • Monitoring ESCI Online service for campus.
                                                          • Set up GitLab CI runners.
                                                          • Implement a simple ESCI “at a glance” dashboard, hopefully using the PHP Razorflow dashboard library.
                                                          • Debug pre-live issues on American Presidency Project
                                                          1. 2

                                                            Wrapping two legacy embedded system projects into virtual machine images for future preservation.