
What are you doing this week? Feel free to share!

Keep in mind it’s OK to do nothing at all, too.

    1. 18

      Last weekend I went outside at night to capture photos of M31/Andromeda. After taking 102 photos and stacking them together (with a bunch of flat, bias, and dark frames), I ended up with this: https://twitter.com/YorickPeterse/status/1165614942218805248.

      This week there are two things I will be working on:

      1. For Inko I really need to get some work done on porting the parser to Inko itself. I don’t really like writing parsers, so I have been slacking off a bit.
      2. Figuring out what exactly I would need to do to obtain higher quality images of Andromeda. I probably won’t go out again this weekend as I have other activities planned, but I want to at least be prepared for the next time.
      1. 1

        Wow! That’s frickin’ awesome! Love to hear more about this as you go along.

        1. 2

          In this case I think it took a total of three hours, including setting things up. I was initially going for a total exposure time of around 1 hour. Sadly, for some reason my camera refuses to take more than 10-or-so shots when using interval shooting, even when telling it to shoot 100 photos. This meant a lot of back and forth between my chair and the camera, accidentally messing up the focus in the process, etc.

          The post processing took around three hours, most of which was spent reading articles about how to do X, Y, Z in GIMP, as most tutorials assume you are using Photoshop or other software not (properly) available on Linux. The process of stacking photos is largely automated using this set of scripts, followed by some manual tweaking of colors, sharpness, etc.

          1. 2

            The last time I made an attempt, I bought an app that allowed me to take shots from my tablet. That was a big time-saver.

            I haven’t done the stacking yet. So many cool things to do, so little time, you know?

      2. 1

        What’s your astrophotography setup look like? (I see the postprocess comment to Daniel below, but what kind of camera/scope/etc do you have?)

        I recently got a pretty nice Dob (big upgrade from my Walmart special) and have been thinking about trying some astrophotography (I’m a short drive from a lot of pretty dark areas to shoot from). I figure on starting with just a phone mount, if only because the space is so big and so full of rabbit trails to chase that I haven’t been able to get a foothold on what I actually need. :D

        1. 1

          I am using the following setup:

          • Scope: William Optics Zenithstar 61 + WO Field Flat61 field flattener
          • Mount: Star Adventurer Pro
          • Tripod: some 10-ish year old (but still decent) Vanguard tripod, without the pan/tilt head
          • Camera: Nikon D700, unmodified
          • Binoculars: Nikon A211 10x50, mostly used for plotting a course for my telescope as I don’t have a goto mount

          The total kit (including tripod and counterweight) weighs around 5 kg, so it’s quite portable. This is important as I do not own a car. The cost (excluding camera and tripod) was around €1200, which for astrophotography is quite affordable.

      3. 1

        Nice work! My two telescopes are both Dobs (that I built a while back), and I experimented briefly with astrophotography by taking high-resolution videos of objects as they move across the FOV in the scope (since the Dob mounts are alt/az and don’t track). It’s OK for planets and other bright objects, but I have never tried capturing something as faint as Andromeda. I may have to give that a try soon (assuming I remember how to do all that; it was a fairly involved process, as you allude to).

    2. 9

      #NESdev dev diary: over the weekend I made some nice initial progress on my NES tile editor, which I’ve decided to name “tilerswift”:

      https://mathstodon.xyz/@JordiGH/102677864715096011

      http://inversethought.com/hg/tilerswift/file/tip/tilerswift

      Qt is kind of nice! And without having to use it via C++ it’s even nicer.

      But now I’m kind of stuck, since my current approach has made it slow and unworthy of the name I christened it with. Someone on IRC suggested I rethink the layout with QGraphicsGridLayout; I don’t know if that will help. It appears that keeping the ~16k individual tiles of a single ROM as raster images and updating them all whenever I change the palette isn’t the best idea. If there are any Qt experts out there, I would love your help.


      On a different note, at work I’m facing a different problem: I need to figure out how to make Postgres do a query over an M2M relation. I’m trying to find all activities that have specific kinds of factors. Each side of the relation isn’t that big,

      jordi=> select count(*) from emissions_activity;
       count 
      -------
       68591
      (1 row)
      
      jordi=> select count(*) from emissions_factor;
       count  
      --------
       178035
      (1 row)
      

      but the through table is kind of huge:

      jordi=> select count(*) from emissions_activity_factors;
        count   
      ----------
       68600443
      (1 row)
      

      and Postgres on my laptop has a helluva time doing a simple SELECT across the through table. Our servers in AWS aren’t that much more powerful.

      I only need to do this query across the M2M a couple of times during a data migration, so I can accept some slowness, but I still am curious to know if there’s some magic I could do to improve the situation.

      1. 2

        A long time ago I had a similar issue. I ended up building a bunch of materialized views that cached the parts of the query that were relatively easy to produce once, and then kept them up to date with triggers during the move. I basically had a situation where we full-joined an M2M into one big result that was roughly similar in size (I think it was in the 50M-row range), filtered off all the intermediate tables, and then just kept everything up to date with triggers until the migration was complete. It was a nightmare to write, but honestly everything was (that’s why we were migrating); since we were porting from MySQL to PG, it was at least easy to keep all the streams separate.
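
        In Postgres a materialized view itself can only be refreshed wholesale, so the trigger-maintained variant usually ends up being a plain table. A rough sketch of that pattern (all names, and the kind column, are hypothetical):

        -- One-off: flatten the M2M into a plain "materialized" table.
        create table activity_factor_flat as
        select af.emissions_activity_id as activity_id,
               f.id as factor_id,
               f.kind
        from emissions_activity_factors af
        join emissions_factor f on f.id = af.emissions_factor_id;

        -- Keep it current as new links are inserted during the move.
        create function activity_factor_flat_sync() returns trigger
        language plpgsql as $$
        begin
          insert into activity_factor_flat (activity_id, factor_id, kind)
          select new.emissions_activity_id, f.id, f.kind
          from emissions_factor f
          where f.id = new.emissions_factor_id;
          return new;
        end;
        $$;

        create trigger activity_factor_flat_sync
        after insert on emissions_activity_factors
        for each row execute procedure activity_factor_flat_sync();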

        The other idea I had was just to rent a really big ass server with flames on the sides, but we didn’t really have it in the budget for the amount of time we’d need and we weren’t sure it would work.

      2. 2

        A table scan over 68 million rows is going to take a little bit of time.

        If you have an index on (emissions_factor_id, emissions_activity_id) (in that order), the following should be much quicker (assuming you are working with a small enough subset of the factors):

        -- Table names matched to the counts shown above.
        select *
        from emissions_activity
        where id in (
            select emissions_activity_id
            from emissions_activity_factors
            where emissions_factor_id in (
                select id
                from emissions_factor
                where <conditions snipped>
            )
        )
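
        For reference, the supporting index itself would be something like (a sketch, using the column names assumed above):

        create index on emissions_activity_factors (emissions_factor_id, emissions_activity_id);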
        
    3. 7

      Giving PostgREST a try. Need to provide a simple JSON API from a Postgres-backed system, we’ll see how this goes.
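
      If anyone else wants to try it: judging from the PostgREST tutorial, the database-side prep is roughly the following (a sketch; the api schema and web_anon role names are assumptions borrowed from their docs — PostgREST then serves whatever this role is allowed to see):

      create schema api;
      create role web_anon nologin;
      grant usage on schema api to web_anon;
      grant select on all tables in schema api to web_anon;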

    4. 7

      This is a long update…

      I’ve managed to make at least a little progress on the Z80 every day. All my notes (including TODOs, a log of what I’ve been learning, parts inventories, design notes, links, etc.) are in a single org file in a repo with datasheets, schematics, firmware, etc. That doesn’t translate well to a blog format, though, and publishing the org file as an HTML file (which I do, for my own use) is pretty ugly; I guess I could make it look nice, but that’s time I could be spending building a computer, so it’s not going to happen.

      The major updates since last week:

      • I’ve tried quitting, but I’m still using Eagle. Every time I use KiCad, I last about 10 minutes before rage-quitting and going back to Eagle. A large part of this is that I’m pretty proficient with Eagle, including adding parts where they don’t exist in the libraries.

      • I’ve decided on a Eurocard format for the I/O board and mainboard; the power board is relatively simple and thus small. The two boards will use an IDC40 interconnect, which would also give me the option of stacking boards if I felt so inclined. Doing it this way lets me work around some of the limitations in Eagle that were holding me back.

      • I found the MCP23008 and MCP23017 - they’re 8- and 16-bit I2C I/O expanders, which will make life a lot easier, I think.

      • I found a display module with a 5.7” 320x240 LCD. It has an LCD controller (an RA8835) that I’ll probably drive with an I/O expander.

      • I came up with a first pass at an ATmega328-powered serial I/O module, but then decided to start designing something around the ATmega1284 to handle I/O in general. This would be my version of the Z80 PIO; at first I didn’t like this idea and wanted to do everything with the Z80 itself, but then I realized this is basically how everyone who makes computers does it. It’s not incompatible with my goal of a system that I can more or less completely understand, so I’m going forward with it.

      • Instead of coming up with some crazy dual clock system to support single stepping, I realised I could just use an Arduino and a push button; I also switched to an oscillator to save space and avoid having to add the extra bulk of the crystal + resistors + 7400.

      • There was a lot of physical layout prototyping, mostly by way of paper cutouts in the shapes of the dev boards and LCDs. I’ve discovered that the paper notebook I’m tracking all of this in is pretty close to the perfect size: 25x19cm (9.8x7.5in). The biggest unknown right now is the keyboard (more on that later). This also led to me messing around with FreeCAD (I think I can get by with 3D printing the case) and realising I have no clue what I’m doing. I did a little bit of SolidWorks in university as part of a class, but that was over a decade ago. I’m particularly not looking forward to figuring out the hinge.

      • I did a lot of thinking about the keyboard; I could build my own using a 16-bit I/O expander and an 8-bit I/O expander. While looking around on eBay for keyboard switches, I found a replacement IBM 5140 keyboard that I’m going to see if I can get to fit.

      • I realised that the SD card is going to require a 3.3V power rail, so I added an appropriate regulator to the power supply. Right now it’s an LM1117, but I’m looking into whether I can get by with an MCP1700 regulator; it’s smaller and should produce less heat, but it can only supply 250mA. I think that should be enough, considering I’m only using it for signaling and not driving anything that needs a ton of current, but I’d like to be sure. I have a few ideas about doing full-disk encryption using some Atmel secure memory chips (I’ve used the AT88SC in the past, but I’m sure there are newer/better options); another idea is to add some basic cryptographic primitives to the I/O firmware, e.g. as an I/O device. That’s all pretty far down the road.

      • In a memory layout writeup, I’d mentioned that the outputs need to be buffered. I added in some 74545 line drivers for the address pins (a pair on the mainboard and a pair on the I/O board) and a 74245 bus transceiver for the data pins (again, on both boards); it looks like it works out such that tying the direction pin on the transceiver to the Z80’s works as intended. I’m still waiting on parts to arrive, so I haven’t been able to physically verify this.

      • I finished a preliminary schematic (sheet 1, sheet 2) and board layout for the main board; this has the Z80, associated support hardware, and the memory. I’ve got a checklist of things to verify before I have it fabricated, as the boards will cost a decent chunk of money and I’d like to get the design right before sending it off. This is also several orders of magnitude more complex than any board I’ve previously done, so I did the layout using the autorouter; eventually I’d like to hand-route it, but that’s going to take an incredible amount of time. I’m also not happy with the large number of vias; I can’t do 4-layer boards in my EDA software, which would help (while doubling the cost of the board). I don’t have experience designing a board this complex and I don’t really have a community of hardware engineers I can ask, so I don’t have a good intuition for how this will affect the system. I’m trying to read up a lot on the subject to make up for that.

      • I’ve also been thinking about moving the clock to the power board to support using an external clock for debugging. The power board is pretty bare right now, so it’d simplify things somewhat. Again, I’m not sure of the implications or impact of doing this.

      • I was supposed to have a book on Z80 assembly programming show up this week, but it appears to have been lost in the mail. I have some others on the way; I’ll start on those whenever they get here.

      • I bought a logic probe (~$20) to help debugging, and I’m unreasonably excited to use it.

      So the project is coming along; I’ll get some dev boards this week to verify specific subsystems and I’m going to keep working on designing the I/O board. My EEPROM programmer Arduino shield should arrive from OSHPark late this week or early next week, and there’s firmware to be written for that in the meantime.

      I also bought an RC2014 that should show up sometime this week, so I can start tinkering with some Z80 code.

      1. 2

        Awesome project. I’d say more but it’s late here and I’ve a QFT exam in the morning.

        On the software side of things, are you planning on writing everything yourself? If not, you could try porting CP/Mish, an “open source sort-of-CP/M distribution”.

        1. 1

          The plan so far is to write as much as I can myself, mostly because it’s an interesting problem and something I’ve always wanted to do. I have the RC2014 to experiment with CP/M; since I’ve got the ability to select 8K banks of ROM at boot time, I might try porting CP/Mish over at some point.

          Good luck on your exam!

    5. 6

      I’m just doing stuff with terraform. I think I figured out a way around something annoying, but it’s horrible. I’m effectively trying to set 10 log metric alert filters, but all of them are nearly identical. My solution is going to be making a terraform module for an individual rule and then another one for the set of 10, then I’ll include that ruleset module in the resulting change. It’s gonna require three pull requests and two releases of the modules repo. Yay.

      Terraform is suffering as a service.

      1. 1

        Are you using Terraform 0.12? I tried making fine-grained modules in Terraform 0.11 at one point and it was tortuous, but it looks like a lot of the pain points have gotten much better in 0.12, particularly in that you can now pass more complex data structures to and from modules.

        1. 1

          I wish.

      2. 1

        Have you considered using a different language to output JSON to feed into Terraform? That’s what we did at my last job (with Nix as the language, but only because we were already using it for everything else).

    6. 5

      I have decided to learn Haskell. I made a first attempt at it many years ago and learnt a little bit, but I parked it for some time while I was exploring other languages: C++, CL, OCaml, Clojure, etc.

      Any suggestions as to how I may go about doing that are most welcome.

      1. 2

        I found this book a while back; it takes a bit more of an independent approach compared to most other books or articles I’ve read, which seem better suited to a programmer who already has experience with computational thinking, just not with a functional approach. Seeing your background, though, I don’t think that would be much of a problem. There are PDFs of the book floating around on the internet somewhere, if you don’t feel like buying it, or so I have heard.

        1. 1

          Hey @zge, thanks a lot for the suggestion! The book looks great. I remember stumbling upon this website some time ago. Definitely something I can consider getting.

          I blew the book budget a few days ago when I bought The Art of PostgreSQL, so I’ll probably exhaust the free resources first. I read 5-6 chapters of LYAHFGG back in the day, so I’ll continue on from there and maybe move on to Real World Haskell.

          1. 2

            “Real World Haskell” was good too, but I personally found the practical example chapters too boring to read – skipping them necessarily took a toll on what I could learn, but whatever ^^ Otherwise it’s certainly worth your time.

      2. 2

        I learned a lot from working through most of the https://github.com/data61/fp-course/, and I found the #haskell IRC channel more helpful than asking on e.g. StackOverflow.

        1. 1

          Thanks, checking it out!

    7. 5

      Holiday is over, back to work. We have a new machine with two RTX 5000s and plenty of memory/cores. Our project was running against the limits of our Tesla M60 (dual GPU) and Tesla K20c (an oldie, still good for training feed-forward networks or lighter RNNs). So, there is ample capacity again for new experiments. We also have two new Radeon VIIs, but these are still being set up.

      I have a local Hydra instance set up to automatically test models on git commits using some Nix, but I still have to deploy that properly on a work server (which requires some hoop-jumping due to CentOS 7, storage via NFS, etc.).

      Also writing up some recent work.

      1. 1

        Any more info on the lab setup or development process? Sounds like you are up to some interesting model creation?

        1. 2

          Any more info on the lab setup or development process?

          It is mostly quite boring. We use GitHub, do PRs + reviews for everything, and use Travis CI for CI. Some projects are built using Nix in CI because it is easier to pull in all the dependencies (libtensorflow, the Python Tensorflow module, various Rust versions, etc.). The team consists of more senior staff, PhD students, and students. I am very happy that we have been able to grow a team that is both strong scientifically and also has a good Rust and Tensorflow background. Some public projects are e.g. our Rust packages for training and using embeddings:

          and a neural sequence labeler:

          https://github.com/danieldk/sticker/

          Sounds like you are up to some interesting model creation?

          Some recent work:

          And a lot more stuff cooking ;).

          1. 1

            Excellent!

    8. 4

      Travelling to North Carolina for a conference and user meeting for the Android software we develop a plugin for.

      Outside of that I’d like to get some reading done (brought “Implementations of Prolog”) and explore the town of Pinehurst.

    9. 4

      I have a proposed metric called Cognitive Load that I’ve been trying to avoid writing about for about a year now. It’s computable and I believe it has a lot of value, but it’s going to end up being a heuristic, like the Drake Equation: a placeholder for more terms and variables as we learn more empirically.

      Why the delay? The usual response from the community (not lobsters, thought-leader wieners) is crickets and silence, plus, while really cool, like the Drake Equation it’s just a bunch of numbers all thrown together. Drake was very useful in that it started us thinking about how vast and almost infinite life probably is. Similarly, I think this is going to be useful by starting us thinking about the vast and almost infinite amount of future work we’re creating when we code apps using various styles.

      It’s been long enough. Time to write it.

      1. 2

        Is it anything like Cognitive Load? If not, it might need a new name since that’s so well-established.

        1. 2

          Yes. That’s exactly what it is. Symbols I have to manipulate right now, symbols I have to remember, symbols that are part of the environment.

          I didn’t go back and tie it all together. I’m not a scientist and I figured the community would be much better at thrashing it out once people started discussing it.

          The general CL stuff is fine. Of course, it’s not meant for coding. So the strategy right now is to leave the name the same, then figure out the delta and decide. One of the reasons I delayed so long is that there are a lot of moving pieces here, various sciences and experts who have all decided what these things are. Doing a mashup is always going to be a PITA. If I decided to call it something besides Cognitive Load, it would never get the attention it needs to continue development.

      2. 1

        I’d like to read it. Is the metric specific to code styles or more general?

        1. 1

          Thanks! It’s a language/tool/platform independent way of objectively measuring the overall work it takes to maintain your solution. Should work for anything that people code and maintain. I have straight terms being multiplied, a la Drake. I imagine the “real” answer will be somewhat different terms and curves, but it’s a start.

          If it’s okay, I’ll post here once I get a draft nailed down.

    10. 4

      Messing with AWS API Gateway and Lambda

    11. 4

      Improve my personal security.

      I have changed all my passwords to diceware-derived passwords, printed my private key and put it into a ziplock bag and then into a small lockable case, and created another pub/priv key pair, signed by my original one, that is buried underground (along with the original’s revocation key) in case I ever have the original stolen.

      All my passwords are stored in individual PGP-encrypted files via pass, which are then backed up to remote sources. I was about to re-invent this before someone told me about it.

      I want to buy another Trezor and create custom firmware so it’s a specialized pgp device (all signing happens on-device).

      I hope to continue to improve my personal pantry tracker. It’s a one-file system: database, UI, and backend logic all included. I’m hoping it’ll inspire some people to create similar services.

      1. 1

        I hope to continue to improve my personal pantry tracker. It’s a one-file system: database, UI, and backend logic all included. I’m hoping it’ll inspire some people to create similar services.

        This sounds interesting, can you elaborate on what it does and how?

        1. 1

          You will see more as the weeks come. :)

    12. 4

      Leaving town for a short trip to the Baltic Sea. Fortunately I’m traveling by train, which nudges me to leave my laptop at home, and I’m staying in a valley affected by the laziness of German mobile network operators, so there’s no stable mobile data to use the Internet anyway. I’ll get some rest and start relaxed into the latter half of 2019.

    13. 4

      My second attempt at a live migration of a nontrivial-sized PostgreSQL table. I’m not a DBA by trade, just a programmer who knows enough about databases to be dangerous.

      The database is an RDS instance and has two read-only replicas that are on less-powerful instance classes than the master database. Under normal circumstances, that works out perfectly; the replicas are plenty fast enough to support their respective query loads, and we don’t waste money on excess capacity.

      But max out the master with sustained write load, and all of a sudden it is less perfect. The replicas fail to keep up with the master. Replica lag climbs until streaming replication stops and it falls back on copying WAL files around. At some point, the replicas run out of burst I/O credits and bog down even further. The master starts blocking on WAL locks and the whole system grinds to a halt until the migration is killed.

      PostgreSQL supports synchronous replication, but as far as I can tell it’s an all-or-nothing affair. So the workaround I’m using this week is to simulate it at the application level. The migration code processes a few thousand rows, commits the transaction to the master, then queries the replicas every 50ms until it sees the new data it just wrote. Then it proceeds with the next set of rows. The idea is that this will dynamically throttle the migration such that it runs only as fast as the replicas can handle, even if the master could finish it many times faster.
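
      Concretely, each increment looks something like this (a sketch; the table and column names are made up, and the real migration batches a few thousand rows):

      -- On the master: copy the next batch, remembering the ids it returns.
      insert into new_table (id, payload)
      select o.id, o.payload
      from old_table o
      where not exists (select 1 from new_table n where n.id = o.id)
      order by o.id
      limit 5000
      returning id;

      -- Then, against each replica, re-run this every ~50ms until it
      -- returns a row; only then start the next batch on the master.
      select 1 from new_table where id = <last id from RETURNING>;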

      It mostly worked well in my test environment with a clone of the production data, but after processing a couple million rows, the query planner would abruptly decide to stop using one of the indexes, and suddenly the incremental chunks in the migration would start taking 20 minutes each instead of half a second. Manually running VACUUM ANALYZE on the table made it start using the index again, but a few million rows later it would go back to doing full scans. So in addition to the thing I described above, I have a little job that runs VACUUM ANALYZE on the table once an hour.
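
      For comparison, the same effect can also be approached with per-table autovacuum settings instead of an hourly job (a sketch; the table name and thresholds are assumptions):

      alter table new_table set (
          autovacuum_analyze_scale_factor = 0.01,
          autovacuum_vacuum_scale_factor = 0.01
      );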

      This has been running for the better part of a full day without any alerts going off, and replica lag is holding steady at only a slightly higher-than-normal value aside from spikes when the vacuums happen. I won’t say this is my proudest work, and I’d like to understand why the database is deciding indexes are no good any more (which I think shouldn’t be possible given what the queries are doing), but it’s getting the job done so far.

      1. 3

        The index “goes bad” when the query planner estimates that its data is unlikely to be helpful, which it does when x% of the index points to now-deleted rows. The value of x depends on the best alternative.

        1. 1

          Do you happen to know if that can be triggered by insertion as well as deletion/updating? In this case the migration is copying rows to a brand-new table with an INSERT WHERE NOT EXISTS and the condition uses an equality comparison on a column with a unique index. No rows are ever deleted or updated, just inserted.

          I believe PostgreSQL implements updates by effectively deleting and re-inserting the row, but I didn’t know there was anything similar at play with insertion.

          My previous guess was that it was something to do with blocks in the B-tree filling up and the database being unable to reorganize the tree fast enough to make room. But like I said, I’m not an expert in this stuff.

          1. 1

            I think there are at least two ways in which a row insertion can cause an index update. For example, if the index is a b-tree (a common case) then inserting new leaves requires updating interior nodes.

            But I think you’re more interested in how to perform the migration than in a deep dive into the reasons for the problems? There are two typical ways forward:

            1. Use fewer and bigger statements. Don’t insert one row per INSERT statement, insert 500 or 5000 at a time. That’s what I did.

            2. Disable the indices while the affected tables are in a mostly-writing state, and re-enable them when usage swings back to mostly-read.

            Either will work. We generally chose approach 1 because we had to do the writing while the database was doing its ordinary read work.

            In a couple of cases I chose a hybrid, by creating an unindexed temp table in a transaction, adding rows to it in an arbitrarily complex way, and finally copying the entire temp table into the target in one go:

            begin;
            create temporary table temptable ..;
            …;
            insert into … select * from temptable;
            drop table temptable;
            commit;

    14. 3
      • Attempting to rebuild my KVM zone into a Bhyve zone, now that I’ve noticed SmartOS will run both concurrently and Bhyve seems properly supported.
      • Rebuilding my HASS install, as it’s been broken for a couple of weeks (a newer Python version is required) and I miss being able to turn my office desk on/off from my wrist (and also having push notifications for my doorbell).
      • Building lots of new infrastructure for $work, as we migrate all our “clusters” to a standard setup (all powered by terraform/puppet. 🎉)
    15. 3

      $work: Piles of stuff, new products going to POC, paperwork from last week, it’s busy busy for me. Good busy though.

      !$work: Gonna probably get back to turning again now that the weather isn’t 99% humidity and 100% awful. I picked up some Redheart to make some seam rippers out of and see how it turns; I’ve got an idea for a laminated bowl I’d like to make with it too. Trying to write a bit more of my next RPG campaign as well. I’m aiming for a more structured experience and am having some writer’s block on it. Usually that means I just need to step away from things for a while, but I’m stubborn and won’t re-learn that lesson for at least a few more days.

    16. 3

      I’ll spend one day at the Mozilla office in Portland, writing up stuff I heard and wanted to act upon at RustConf.

      Tomorrow, I’ll fly to Victoria for a week of holidays.

    17. 3

      I’m recovering from seeing Tame Impala a few days ago. Wow. What a show. And what a great band. I feel like I got my full money’s worth: great show with even better visuals.

      This week I’m hoping to finally wrap up a long-winded project. I mentioned this project before, but I’m pretty sure THIS will be the week we’ll wrap things up. I have a good feeling about it. But, who knows. It might get pushed, yet again. A coworker is going on vacation soon, and we really need to finish it before she leaves.

      The rest of the week will hopefully be focused on that project, while getting acclimated to 1Password (I snagged a free account and recently moved all my passwords over to it) and continuing to read Dune.

    18. 3

      I set up a fork of Gwern’s site. https://www.shawwn.com/About

      I wanted a site where I can post writings, notes, math formulas, charts, code snippets with highlighting, and I didn’t want it to be Medium.

      This turns out to be a lot harder than expected. Gwern’s site seemed ideal, so I just asked him on twitter if he’d be fine with me forking his design. He said go right ahead, it’s CC-0 licensed. To my surprise, he also offered a lot of help in getting it set up. Haskell is rather difficult to tame.

      The result is that I now have a site with all of those features that I can edit from github, just like github pages. https://github.com/shawwn/wiki

      Changes show up within about 15 seconds or so. It takes a moment for the static site generator to run + sync to S3.

      As a parting piece of advice, Gwern recommended I tweak the look and feel so that it has a unique style, which I completely agree with. I don’t like the idea of spending several days on CSS – I’d rather be working on AI research – but it’s worth doing.

      And you can too! Part of the goal with this was to provide a “standard research wiki” to people who just want a website that looks decent and don’t want to write it from scratch.

    19. 3

      Finally getting around to releasing (after ~7 weeks of the branch sitting idle) the new scoreboard for my browser-based multiplayer game side project, https://alpha.sneakysnake.io !!!

      Other than a few bug fixes and some metric logging, the last major feature before trying to monetize will be player-entered player names! Any suggestions on how to blacklist bad words? I would like the game to be family friendly.

    20. 3

      It’s my second annual Dev Week!

      I have trouble using all my use-or-lose PTO, so this is the second year I’m taking a week off of work to do whatever the heck I want. Last year was a bunch of Vulkan stuff.

    21. 2

      Job hunting and working on a pet project of mine, ask.engineering (website not up), which aims to co-locate feedback collection with source code and other relevant context. As a former member of a developer productivity team, I know first-hand how important developer feedback is and how difficult it can be to get. This project aims to solve the developer feedback problem by prioritizing collection context.

      For example, let’s say you want to collect freeform feedback from your platform team about pain points in their workflow. You would create the question in the app, assign it to the GitHub team of your choice, and set a participation threshold; the app will then automatically post the question on the team’s PRs and record the feedback, which is subsequently exposed in your dashboard.

      You can take a look at the repo here: https://github.com/dan-compton/world/tree/master/platform. Most of my time right now is spent trying to wrangle rules_nodejs.

      Hire me!

    22. 2

      I’ve been playing a lot of The Division 2 lately, a bit too much, honestly. I need to step away so I don’t get too dependent, like I was with WoW. So I decided to learn Unity with a YouTube series that builds an RTS with it. It’s a bit too slow for my taste, but we’ll see how it goes.

      The good thing is, now I understand why I love gaming so much: it gives me a sense of accomplishment that I don’t really get elsewhere. So I’m trying to emulate that feeling with the YouTube series, since each “episode” can be seen as an accomplishment!

      1. 3

        It gives me a sense of accomplishment that I don’t really get elsewhere.

        As someone who plays games too, this is definitely a dangerous trap. Better to get a feeling of accomplishment via real world accomplishments…

        1. 1

          Yeah, I know. I was playing WoW like a madman, so I had to stop for plenty of reasons. Now I’m working on finding a good way of enjoying gaming without needing that feeling. I don’t quite know how yet.

    23. [Comment removed by author]