1. 26

Throughout your programming career, what's the coolest technology you have worked on? Such as building large-scale web applications, high-frequency trading systems, etc. How did it help you progress in your career?

  1. 30

    The coolest thing I worked on was also the first thing I worked on. It was 2008, and Sweden's Ericsson and China's Huawei were neck and neck, trying to be the first to release LTE, 4G. I was working at Ericsson, building a 4G simulator. That project in itself might not sound cool to others, but it was to me. Because the competition was so fierce, we mere test-tools developers at that point knew more about the LTE/4G standards, tools, and network than the people actually developing the network nodes. So they called upon us instead to help push the front line of the competition. Both companies desperately wanted to be the first to present 4G. All of a sudden, the test-tools team was cool.

    I strongly believe that I, together with a colleague of mine, was the first person in the world to get a paging message through in 4G/LTE. Paging is basically the network broadcasting out, trying to find your phone.

    Not sure why that stands out as exciting to me. Maybe it's just because it was the first job I had after university.

    1. 3

      One of the coolest things I worked on was a parser for Ericsson's eNodeB LTE data stream, which also involved writing a custom database that let us query and stream the data for a GIS application. Back then there weren't really any good off-the-shelf DB options available, so I wrote everything (mostly) from scratch in C++.

    2. 21

      In terms of sheer cool factor: I’ve worked on a pretty awesome RTOS, and on my first full-time job no less. We fit pre-emptive multitasking (with a primitive but useful priority scheme), a communication stack that supported various protocols, and drivers for all sorts of peripherals (written with a pretty nice driver API) into a handful of kilobytes. It absolutely helped me progress in my career – it taught me the basics of everything I’ve done ever since. I’d always dreamed of working on an operating system, and I was incredibly lucky to be able to do that right on my first “real” job. I’ve done low-level programming ever since, including – but not limited to – OS development.
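
      For flavor, here's a minimal sketch of what a priority scheme like that can boil down to – not the actual RTOS, just an illustration in C with made-up names:

      ```c
      #include <stdint.h>

      #define MAX_TASKS 8

      typedef void (*task_fn)(void);

      typedef struct {
          task_fn entry;    /* task entry point */
          uint8_t priority; /* 0 = highest priority */
          uint8_t ready;    /* set by ISRs or IPC wakeups */
      } tcb_t;

      static tcb_t tasks[MAX_TASKS];

      /* Called from the timer interrupt: return the highest-priority
       * ready task. A real kernel would also save/restore CPU context;
       * this sketch only shows the selection policy. */
      static tcb_t *schedule(void)
      {
          tcb_t *best = 0;
          for (int i = 0; i < MAX_TASKS; i++) {
              if (tasks[i].ready && (!best || tasks[i].priority < best->priority))
                  best = &tasks[i];
          }
          return best; /* NULL means nothing is ready: run the idle loop */
      }
      ```

      The whole point of such a scheme is that the policy fits in a dozen lines, which is how you get a kernel into a handful of kilobytes.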

      It was also a great experience job-wise. I was fresh out of school, with years of freelance IT gigs under my belt but without the experience of having to ship something good on a fixed schedule, working with a bunch of other people. I had great mentors, and worked in a team of smart and inquisitive people. It was a small startup team, so everyone did a bit of everything. We had limited money for testing, for example, so everyone lent a hand with testing, every week, by rotation. That place taught me a lot about how to make wonderful things with limited resources, and about the value of going beyond what’s in your job description.

      In terms of how well I sleep at night: I spent a few years writing firmware (and, now and then, lending a hand with designing hardware) for medical devices. This was absolutely the most rewarding part of my career and I very much wish I could do it again someday (unfortunately, even though I had the best colleagues, in terms of company culture I was somewhat of a square peg in a round hole there, and it didn’t work out in the long run). It was technically challenging on top of being extremely useful – these were things that were literally used for brain surgery. Seeing how they were used in the OR was part of the job. I’ve watched countless procedures, either live or on video, and for two years I read more medical papers than CS papers.

      I never treated it as “cool” because it was a huge responsibility. You look at Grey’s Anatomy and you think “whoa, that’s cool”, but when you actually do things that might hurt somebody, “cool” is the last thing you want to call it. We had an amazing testing team and I had no doubt that if there was anyone who could find an error in my firmware, it was these guys. But I still combed through every file I wrote, went through everything on my way home from work, and double-checked and triple-checked anything I could.

      Career-wise, it’s been super helpful.

      First, interviews with non-tech people take a very interesting turn once you talk about these things. I’m reasonably adept at conversation so I didn’t have trouble with that before, either, but afterwards, it gave us a good discussion topic, way better than that awkward “tell me about yourself” and “where do you see yourself in five years” pop psychology crap.

      Second, that’s where I really learned how to review code and to take responsibility for what I do. I never knowingly shipped buggy code before, either, but I sure as hell shipped summarily-tested code that worked for a handful of things that were easy to test if it was Friday afternoon and I was tired and the thing I was working on was boring.

      Third, and perhaps the most important takeaway: it completely and forever changed the way I think about the relation between my programs and their users.

      I used to think that improving things is easy and obvious, you just have to follow some good design rules and make things simpler and friendlier. Boy was I wrong. We did sketches and diagrams (and wireframes, on things that had GUIs) and design meetings and hallway usability tests and came out with things that looked bloody amazing on paper. Nine out of ten of those things would turn out to be at best terrible in practice – and, at worst, dangerous. Five minutes in a room with a surgeon beat fifty hours of brainstorming and design meetings, and invalidated almost every single design decision we took by simply “following good design principles”. It also turned out that the first things we wanted to change in order to make things easier to use were the things that our users liked most. I completely lost faith in modern UX design practices after that.

      Like most people who got into programming for the sheer joy of it, I was also writing primarily for the sake of my work – it was important for the program to come out “right” above all other things. That changed significantly, as I faced the obvious realization that the value of a program is in how it’s used, and that I should be writing it for the sake of its users, not for its own sake. The people who use a piece of code are the reason it exists in the first place – without them, my code has no value. Before that, I used to be pretty arrogant on this matter – lusers don’t know what they want, they’ll realize how useful my program is once they try to use it, everyone is stuck in their old ways but we can’t have innovation if we don’t break workflows, etc. Once I realized that was a load of self-serving bullshit, the rate at which I produced useful code increased dramatically.

      1. 4

        It also turned out that the first things we wanted to change in order to make things easier to use were the things that our users liked most.

        Can you please elaborate a little more? I personally think the infinite discoverability promoted in early Apple work is important and we should go back to that, but I would rather hear from someone who was in the trenches, as it were.

        1. 12

          Unfortunately I can’t really give too many juicy examples because, while this was a while ago (5+ years already, wow…), the devices are still sold. These things are pretty long lived (sold & manufactured for 10-15 years, supported for 15-20 years, sometimes more). But I think I can elaborate on the context a bit.

          First, as a general remark: there’s a lot of careful UX and HCI work that goes, or should go, into medical devices. Except a lot of it is done the “old-fashioned” way and often not labeled as such – it’s seen as a “normal” part of engineering. Prior to joining that team, though, I’d worked on a bunch of consumer devices, and there was some hope that we could apply some more “modern” principles to get devices that were friendlier and more approachable – and, being the one with experience doing approachable gizmos, I drew the short straw. I ended up throwing away almost everything I’d learned before that.

          Nowadays, I’m suspicious of anything that claims a certain level of efficiency (or greater efficiency than a previous version), improved intuitiveness, ease of use, or discoverability, without showing numbers. These things aren’t easy to quantify, yes, but that also means it’s very easy to figure out how to interpret observations in order to fit the conclusion you want to draw. I am particularly suspicious of anything that uses self-reported assessments (e.g. “volunteers reported that they had difficulty figuring out how to do X” – that’s for another post).

          I personally think the infinite discoverability promoted in early Apple work is important and we should go back to that

          I love that kind of infinite discoverability, too, in all its incarnations (e.g. Amiga), but I think part of the reason why I love it is that I see it through the eyes of someone who loves computers. That’s not how most of my users (medics, nurses, technicians and so on) saw it, and I think it can be extrapolated to pretty much any kind of professional use. Discoverability is great for consumer applications, for things that are meant to be fun, but it doesn’t tell the complete story for professional applications.

          First of all – and it seems so obvious in retrospect… – it now seems to me that people who use something to do their jobs aren’t exactly thrilled by discovering things. The last thing you want to hear from a surgeon halfway through a procedure (or an accountant halfway through an audit, or a structural engineer doing a simulation…) is huh, okay, let’s see how you do X with this thing. Nobody wants to discover what clicking a button will do, they want to know what it does beforehand. If it’s not obvious what it does, normal, responsible people will check the manual. If that doesn’t say what it does, they just won’t press it. Medical devices are particularly sensitive here, but it applies to anything to some degree. Nobody who has to earn a living will trust half a day’s work to a button that does who knows what.

          I was doubly ashamed by not figuring this out because in my alternate life as an electrical engineer I absolutely understood this. “Don’t press a button if you don’t know what it does” is the first thing they teach you when they let you near equipment that’s expensive and/or could kill you. “Don’t click a toolbar icon if you don’t know what it does” is the first thing you figure out on your own after you spend four hours drawing a schematic, you click “Align” thinking it will make everything prettier, and then spend another two hours putting it back together.

          It’s obviously important for things to be “intuitive” or “discoverable” – things that aren’t are usually also hard to remember. But it’s far more important for them to be clear, or predictable.

          For example, some of these gizmos could be plugged into PCs and used through an app. Some of the software I wrote dates from the early days of the Windows 8 and mobile app enthusiasm, when it became fashionable to “redesign” various apps so that they looked more modern. Everyone was doing it, we were a bit frustrated to be late to the party, and I’d already started to play with WPF & friends. And as it began to pick up steam, people we talked to started to routinely ask us, in every imaginable way, not to remove text labels, from anything – not from physical buttons on the device, not from the buttons in the PC software, not from the menus, not from nothing.

          I realized why the first time I saw a device acting funny in an OR. It didn’t malfunction, the doctor just thought it had a strange whirring noise, and he asked one of the techs to help him go through the readings on the device and make sure it was all right. All this had to be done over the intercom – the good doctor didn’t want anyone coming into the OR unless it was strictly necessary. It’s a lot easier to say “press the ‘Diagnostic’ button” than “press the, uh, button with a circle on it and, like, a line in the middle, looks kind of like a clock, I think it’s the second one on the right?”

          Second: precisely because of this, everyone regards time spent learning how to use something as unproductive. People will put up with it once, but no more. It’s okay, to some degree, if it’s hard (as long as how hard it is correlates with how important it is). What’s not okay is if you have to do it more than once for no obvious gain – “obvious gain” meaning “something that was not possible before”.

          For example, we spent quite some time figuring out how to do various setup procedures in fewer steps. People were, at best, indifferent to it. They’d done those procedures hundreds of times over several years. By now, they could do it blindfolded and with one arm tied behind their back. “Simplifying” them meant making things somewhat less troublesome for new users, while forcing seasoned users – aka paying customers who had been with us for two decades – to re-learn how to do something that had always worked fine. Most people were okay with it, but didn’t see the point. Lots of people were really pissed about it though.

          When we started working on a new version of a device and asked people what they wanted in it, the universal answer was “can you make it as close to the old one as possible?”. Everyone had things they didn’t like about the old one, but everyone also preferred putting up with them to the risk and difficulty of learning something radically different.

          It’s not that “users hate learning new things”. “Hates learning new things” is the last thing I could have said about any of the people using those machines – these were either technicians who were one screw-up away from killing someone, or nurses and doctors who went through medical school and absolutely learned new things every day, at a rate that made me feel stupid and slow. But it’s important for people to trust their tools and to be comfortable about their ability to use them correctly. If the tools keep shifting under your feet, you don’t feel secure with them. Also, every hour spent learning how to use something is an hour that’s not spent using it.

          Again, I want to kick myself for it because I knew every one of these things. Once I figured out how to do something with a CAD tool, the last thing I wanted to do when the next version popped up was figure out how to do exactly the same thing – with everything that entailed (ensuring that you get the right results, for example). A release that allowed me to do things I couldn’t do before was amazing. A release that didn’t allow me to do anything new, and worse, asked me to re-learn how to do things I could already do, was a flop. I could already do all those things and now I installed version 8.0 and for the next two days I can’t do them anymore.

          1. 10

            And I’m gonna piggyback on my previous post because there’s this thing:

            I am particularly suspicious of anything that uses self-reported assessments (e.g. “volunteers reported that they had difficulty figuring out how to do X” – that’s for another post).

            I have countless examples of how these things led to incorrect design decisions simply because no one examined the claim closely enough, and the obvious way to fix it was either a bad trade-off (it made other things more difficult) or outright incorrect. Basically, I think its failure rate is around 90%. What follows is just the juiciest example I can remember (not from the same period, but it certainly matched my earlier experience!).

            I once worked on, um, let’s say it was a Heisenberg compensator. It was this black brick of a thing with a probe sticking out that you plugged into a really tiny jar, and it told you how much uncertainty there was in it, and you could use a remote control with a slider to make it add more or less uncertainty. You could also plug it into a PC and use it through an app, which had a big window displaying the current amount of uncertainty, and manipulate it through a “virtual” remote, although everyone mostly stuck to the real one. There was a display on the black brick showing that, too.

            When I started work on version 2 of the Heisenberg Compensator, I had an amazing treasure trove: about 5,000 (!!) case files, carefully compiled by our customer support team, with every support case, every formal talk, every rumour, basically anything that our techs or sales team had ever heard.

            Chief among these complaints was that “scrolling text is difficult to read”. Indeed, it was important enough that at some point in the past years, when it became clear a new version of the Heisenberg Compensator was on the radar, someone did some sort of a hallway usability study, and they did confirm that it was a pain to read scrolling text on the old display, yep.

            The Heisenberg Compensator Mk I had a pretty small screen, indeed, and so one of the things we did for the first prototype of the Mk II was get a bigger screen (within some limits – it was still pretty tiny), and we carefully made sure that every single string it displayed, in any one of the 20+ languages it supported, would fit on the screen without scrolling.

            Everyone we showed it to was unimpressed. The old screen, they said, was big enough, and they’d never seen it scrolling. Plus they mostly looked at the display on the PC screen anyway. I’d never seen it scrolling either, if we’re being fair.

            I spent three days poring over the case files trying to figure it out, and one of the techs, whose office was right next to mine, enlightened me purely by accident.

            The Mk I had a diagnostic and calibration mode. The very first line of the “Diagnostic and Calibration Mode” section of the manual said “Diagnostic and calibration should only be performed by authorized tech support personnel” so none of the scientists using it ever read further.

            You triggered the “Diagnostic and Calibration” mode by doing a sequence of button presses on the remote control. The sequence was not something that you’d normally do, you had to (among other things) press a button that was under a glass hatch, so that it wouldn’t be hit by mistake. Now comes the cool part.

            The Mk I, whose main brain was an old 8051, could only process so many serial events at once. There was a particular sequence that would trigger a wee bit of a latency in the remote, and the device seemed to freeze for about 10 seconds. Normally, that sequence happened only during the setup phase, which was done by a technician, but sometimes people forgot to plug a cable in, and they’d do it live. At that point, the screen would flicker for a bit (because of a flurry of IRQs), and then the device would seem to “freeze” a little.

            When that happened, people sometimes panicked and tried pressing some buttons at random, just to see if it still worked. Sometimes they’d press them in rapid sequence even after it came back, just to see that it wasn’t stuck anymore. And, yep, you guessed it, they’d enter “Diagnostic and Calibration Mode”.

            In this mode, a bunch of text scrolled pretty quickly. It had originally scrolled pretty slowly, but at some point, some support techs complained it was too slow. This, too, had a case file, and it definitely looked like speeding things up had been the correct thing to do! These people did this procedure dozens of times every day – they already knew what the text said, they just wanted to see the numbers. So the text was dutifully made to scroll faster. Of course, it was now so fast that anyone who didn’t already know what it said had no idea what was happening.

            It gets better: this mode had a timeout. At the end of the timeout period, the device would automatically reset. Also, the device was smart enough to save its runtime parameters periodically, so that if it was reset by a power glitch – or because the timeout had expired – it came back in the same state as before. Of course, no one could figure that out live: they had no idea what the scrolling text said, and they thought it was an error message or something.

            So actually the “scrolling text is hard to read” report hid three bugs, none of which were in any of the 5,000+ files, and which had been happening for about eight years at that time:

            1. The device has a high input latency under a particular scenario
            2. It’s too easy to enter diagnostic mode by mistake (!!), and there’s no way to tell you’ve done it
            3. When the device comes up after a reset, there’s no message informing you about it. That, in turn, took care of a bunch of other support cases: lots of people mysteriously reported that the device came on with incorrect values for things like initial entropy levels. That happened because, instead of pressing “end procedure”, some scientists just yanked it out of the power socket. The device thought it had gone through a power spike – but, of course, it hadn’t – and dutifully restored its last state, instead of the “factory” state.

            This caused us to re-examine a lot of similar support cases. We found that, in almost every case, people ended up using the workaround we’d recommended during the first support calls, not the fix we pushed in a later version, because the fix we thought was dope missed the point almost every time.

            1. 1

              This comment and the other comment below form a really informative story that deserves better than to languish in a lobste.rs comment. You should make a blog post or similar out of it.

              1. 1

                Thanks! I actually had the second one in draft form at one point but I never got around to writing it. Thank God for lobste.rs – where even procrastination ends up being useful!

              2. 1

                Thank you! That will teach me to make a change for the sake of progress :)

                1. 2

                  Don’t be fooled by my detailed reply, sometimes I’m not convinced I really learned my lesson, either :-).

          2. 12

            Firefox OS was pretty cool but also pretty weird. :)

            Lots of cool JS APIs to talk to phone hardware, but unfortunately at a time when JavaScript didn’t do async very well. async/await didn’t exist, and Promises were so new that only half of the APIs supported them. All code was callback hell.

            My most exciting project was getting rid of the Cross-Site Scripting (XSS) we had all over the place. System apps, home screen. Everywhere. We fixed it with a combination of a poor HTML sanitizer and an eslint rule I came up with, which is still widely used at Mozilla: https://github.com/mozilla/eslint-plugin-no-unsanitized

            The project was successful enough that I brought it into Firefox after I moved teams, and it helped find and avoid a couple of critical vulnerabilities in the browser.

            1. 9

              GameBoy Advance games.

              It was a cool, very quirky architecture: GameBoy hardware sandwiched in GameBoy Color hardware with the GBA built around it. A CPU with no cache, but directly addressable SRAM. No OS. Everything poked the hardware directly, and all kinds of clever/ugly hacks were encouraged.
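
              For anyone who hasn’t seen it, “poking the hardware directly” on the GBA really is this literal. A minimal sketch in C (register addresses are from the publicly documented GBA memory map, e.g. GBATEK – this is an illustration, not code from any shipped game):

              ```c
              #include <stdint.h>

              /* No OS, no drivers: display control and video memory are
               * just fixed addresses you write to directly. */
              #define REG_DISPCNT (*(volatile uint16_t *)0x04000000)
              #define VRAM        ((volatile uint16_t *)0x06000000)

              #define MODE3  0x0003 /* 240x160 bitmap, 15-bit color */
              #define BG2_ON 0x0400

              int main(void)
              {
                  REG_DISPCNT = MODE3 | BG2_ON;
                  /* Plot one red pixel at (120, 80); colors are BGR555. */
                  VRAM[80 * 240 + 120] = 0x001F;
                  for (;;) { } /* nowhere to return to: there is no OS */
              }
              ```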

              1. 1

                I’ve been working on a NES game for the last few months and I can absolutely second this description! It’s every kind of cool and fun, and the architecture, despite its many quirks, is refreshingly simple. It’s the kind of quirky that invites clever hacks, not the kind that invites endless boilerplate code to impedance-match libraries together.

              2. 8

                At the beginning of my career when I was still very junior, I made an animated social media feed for Volvo’s V40 launch at the 2012 Geneva Motor Show. It appeared on a huge live display behind the car, and was broadcast around the world.

                My former colleague wrote the backend, and I hacked together the UI with jQuery.

                https://vimeo.com/79396437

                1. 6

                  Parsers.

                  Until five years ago, I worked for seven years at a network security company named RSA Security. A major part of my work was focused on developing parsing engines for network events such as syslog messages, firewall logs, SNMP traps, etc. for their RSA enVision (now end-of-life) and RSA NetWitness products. Since there are thousands of event formats from hundreds of different network device products, the parsing engine had to be flexible enough that we could add new parsers without rebuilding the product. As a result, the actual event parsers were not hardcoded. Instead, the parsing engine would read the grammar from XML files (odd choice, I know), then build a finite-state automaton (the actual event parser) on the fly, which would then parse actual network events. The customers or professional services could define new parsers by creating new XMLs, and the engine would load them and build new parsers from them.

                  To summarize, I was building parsers that would parse grammars, which would then build concrete event parsers on the fly, which in turn would parse the actual network events. It felt very cool back then.
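
                  The core trick – parsers as data rather than code – can be sketched with a tiny table-driven recognizer in C. The states, character classes, and input below are made up for illustration; they’re not RSA’s actual grammar format:

                  ```c
                  #include <stdio.h>
                  #include <string.h>

                  enum { S_START, S_WORD, S_NUM, S_REJECT, N_STATES };
                  enum { C_ALPHA, C_DIGIT, C_OTHER, N_CLASSES };

                  static int classify(char c)
                  {
                      if (c >= 'a' && c <= 'z') return C_ALPHA;
                      if (c >= '0' && c <= '9') return C_DIGIT;
                      return C_OTHER;
                  }

                  /* transition[state][class] -> next state. In the real engine,
                   * a table like this was built at runtime from the XML grammar,
                   * so adding a parser meant adding data, not rebuilding code. */
                  static const int transition[N_STATES][N_CLASSES] = {
                      [S_START]  = { S_WORD,   S_NUM,    S_REJECT },
                      [S_WORD]   = { S_WORD,   S_REJECT, S_REJECT },
                      [S_NUM]    = { S_REJECT, S_NUM,    S_REJECT },
                      [S_REJECT] = { S_REJECT, S_REJECT, S_REJECT },
                  };

                  int main(void)
                  {
                      const char *token = "eventlog"; /* hypothetical field from a syslog message */
                      int state = S_START;
                      for (size_t i = 0; i < strlen(token); i++)
                          state = transition[state][classify(token[i])];
                      printf("token class: %s\n",
                             state == S_WORD ? "word" : state == S_NUM ? "number" : "rejected");
                      return 0;
                  }
                  ```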

                  Although I began working on it in 2008, the code for the products went all the way back to the 1990s, when open source was still not mainstream in the software industry, so almost everything, from the regular-expression engine to data structures like hash tables and linked lists, was homegrown in C or C++. I did not have a formal degree in computer science. I had done my studies in another branch of engineering, so the computer science concepts behind the data structures and parsing technology were new to me. Further, it was the first time I had worked on a product with more than 2 million lines of code, so the complexity of the product was quite overwhelming initially. Nevertheless, I did reasonably well, thanks to my decision to take extensive notes about the code and architecture of the product, as well as a great deal of guidance from senior engineers and scientists.

                  As a result of that experience, any parsing problem I come across now usually feels like a piece of cake. After having worked on a product with a couple of million lines of code, I find the modern trend of microservices that usually have a few thousand lines of code easy to work with. I owe a lot of the progress in my career to that experience in RSA and the exceptional people I met there.

                  1. 6

                    My first job, fresh out of university, was for a retail management (head office and point of sale) operation.

                    One of their key advantages was their custom distributed database, which meant stores could stay up to date without needing a constant internet connection. For instance, at one client, every night a Windows NT server at head office (with 4 56k modems plugged in) would start round-robin dialing each store, transferring sales data, transmitting new product info, etc.

                    This code was chock full of manually checked mutexes and would regularly crash. My job was to cross-reference the stack trace we got from these crashes against the archived mapfiles (kept whenever we built a release for a client) to figure out the code path it was in when it crashed.

                    After 6 months of extremely gruelling study, I had tracked down three missing synchronisation points. The crash rate dropped by ~90% or better once that change went out.

                    1. 6

                      About a decade and a half ago, I worked on supporting people in using CLIM, a GUI interaction library for Common Lisp. People using it were mostly scientists, and the most prominent tool built with it was a massive tool suite to visualize genes and their associated research material. One of the test cases had me downloading the genome for anthrax, which at the time I felt was the most metal thing I’d ever done or would ever do… so far, that’s remained true.

                      1. 5

                        About 10 years ago I made the first version of this general-purpose tool for musical video performances by integrating a real-time video playback and GPU-accelerated effects application with Ableton Live, the DAW that is used by most live performers. The fact that this hadn’t been done before (the integration options had only just become available) was very invigorating, as I guess were the crazy nights I got to spend on and behind large techno stages. I wouldn’t do it again today though ;)

                        1. 5

                          Working with a Nokia contractor, we were building application software and libraries for a Nokia model called the 770. It was to be a relatively cheap “internet tablet” that ran native Linux (with GNU libc and all), and it was manufactured in Europe. This was 5 years before the first iPad, and 2 years before the first iPhone. Some years after I left that place, this effort culminated in the Nokia N9 phone, which was the last of the line before the Microsoft fiasco.

                          Unfortunately, the original device especially was ahead of its time in a bad way. The hardware was underpowered, and Linux hadn’t been really optimized for mobile usage at that time. All the UI paradigms were in heavy flux, with nobody really knowing how they should actually be done. Also, it didn’t seem like Nokia was putting a lot of resources in the project.

                          1. 10

                            It was nothing too complicated, but I once refilled a small refrigeration unit that could reach up to -70°C. That was the coolest technology I have ever worked on.

                            Jokes aside, what is it lately with all these shallow questions? I really enjoy Lobsters, but it’s turning more and more into another Hacker News with experience-bikeshedding, resembling an Alcoholics Anonymous meeting where everyone gets to share their story but nothing is actually accomplished in the general discourse.

                            1. 2

                              Ya, these really should just be conversations in the IRC…

                              1. 2

                                It’s the submitter’s style, and it came up last time we had a metathread about this: https://lobste.rs/newest/mraza007

                                1. 1

                                  -70 degrees is impressive. Do you remember which refrigerator that was? I’m interested in such a thing for stress-testing embedded devices.

                                2. 4

                                  I’ve never worked on anything cool, and I can’t say that the specific things I worked on have helped me progress in my career. That said, I’ve been programming a long time, and this job has provided me with a lifestyle that makes me happy. That’s all I could have asked for.

                                  1. 3

                                    I am just starting my career (2 years in, first job), but I am currently employed building a high performance language runtime. I thoroughly enjoy the challenges.

                                    1. 3

                                      Anything public we can read about?

                                      1. 2

                                        Not yet :/ I’ll definitely post it here and on my website when we do make it public. It will, I think, be of fairly wide interest.

                                    2. 3

                                      I’ve been working on a descendant of the Drawbridge project from MSR for the past few years: https://www.microsoft.com/en-us/research/project/drawbridge/ A bunch of interesting ideas and tech came out of that project; today we use the technology to run unmodified Windows applications on Linux, namely the SQL Server engine.

                                      (If you want more details, we gave a presentation at All Systems Go in 2019: https://www.youtube.com/watch?v=zq1WTLnntIg)

                                      The project has been amazingly educational for me. I had a bit of previous “systems”-type experience, but this was a whole new level. It’s really expanded my horizons: from having a vague idea of how operating systems, runtimes, debuggers, etc. work, to actually understanding, designing, and implementing a bunch of these things in production. I hope it’s not the coolest thing I ever work on, but so far I think it takes the cake.

                                      1. 3

                                        I don’t think I have a #1 but these have been my favorite so far:

                                        • iOS jailbreak stuff
                                        • LuaJIT
                                        • Game engines
                                        • Gameboy emulator
                                        • 3DS homebrew
                                        1. 1

                                          Wow, that’s pretty cool. I remember jailbreaking my iPhone way back in 2010 and using Cydia. It was very handy.

                                        2. 2

                                          From a technological perspective:

                                          1. A TCP relay for a closed captioning system. For a school project, my friends and I worked on a TCP relay for a closed captioning company. Basically, closed captioners would make TCP connections to our software, and we’d broadcast the data they sent to downstream servers. The idea was to make transitioning between captioners much easier than it used to be. So our software acted as a bridge, only rebroadcasting a single captioner’s data at a time, but it had a WebSocket-based web admin that let us pick which captioner was allowed through at a given time (a rough sketch of the relay pattern appears at the end of this comment). That was probably my favorite Go project I’ve ever written; unfortunately, I don’t think it ever saw the light of day, as we weren’t able to implement it properly with the company in question.

                                          2. PISC. It’s on indefinite hiatus now, but PISC was the first programming language project I made that reached a critical mass of being pretty useful. It’s had a lot of sub-projects, including an evalbot and a start on a game.

                                          From a business perspective:

                                          1. I wrote some reporting libraries for a previous employer that were a lot more flexible than the previous system. A lot of that was enabled by adding a .NET version of PostgreSQL’s string_agg function to SQL Server, which allowed me to define aggregate functions in SQL and have them return lists of IDs in a string. Those could be used to make reports clickable by supplying lists of database IDs, rather than duplicating SQL queries across code. It was a system that could only scale so far, but it made our reports much more reliably correct and reduced code duplication a fair bit, which helped keep our clients happy.

                                          2. Less code that I wrote, and more extremely useful tools that I’ve used again and again: LINQPad and ILSpy. LINQPad because it is truly the essence of a coding Swiss army knife, probably more than any other programming tool I know. I’ve made test WebSocket clients and Rabbit clients, I’ve used it for building out unit tests, and I’ve written a database-driven tool that used a small SQLite database to help me analyze how our database code was interacting with various triggers and table definitions. And the .Dump() extension method is probably my favorite data structure visualizer ever. I’ve used it as a lightweight database management tool. It’s not perfect, but it’s something I definitely miss when I’m not using C#. ILSpy is something I’ve been using more recently. It’s a decompiler for .NET IL, which is super handy for “I found this random executable on a server or another developer’s laptop. What does it actually do?” – a question I’ve had to answer a few times of late.
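
                                          As promised above, here’s a rough sketch of the relay pattern from the closed captioning project: keep every captioner’s connection alive, but forward only the selected one downstream. The original was in Go; this select()-based C version only illustrates the pattern, and all names are made up:

                                          ```c
                                          #include <sys/select.h>
                                          #include <sys/types.h>
                                          #include <unistd.h>

                                          /* Forward bytes downstream only from the currently selected
                                           * captioner. Everyone else's data is read and discarded, so
                                           * the admin can switch the active source at any time without
                                           * anyone having to reconnect. */
                                          void relay_loop(int captioner_fds[], int n, const int *active, int downstream_fd)
                                          {
                                              char buf[4096];
                                              for (;;) {
                                                  fd_set rfds;
                                                  FD_ZERO(&rfds);
                                                  int maxfd = -1;
                                                  for (int i = 0; i < n; i++) {
                                                      FD_SET(captioner_fds[i], &rfds);
                                                      if (captioner_fds[i] > maxfd)
                                                          maxfd = captioner_fds[i];
                                                  }
                                                  if (select(maxfd + 1, &rfds, 0, 0, 0) < 0)
                                                      return; /* real code: check errno, retry on EINTR */
                                                  for (int i = 0; i < n; i++) {
                                                      if (!FD_ISSET(captioner_fds[i], &rfds))
                                                          continue;
                                                      ssize_t got = read(captioner_fds[i], buf, sizeof buf);
                                                      if (got <= 0)
                                                          continue; /* real code: drop disconnected captioners */
                                                      if (i == *active) /* *active is set by the web admin */
                                                          write(downstream_fd, buf, (size_t)got);
                                                  }
                                              }
                                          }
                                          ```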

                                          1. 1
                                            1. 1

                                              I don’t know if this is the coolest, but I did spend a lot of time inside the A/V pipeline of Macs, iPods, and iOS devices, building the iTunes video store. I got a lot of exposure to unannounced devices, which is pretty cool, and access to the codec teams too.

                                              1. 1

                                                disk.frame is the most popular medium-data framework in R

                                                1. 1

                                                  I worked on the Disney Playmation toy line as the build engineer for the main device firmware. I had a front-row seat to see an entire family of consumer electronic devices being designed, a semi-custom RTOS brought up on the master unit, and an entire supply chain worked up.

                                                  It’s still a weird feeling that code I wrote was burned to ROMs on a toy that was sold at major stores in the US. Plus, I can prove it by knowing the hidden button presses to bring up the toy’s test mode.

                                                  1. 1

                                                    A historical debugger for .NET and Java. You really get to appreciate all the hard work people put into writing virtual machines and compilers (ahead-of-time and JIT).

                                                    1. 1

                                                      Druid

                                                      I think this deserves a mention here. I worked with it around 2014-2016. The project has since been handed over to Apache.

                                                      This was like a “secret weapon” for our team at that time. I worked on making a social media analytics product (think a dashboard for Twitter/FB, etc.).

                                                      With that, we were able to provide analysis in real time over a ridiculously big amount of data. It was very hard to configure correctly. But once done, it was incredible. You could click a button and hundreds of aggregations would be processed over tens of millions of big JSON objects in less than 200 ms. That was a really cool tool.

                                                      Here is a short presentation about it: http://yogsototh.github.io/mkdocs/druid/druid.reveal.html#/

                                                      1. 1

                                                        Very cool! I was thinking about building something similar.