Threads for malexw

  1. 3

    Pretty good video.

    The one thing that confused me was when he opened up the NES and showed the metal box at the back and said that was the RF adapter. I always thought the little grey box that plugs into the NES was the “RF adapter”, but apparently that was the “RF switch”. But the switch between channels 3 and 4 was on the back of the NES-001, so why was the little box called a “switch”? What was the little box for, just to give you a place to plug in your TV antenna?

    1. 4

      It allows you to keep your NES plugged in all the time without interrupting your viewing of analog TV or cable. It accomplishes this by switching between the two inputs: the NES (or “control deck”) input is used if an RF signal is detected there; otherwise it passes through the input from the ANT connector.

      Nintendo documents the expected hookup. If you’re not interested in viewing analog TV (I don’t blame you in 2020) you can also directly connect the NES to the TV, as shown in the video, without the RF Switch.

      1. 5

        To add on to that, in the days of the ColecoVision and the Atari VCS, the RF Switch was actually a physical switch that the player had to flip from “antenna” to “game” when they wanted to play.

        This is what those switches looked like before the automatic NES RF switch was introduced:

        I once re-wired a Coleco Telstar pong clone from 1977 to use an NES RF switch, but it didn’t really work well. The game image did partially come through, but it was very noisy - almost as though the TV was getting signal from both the antenna and the game at the same time. I never investigated the cause, but it seems as though there is something important in the RF output from the NES that makes the automatic switch work.

    1. 1

      I’m working on fixing a car audio amplifier. It’s a neat device - 6 channels, 1000 W, class AB. It’s a Soundstream D’Artagnan, built around 2000. Apparently only around 100 of them were made. But part of the circuit has been damaged and I’m having trouble selecting appropriate replacement parts because I have no idea what this part of the board is supposed to be doing.

      I’ve posted a question on the Electronics Stack Exchange but don’t have any real solid leads yet.

      1. 4

        All over the place, as I am most weeks:

        1. 4

          dropgit sounds like a really interesting project. I’ve been thinking a lot lately about how documentation is done at companies and I’m beginning to believe that effective company-wide (or project-wide) documentation would be helped by making it easy for technical and non-technical people to collaborate in the same version control system. Would dropgit allow a developer to continue to use the CLI flows they’re familiar with while a customer support person uses a friendlier interface?

          1. 1

            I started talking with some developers about the problems in documentation and prototyping a solution. I thought about relying heavily on git, mainly because I like the idea of having documentation near the code, as comments like javadocs or jsdocs, but it also enables other features. Most developers I talked to said the code is the ultimate truth and they can’t trust the documentation because it gets out of sync. So imagine a tool that shows you the last commit date for a code block and the last commit date for that code block’s documentation. You would be able to easily see if the documentation is outdated and check what has changed since the last time they were synced.
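
            A rough sketch of what that could look like with plain git - the file names here (src/parser.js, docs/parser.md) are hypothetical stand-ins:

            ```shell
            #!/bin/sh
            # Flag docs whose last commit is older than the code they describe.
            code_ts=$(git log -1 --format=%ct -- src/parser.js)
            docs_ts=$(git log -1 --format=%ct -- docs/parser.md)

            if [ "$docs_ts" -lt "$code_ts" ]; then
              echo "docs may be stale"
              # What changed in the code since the docs were last touched:
              git log --oneline --since="@$docs_ts" -- src/parser.js
            fi
            ```

            A real tool would track this per code block rather than per file, but the commit metadata it needs is already sitting in git.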

            1. 1

              I’ve often wished contracts in docx form and other legal documents could be in git. Something like dropgit enables that, and could encourage people to use alternate formats like rich text for easier diffing…

              1. 1

                Thanks! The idea is definitely to allow CLI and non-CLI people to work together. How that’s going to look exactly isn’t settled. I have some ideas but we will need to experiment and see what works. At this stage I am trying to see if I can gather enough interest in it to be able to put a serious amount of work towards it.

              2. 2

                My only wish for dropgit would be better commit messages… But it’s a really really cool project and idea! Signed up.

                Any chance of self hosting? I don’t put many repos on github/lab so I’d need to be able to point the back end to my own hosting…

                1. 2

                  There is a strong chance of self-hosting and of it being completely open source. It’s very early stages for this idea, but it’s grown out of working on and part of the motivation is to be able to write software that can be re-used for these open source projects. I wrote a few more details on the GOSH forum if you are interested.

                  1. 1

                    Oh and about the commit messages: the idea is that you can change them if you wish. It will give you some time before they become “permanent”. So you can do it later or you can do it as you go. Dropgit is still very early though, mostly ideas at this point, I think I need to tweak the landing page to make that clearer.

                  2. 2

                    How is it that you support adding commit messages later? Do you re-write history on the remote?

                    1. 2

                      I definitely need to make this more obvious on the page, but this is a UI concept which is at the idea stage. I have some software written for that I can re-use for this but Dropgit as presented doesn’t exist yet.

                      My current thinking around the commit messages involves three different scenarios:

                      1. This is a shared repository, you are committing directly to master.
                        • Dropgit acts as your staging area that “settles” an hour after you make your last commit, then it syncs to the remote. You can only edit your commit messages before it’s “settled”.
                      2. This is a shared repository, you are committing to your own branch.
                        • Synced right away to the branch, Dropgit will push --force-with-lease amendments you make to the history of this branch.
                      3. This is not a shared repository, this is your repo and you are working on master.
                        • Synced right away and push --force-with-lease to master.

                      Whether we really want to support all three scenarios and how to keep the interaction and interface simple enough bears some further design and experimentation.
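
                      For scenario 2, the underlying git operations might look something like this sketch - the branch and remote names are illustrative, and Dropgit itself would presumably drive them for you:

                      ```shell
                      #!/bin/sh
                      # Commit and sync immediately to your own branch...
                      git checkout -b my-branch
                      git commit --allow-empty -m "wip"
                      git push origin my-branch

                      # ...then reword the commit later and sync the rewritten
                      # history; --force-with-lease refuses to clobber any remote
                      # commits you haven't seen.
                      git commit --amend -m "Add validation for user input"
                      git push --force-with-lease origin my-branch
                      ```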

                  1. 36

                    Or simpler without the need for additional <a>:

                    <h1 id="pageTitle">Title</h1>
                    <a href="#pageTitle">Back to top</a>.
                    1. 26

                      I don’t even understand why there was an article written for this… It’s so obvious to anyone with any basic HTML understanding.

                      1. 19

                        That’s the thing - many website owners, especially those who use WordPress, don’t know basic HTML. So they may be inclined to install a plugin instead of putting a couple of simple lines of HTML into their theme.

                        Plugins aren’t inherently bad, but they add unnecessary bloat. Especially for something as simple as this.

                        1. 4

                          unnecessary bloat

                          … then maybe don’t use WordPress. ;)


                        2. 8

                          because if there is no article about it, this “obvious” knowledge becomes lost, overcomplicated solutions float to the top of search results, and everyone starts doing it the stupid way.

                          1. 8

                            As someone who recently had to start doing frontend work at my job, I just want to say this is non-obvious to me and I appreciate the article and comments.

                          2. 8

                            Apparently “#top” or just “#” not only works (I remember #top from… the early days. Netscape 4.x?) - but it’s in the standard:

                            “Note: You can use href="#top" or the empty fragment (href="#") to link to the top of the current page, as defined in the HTML specification.”


                            Where the spec is… I’d say more than a little obtuse:

                            Ed2: not sure why the author felt a need to define an HTML4-style named anchor for #top…

                            Ed: Also TIL: in HTML5 linking directly to an ID is preferred over an explicit named anchor.

                            1. 4

                              According to WHATWG and MDN, this is the preferred method.

                              1. 3

                                Problem with that is it would take you to the title of the page, which isn’t necessarily the top of the page.

                                1. 7

                                  The id attribute can be given to any element so it could be done with a <header> or <article> element if that suits the page better. The name attribute on anchor elements is now considered obsolete.

                              1. 27

                                I created an e-mail address on my domain and I write journal entries as e-mail to that address. My client for writing becomes any device capable of sending e-mail, and I can use the search features built into gmail for searching through entries. As a bonus, entries are automatically timestamped.

                                I considered using paper, but I wanted something that would be easy to write to even if I was away from home and forgot my notebook. I also find typing entries to be faster than writing, and being able to search is a great feature.

                                1. 9

                                  This is frankly brilliant.

                                  1. 6

                                      Awesome idea. For gmail you could use and then set up a filter to go to a special folder.

                                    1. 3

                                      This is one of those ideas that make you go “why didn’t I think of this?”.

                                      1. 3

                                        I’ve been meaning to switch my blog over to using a similar mechanism.

                                        1. 4

                                          Reminds me of posterous ~10 years ago

                                        2. 2

                                          I really like this.

                                        1. 2

                                          I also prefer the style you describe: a high-level overview in a README with links to more documents, which I put in a doc/ folder. I like it for a few reasons.

                                          • Documentation ships with the code. If you have the code, you have the docs
                                          • It forces you to put docs under some form of source control
                                          • Changes to docs can be done in the same PR, or ideally even the same commit, as changes to code
                                          • It forces docs to get code reviewed (assuming you do code review). This is great because an extra editor helps make sure the documentation is useful, but it also ensures that at least one other person knows that the documentation for that component exists.
                                          • If you get a PR that doesn’t include any changes to docs, that’s an indicator of a problem - either the documentation hasn’t been updated, or there is no documentation at all for these components.

                                          Another thought I’ve had recently is if we as an industry should be bringing more ideas from literate programming into our daily practice. I think this could especially help onboard new contributors, but I don’t know how much extra work this would add, and if the trade-off is worth it. It’s something I’d like to explore more.

                                          1. 3

                                             I’m looking for software developers, on-site in Toronto, Canada.

                                            We’re building a system that enables cryptographically secure chain-of-custody on distributed infrastructure without a ledger. We’re using some ideas from the cryptocurrency world, but we’re a traditionally-funded startup with quality investors and paying customers.

                                            I’d love to hear from developers interested in DevOps work, front-end web and mobile in ClojureScript and React Native, and backend developers who are comfortable with distributed systems. Previous experience with Clojure, cryptography, and security would be an asset.

                                            Let’s talk over e-mail - my address is in my profile

                                            1. 3

                                              “without a ledger.”

                                              That part sounds refreshingly different after all the blockchain startups in that space.

                                            1. 4

                                               I’ve been trying to develop a side hobby of building guitars. This weekend I’m working on a reproduction of a 6th-century Germanic instrument called a lyre. My biggest challenge over the weekend will be gluing the two halves of the soundboard together, which so far I’ve been failing at. It requires planing two 24” long boards along a 1/8” wide edge to get a perfectly flat seam. It has led me down a pretty deep rabbit hole of learning about hand planes and blade sharpening, but it’s all pretty rewarding.

                                              1. 3

                                                 In 2015 we dropped Grunt and Gulp in two projects I was leading in favour of npm scripts, and that decision has worked out well for us. Both projects are front-end applications with node API servers. We support live reloading in dev, pull down translations from Transifex during builds, generate language-specific js bundles, and can build our native (Cordova) apps for iOS and Android, all from our npm scripts. Everything you’d expect to do in any other build tool.

                                                 What I prefer about plain npm scripts is that there are no tool-specific gotchas or knowledge required to make improvements to our build system. Well, that’s not entirely true, because we use Webpack for our front-end bundles. Aside from Webpack, though, if you can write node.js, you can add new build commands without having to wrap your head around the syntax of other build systems. This saves us some effort when onboarding new employees, and I like that I don’t have to keep going back to the Grunt or Gulp docs to figure out how to do something new.

                                                 Another great plus for me is that we no longer have to install any npm packages globally, so it’s really easy for us to change Cordova versions between projects or to test new versions. Just git checkout and yarn install. The npm scripts can run Cordova commands as if it were installed globally, so we no longer have any issues from accidentally building our apps with the wrong version of Cordova.

                                                1. 12

                                                  Ok, I’ll ask a stupid question. What does a great deployment pipeline look like?

                                                  1. 10

                                                    It depends on what you’re trying to deploy and what constraints you have; there isn’t one magic bullet. One pipeline I was especially proud of for a Python app I wrote at Fog Creek worked like this:

                                                    1. Create a pristine dump of the target version of the source code. Say the revision is 1a2b3c4d. We used Mercurial, not Git, so the command was hg archive -t tbz2 -R /path/to/bare/repo -r 1a2b3c4d, but you can do the same in Git.
                                                    2. Upload this to each server that’ll run the app, into /srv/apps/myapp/1a2b3c4d
                                                    3. Based on the SHA1 of requirements.txt, make a new virtualenv if necessary in /srv/virtualenvs/<sha1 of requirements.txt> on each server hosting the app.
                                                    4. Copy general configuration into /srv/config/myapp/1a2b3c4d. Configs are generally stored outside the actual app repo for security reasons, even in a company that otherwise uses monolithic repositories, so the SHA here matches the version of the app designed to consume this config info, not the SHA of the config info itself. (Which should also make sense intuitively, since you may need to reconfigure a running app without deploying a new version.)
                                                    5. Introduce a new virtual host, 1a2b3c4d.myapp.server.internal, that serves myapp at revision 1a2b3c4d.
                                                    6. Run integration tests against this to make sure everything passes.
                                                    7. Switch default.myapp.server.internal to point to 1a2b3c4d.myapp.server.internal and rerun tests.
                                                    8. If anything goes wrong, just switch symlinks of default.myapp.server.internal back to the old version.

                                                    Now, that’s great for an app that’s theoretically aiming for five-nines uptime and full replacability. But the deploy process for my blog is ultimately really just rsync -avz --delete. It really just comes down to what you’re trying to do and what your constraints are.
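
                                                    The same first few steps translate to git roughly like this - the revision, paths, and file names here are illustrative, not the original ones:

                                                    ```shell
                                                    #!/bin/sh
                                                    # Step 1: pristine archive of the target revision.
                                                    REV=1a2b3c4d
                                                    git archive --format=tar.gz -o "/tmp/myapp-$REV.tar.gz" "$REV"

                                                    # Step 2: unpack into a per-revision directory on each app server.
                                                    mkdir -p "/srv/apps/myapp/$REV"
                                                    tar -xzf "/tmp/myapp-$REV.tar.gz" -C "/srv/apps/myapp/$REV"

                                                    # Step 3: one virtualenv per requirements.txt hash, so deploys
                                                    # that don't change dependencies reuse the existing environment.
                                                    REQ_SHA=$(sha1sum requirements.txt | cut -d' ' -f1)
                                                    VENV="/srv/virtualenvs/$REQ_SHA"
                                                    [ -d "$VENV" ] || python3 -m venv "$VENV"
                                                    "$VENV/bin/pip" install -q -r requirements.txt
                                                    ```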

                                                    1. 7

                                                      I doubt you’ll find consistent views, which makes that the opposite of a stupid question.

                                                      My ideal deployment pipeline looks something like the following:

                                                      • Deployable artifacts are built directly from source control by an automated system (e.g. Jenkins).
                                                      • Ideally, some sort of gate is in place to ensure code review has occurred before a deployable artifact is built (e.g. Gerrit, though that project is very dogmatic, and while I don’t disagree with it, I don’t strongly stand behind it either).
                                                        • CI builds off unreviewed commits are fine, but I would consider them a developer nicety, not a part of the deployment pipeline.
                                                      • Deployable artifacts are stored somewhere. Only the build tool should be able to write to it, but anyone should be able to read from it. (I don’t care what this looks like. Personally, I’d probably just use a file server.)
                                                      • Deployment into a staging environment is one-click, or possibly automatic, from the artifact store.
                                                      • Deployment into a production environment is one-click from the staging environment. The application must have successfully deployed into staging to be deployed into prod. Ideally, the application must go through some QA in staging to be deployed to prod, but that’s a process concern more than a technical one.
                                                        • Operational personnel need to be able to bypass QA (in fact, bypass almost all of this pipeline) in outage situations.

                                                      Note that I’m coming at this from the server-side perspective; “deploy into” means something different for desktop/client software, but I think the overall flow should still work (though I’ve never professionally developed client software, so I don’t know for sure).

                                                      1. 3

                                                        We do:

                                                        1. Tests are run, gpg signatures on commits checked, we are ready to deploy!
                                                        2. Create a clean export of the revision we are deploying: hg archive -r 1a2b3c4d build/
                                                        3. Dump the revision # and other build info into a version file in build/; this is in JSON format as an object ({}).
                                                        4. Shove this into a docker image: docker build -t $(JOB_NAME):$(BUILD_NUMBER) . and push it to internal docker registry.
                                                        5. Update nomad(hashicorp product) config file to point to the new $(BUILD_NUMBER) via sed: sed -e "s/@@BUILD_NUMBER@@/$(BUILD_NUMBER)/g" $(JOB_NAME).nomad.sed >$(JOB_NAME).nomad.
                                                        6. Do the same as previous step but for the environment we will be running in (dev, test, prod) if required.
                                                        7. nomad run $(JOB_NAME).nomad

                                                        Nomad will handle dumping vault secrets, config information, etc from the template directive in the config file. So Configuration happens outside of the repo, and lives in Vault and Consul.

                                                        You can tell by the env variables that we use Jenkins :) Different CI/CD systems will have different variables. If you’re unfamiliar with Jenkins: BUILD_NUMBER is just an integer count of how many builds Jenkins has done for that job, and JOB_NAME is just the name you gave the job inside of Jenkins.
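
                                                        For the curious, here’s a self-contained demo of the sed templating in steps 5 and 6 - the job file contents, build number, and mode are stand-ins, not the real config:

                                                        ```shell
                                                        #!/bin/sh
                                                        # Stand-ins for what Jenkins would provide.
                                                        JOB_NAME=myapp
                                                        BUILD_NUMBER=42
                                                        MODE=dev

                                                        # A template with @@...@@ placeholders, as in the pipeline above.
                                                        printf '%s\n' \
                                                          'task "server" {' \
                                                          '  config { image = "myapp:@@BUILD_NUMBER@@" }' \
                                                          '  env { MODE = "@@MODE@@" }' \
                                                          '}' > "$JOB_NAME.nomad.sed"

                                                        # Pin the image to this build and bake in the environment.
                                                        sed -e "s/@@BUILD_NUMBER@@/$BUILD_NUMBER/g" \
                                                            -e "s/@@MODE@@/$MODE/g" \
                                                            "$JOB_NAME.nomad.sed" > "$JOB_NAME.nomad"
                                                        ```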

                                                        1. 2

                                                          This is way off topic, but I’d love to hear why you went with Nomad and how it’s been working for you. It seems to fill the same niche as Kubernetes, but I hear practically nothing about it—even at shops using Packer, Terraform, and other Hashicorp products.

                                                          1. 4

                                                            We started with Nomad before Kubernetes was a huge thing, i.e. we heard about Nomad first. But I wouldn’t change that decision now, looking back. Kubernetes is complicated. Operationally it’s a giant pain. I mean it’s awesome, but it’s a maintenance burden. Nomad is operationally simple.

                                                            Also Nomad runs things outside of docker just fine, so we can effectively replace supervisor, runit, systemd, etc. with Nomad. Not that I remotely suggest actually replacing systemd/PID 1 with Nomad, but all the daemons and services you normally run on top of your box can be put under Nomad, so you have one way of deploying, regardless of how it runs. I.e. Postgres tends to work better on bare hardware, since it’s very resource intensive, but with the Nomad exec driver it runs on bare hardware under Nomad perfectly fine, and gives us one place to handle logs, service discovery, process management, etc. I think maybe the newer versions of Kubernetes can sort of do that now, but I don’t think it’s remotely easy - I don’t really keep up, though.

                                                            But mostly the maintenance burden. I’ve never heard anyone say Kubernetes is easy to set up or babysit. Nomad is ridiculously easy to babysit. It’s the same reason Go is popular: it’s a fairly boring, simple language, complexity-wise. This is its main feature.

                                                            1. 2

                                                              Thanks for the write up! Definitely makes me want to take another look at it.

                                                          2. 1

                                                            Do the same as previous step but for the environment we will be running in (dev, test, prod) if required.

                                                             Could you elaborate on this step? This is the one that confuses me the most all the time…

                                                            1. 2

                                                              Inside of Jenkins job config we have an ENV variable called MODE and it is an enum, one of: dev, test, prod

                                                              Maybe you can derive it from the job-name, but the point is you need 1 place to define if it will run in dev/test/prod mode.

                                                               So if I need to build differently for dev, test or prod (say, for new dependencies coming in or something), I can.

                                                               That same MODE env variable is pushed into the nomad config: env { MODE = "dev" }. It’s put there by sed, identically to how I put the $(BUILD_NUMBER).

                                                               And also, if there are config changes needed to the nomad config file based on environment - say the template needs to change to pull from the ‘dev’ config store instead of the ‘prod’ config store, or it gets a development vault policy instead of a production one - I also do these with sed, but you could use consul-template or some other templating language if you wanted. Why sed? Because it’s always there and very reliable; it’s had 40 years of battle testing.

                                                               So when the nomad job starts, it will be in the process’s environment. The program can then, if needed, act based on the mode in which it’s running - like, say, turning on feature flags under testing.

                                                              Obviously all of these mode specific changes should be done sparingly, you want dev, test, prod to behave as identically as possible, but there are always gotchas here and there.

                                                              Let me know if you have further questions!

                                                              1. 2

                                                                Thank you very much! Helps a lot!

                                                          3. 2

                                                            What does a great deployment pipeline look like?

                                                             I do a “git push” from the development box into a test repo on the server. There, a post-update hook checks out the files and does any other required operation, after which it runs some quick tests. If those tests pass, the hook pushes to the production repo, where another post-update hook does the needful, including a true graceful reload of the application servers.

                                                            If those tests fail, I get an email and the buggy code doesn’t get into production. The fact that no other developer can push their code into production while the codebase is buggy is considered a feature.

                                                            Since I expect continuous integration to look like my setup, I don’t see the point of out-of-band testing that tells you that the code that reached production a few minutes ago is broken.
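
                                                             A sketch of what that post-update hook in the test repo could look like - the paths and the test script name here are hypothetical:

                                                             ```shell
                                                             #!/bin/sh
                                                             # Hypothetical post-update hook in the bare test repo.
                                                             set -e

                                                             # Check out the pushed code into the test working tree.
                                                             git --work-tree=/srv/myapp-test checkout -f master

                                                             # Run the quick tests; any non-zero exit stops the
                                                             # pipeline here, so buggy code never reaches production.
                                                             (cd /srv/myapp-test && ./run-quick-tests.sh)

                                                             # Tests passed: forward to the production repo, whose own
                                                             # post-update hook does the graceful reload.
                                                             git push /srv/repos/myapp-prod.git master
                                                             ```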

                                                            1. 1

                                                              The setup we use is not even advanced, but simply resilient against all the annoyances we’ve encountered over time running in production.

                                                               I don’t really understand the description underneath “the right pattern” in the article. It seems weird to have a deploy tree you reuse every time.

                                                               Make a clean checkout every time. You can still use a local git mirror to save on data fetched. Jenkins does this right, as long as you add the cleanup step in checkout behaviour.

                                                              From there, build a package, and describe the environment it runs in as best as possible. Or just make fewer assumptions about the environment.

                                                              This is where we use a lot of Docker. The learning curve is steep, it’s not always easy, there are trade offs. But it forces you to think about your env, and the versions of your code are already nicely contained in the image.

                                                              (Another common path is unpacking in subdirs of a ‘versions’ dir, then having a ‘current’ symlink you can swap. I believe this is what Capistrano does, mentioned in the article. Expect trouble if you’re deploying PHP.)

                                                              I’ll also agree with the article that you should be able to identify what you deploy. Stick something produced by git describe in a ‘version’ file at your package root.

                                                              Maybe I’m missing a lot here, but I consider it project specific details you just have to wrestle with, in order to find what works. I’ve yet to find a reason to look into more fancy stuff like Kubernetes and whatnot.
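
                                                               That version stamp can be a one-liner at the package root:

                                                               ```shell
                                                               #!/bin/sh
                                                               # Stamp the build with a human-readable version; --always
                                                               # falls back to a bare SHA when no tag exists, and --dirty
                                                               # flags uncommitted changes.
                                                               git describe --tags --always --dirty > version
                                                               ```

                                                               The output looks like v1.4.2-7-g3e1a9c0: the last tag, the number of commits since it, and the abbreviated SHA.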

                                                              1. 1

                                                                 I think the point kaiju’s thread wants to make is that you shouldn’t be deploying from your local machine, since every developer’s environment will differ slightly and those artifacts might cause a bad build when sent to production. I believe the normal way is to have the shared repo server build/test and deploy on a push hook, so that the environment is the same each time.

                                                              1. 1

                                                                This article is nearly 2 years old now. Am I missing something?

                                                                1. 1

                                                                  This article ( is quite recent, and refers back to the link here, so I’m guessing this is what prompted this post?

                                                                1. 7

                                                                  I play guitar and bass. Or really, I learn to play guitar and bass. I played saxophone for 10 years previously, and by comparison there seems to be so much to learn about and around guitar that I’ll be a perpetual student. This might be a long shot, but if there are any other musicians here from the Toronto area, I’d be down to meet up and jam.

                                                                  The other thing I’m interested in is climate change. A couple months ago I decided that understanding and taking some action against climate change is important to me. Right now, I’m doing a lot of research and reading to try and understand what kinds of problems I might be able to help with.

                                                                  1. 3

                                                                    I’d love to hear your conclusions from your climate-change impact investigations whenever you’ve made them :) This is also something I’m interested in.

                                                                    1. 1

                                                                      For sure! One of the things that I’ve noticed from investigations so far is that there are a lot of people who care about this but don’t know what to do. I’m following a 100:10:1 process right now, generating a big list of ideas, then picking a smaller subset to research in depth.

                                                                      An interesting place to start might be this post from Bret Victor:!/ClimateChange

                                                                      1. 1

                                                                        Hey, did you make any progress here? :)

                                                                  1. 9


                                                                    1. 14

                                                                      Ah, another Javascript developer I see!

                                                                      1. 5

                                                                        I’m here to listen if it’ll help.

                                                                        1. 6

                                                                          It’s light-hearted, don’t worry; thank you ❤️

                                                                      1. 7

                                                                        We recently had a potential client approach our sales team about purchasing our SaaS product, but the client had a hard requirement that our product work on IPv6-only networks. We were IPv4-only at the time, so we decided to investigate how easy it would be to add v6 support. Turns out, it’s a mixed bag.

                                                                        It was easy enough to add v6 support to our application - all we needed to do was add an AAAA record so that our domain properly resolved to the v6 address of our load balancer. But a number of the external services that we depend on haven’t migrated yet, so our application won’t work properly unless we can convince them to migrate as well, or we add a proxy layer to our stack.

                                                                        And it turned out that testing our changes was a challenge because the ISP for our office doesn’t support IPv6. In fact, my ISP at home doesn’t support v6 either. We also tried from a BrowserStack environment, but their VMs are IPv4-only as well. (One thing we have yet to try is an Amazon Workspace.)

                                                                        In the end, we were able to test when one of our developers noticed that his phone supported IPv6. We created a wi-fi hotspot on the phone, connected to that, and turned off IPv4 on the test machine. But wow, I’m surprised at how far away the world still seems to be from supporting IPv6.
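                                                                        For anyone attempting the same kind of check, here’s a rough sketch using Ruby’s standard library (the function name is mine; a passing DNS check still doesn’t guarantee you actually have an IPv6 route to the host):

```ruby
require 'socket'

# A rough DNS-level check: does the hostname resolve to any IPv6 address?
# Results depend on the system resolver's configuration.
def has_ipv6_address?(hostname)
  Addrinfo.getaddrinfo(hostname, nil).any?(&:ipv6?)
rescue SocketError
  false
end

# Literal addresses can be classified without a network lookup:
Addrinfo.ip('::1').ipv6?        # => true
Addrinfo.ip('127.0.0.1').ipv6?  # => false
```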

                                                                        1. 1

                                                                          As another data point, when I was at RIM, we built the first touchscreen BlackBerry - the much maligned Storm - in 9 or 10 months, if I recall correctly. A normal dev cycle for a device at that time was about 12 months.

                                                                          The reason for the rush on the Storm was because the iPhone was an AT&T exclusive, and Verizon was pressuring us to make an iPhone competitor for them as soon as possible.

                                                                          1. 23

                                                                            My favorite tactic for “killing” these is (to use the example from the post):

                                                                            # e.g. "hello everyone" => "Hello Everyone"
                                                                            def upcase_words(sentence)
                                                                              sentence.split(' ').map { |x| x[0..0].upcase << x[1..-1] }.join(' ')
                                                                            end

                                                                            In an ideal world the name is clear enough that someone reading the code at the call site understands what’s happening, and if they don’t the example alongside the definition hopefully gets them there.

                                                                            1. 6

                                                                              You mean

                                                                              # e.g. "col1\tcol2\n    ^ woah" => "Col1 Col2 ^ Woah"

                                                                              Naming it hurts in this case, because the function does not do what you named it (e.g. in a string of tab-separated values, or a string where multiple spaces are used for formatting). If you had to name it, it would be better named as split_on_whitespace_then_upcase_first_letter_and_join or leave it unnamed and hope that everyone on your team knows that split in Ruby doesn’t work as expected.

                                                                              The best solution is one that embodies exactly what you intend for it to do, i.e. substitute the first letter of each word with the upper case version of itself. In Ruby, that would be:

                                                                              sentence.gsub(/(\b.)/) { |x| x.upcase }
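                                                                              And unlike the split-and-join version, this one preserves the original whitespace (reusing the example strings from above):

```ruby
"hello everyone".gsub(/(\b.)/) { |x| x.upcase }
# => "Hello Everyone"

"col1\tcol2\n    ^ woah".gsub(/(\b.)/) { |x| x.upcase }
# => "Col1\tCol2\n    ^ Woah" -- tabs, newlines, and runs of spaces survive
```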
                                                                              1. 6

                                                                                If you had to name it, it would be better named as split_on_whitespace_then_upcase_first_letter_and_join or leave it unnamed and hope that everyone on your team knows that split in Ruby doesn’t work as expected.

                                                                                I disagree. You should name functions and methods based on what they’re supposed to do. If it does something else, then everyone can see it is a bug.

                                                                                1. 1

                                                                                  I don’t agree with your naming system. I think the name of your function should describe what it does instead of how it does it. If your function name describes how it’s implemented, you have a leaky abstraction.

                                                                                2. 6

                                                                                  Among other benefits, giving it a name means we can explode the code without worrying about a few extra lines in the middle of the caller.

                                                                                  words = sentence.split ' '
                                                                                  words.each { |w| w[0] = w[0].upcase }
                                                                                  sentence = words.join ' '

                                                                                  Introducing a variable called ‘words’ is a solid hint about the unit we’re working with. We may not want to pollute the caller with a new variable, but in a subroutine that’s not a problem.

                                                                                  1. 3

                                                                                    Naming it does help in this case, but mostly because the reader no longer has to scrutinize what it’s actually doing. Isn’t this sort of like polishing a turd?

                                                                                    1. 1

                                                                                      That only masks the issue.

                                                                                      Any maintenance on that line will still have the same problems, whereas refactoring it to split it up into smaller segments AND giving it a name avoids that issue.

                                                                                      1. 3

                                                                                        It gives the reader a good frame of reference for what the function’s doing. Context helps a lot when trying to read code, and although this isn’t as readable as it could be yet, it’s definitely a lot more readable than the bare one-liner minus the function signature.

                                                                                      2. 1

                                                                                        A kind of offtopic question based on this comment.

                                                                                        Would I use Coq to prove this function?

                                                                                      1. 1

                                                                                        I’m continuing to teach myself about web performance testing. It’s weird that there are tons of blog posts and books about unit testing, integration testing, etc., but almost nothing I can find about performance testing. Anyway, the big breakthrough last week was a one-line change. We were already using New Relic for monitoring, and just updating from a year-old version of their Node library gave me a bunch of more detailed measurements that show more accurately where our app spends most of its time.

                                                                                        Outside work, I’m still prepping for my talk at the local WebGL meetup. Last year I read Robert McKee’s book about screenwriting, Story, so I’m curious to see if applying some of those ideas to my presentation makes a more compelling talk.

                                                                                        1. 2

                                                                                          Might be worth checking out Gatling for web performance testing. I’ve had good experience with it.

                                                                                          1. 1

                                                                                            Thanks for the recommendation, I’ll have a look!

                                                                                        1. 13

                                                                                          Of these books I’ve only read “Silence on the Wire” by Michal Zalewski. But that one alone is, I think, worth the $15 minimum of the bundle. It’s all about side-channel attacks and discusses things like timing attacks, monitoring network traffic from the blinking LED on the back of the NIC, or even reconstructing passwords from an audio recording of keyboard strokes.

                                                                                          bunnie’s book “Hacking the Xbox” is a classic, though he’s also made it available for free here:

                                                                                          I’d also like to finally read “Hacking: The Art of Exploitation” and “The Smart Girl’s Guide to Privacy” - two books I’ve heard good things about, but never did get around to reading.

                                                                                          The rest of the books I haven’t heard of, but just those 4 I mentioned are probably worth your time.

                                                                                          1. 7

                                                                                            The books available look like interesting reads. I wish I had the cash to get this bundle.

                                                                                            1. 22

                                                                                              I got a little raise at work last week so lemme share the love. Check your e-mail!

                                                                                              1. 7

                                                                                                OMG thank you so much, this is really amazing. I am so excited to read them!

                                                                                              2. 6

                                                                                                It’s pay what you want for the first set of books, or $15 for all of them. Heck, I feel like I shouldn’t say this, and I don’t know you, but if you really have <$15 and promise to read at least part of each book I’ll buy you the whole bundle if you contact me privately.

                                                                                                1. 5

                                                                                                  Thank you for the kind offer. It really makes me feel like the world is a better place with so many generous people. I was sent a bundle in the mail, and I will definitely read every book. I have been dealing with some serious health issues and it is a struggle financially.

                                                                                                2. 3

                                                                                                  Some of the books are released under the Creative Commons and are available online, released for free legally by their authors/publishers. I know Automate the Boring Stuff ( and Hacking the Xbox ( are among those.

                                                                                                  1. 4

                                                                                                    Thanks for the links! I am an advocate of free culture and it is great to know that more people are releasing works as CC.

                                                                                                1. 4

                                                                                                  First time posting in this thread for me!

                                                                                                  At work, I’m trying to give myself a crash course in web application performance measurement, focusing mainly on the backend for now. I’m really starting from zero here. I know vaguely that I want to know how long the backend takes to respond to each request, and I’d like to know how long the requests to each of the endpoint’s dependencies take. I’d also like to figure out what specifically causes the exponential increase in response times as the API servers reach maximum capacity, and where the bottleneck comes from. It seems that there are almost no resources around explaining best practices for making these kinds of measurements.
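                                                                                                  As a crude starting point for the per-request timing, something like this Rack middleware sketch is what I have in mind (the class name is made up; APM tools like New Relic do a much more detailed version of this under the hood):

```ruby
# A minimal request-timing middleware sketch for any Rack-based app.
class RequestTimer
  def initialize(app)
    @app = app
  end

  def call(env)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    response = @app.call(env)
    elapsed_ms = (Process.clock_gettime(Process::CLOCK_MONOTONIC) - start) * 1000.0
    # Log method, path, and wall-clock duration to stderr.
    warn format('%s %s took %.1f ms', env['REQUEST_METHOD'], env['PATH_INFO'], elapsed_ms)
    response
  end
end
```

It passes the downstream response through untouched, so it can wrap an app without changing behavior.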

                                                                                                  Outside of work, I’m busy preparing for a talk I’ll be giving at my city’s WebGL meetup next week. I’m working on documenting and refactoring a project that I wrote last year, which I’ll be using to explain how I created some of the effects I used.