1. 2

    There’s also gistup, which converts the directory to a git repository and includes gistup-open to open the gists in your browser (like hub open).

    https://github.com/mbostock/gistup

      1. 6

        This article feels like it’s rationalizing trading window.{{library}} for context.{{library}}. In the end, we avoid both scenarios, as both pollute a namespace without being explicitly used in the file. After 20 different dependencies, it gets unwieldy and can lead to unnecessary conflicts.

        Instead, we prefer per-file requires/includes (which are mentioned in the article then handwavingly dismissed) as they keep everything explicit. In the case of removing a common dependency across many files, linters will typically complain about unused dependencies, and git grep/sed are our friends. In my experience, this is rarely an issue, although sometimes I would wish for a custom DSL in tests (which I’ve since learned to do pre-emptively).

        1. 1

          This is like the “monkey-patch everything onto jQuery.prototype even if it doesn’t have anything to do with jQuery” fever that only recently broke.

          1. 13

            I was hoping for a section on “do use these things: ENV variables that hold filenames that contain secrets” or something like that. I don’t use docker, but I would like to keep secrets out of environment variables. What are good ways to do that?

            1. 7

              Generally you bake an encrypted file into your image; that file is read and decrypted on app start. You can fetch the key to decrypt the file from Vault or similar.
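
              A minimal sketch of that pattern, assuming a hypothetical secrets.json.enc baked into the image and a Vault KV path of secret/myapp (names are illustrative, not from the post):

              #!/bin/sh
              # Hypothetical entrypoint: fetch the decryption key from Vault, decrypt the
              # baked-in secrets file to a runtime-only path, then start the app
              set -eu
              KEY="$(vault kv get -field=decryption_key secret/myapp)"
              printf '%s' "$KEY" | openssl enc -d -aes-256-cbc -pbkdf2 -pass stdin \
                -in /app/secrets.json.enc -out /run/secrets/secrets.json
              exec /app/server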

              If you use Kubernetes, it supports exposing secrets as files, as prescribed by the post.

              1. 3

                A great tool for this is SOPS, which supports both PGP and AWS’s KMS:

                https://github.com/mozilla/sops
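
                For reference, typical usage looks roughly like this (assuming the PGP/KMS keys are already configured in a .sops.yaml; file names are hypothetical):

                sops --encrypt secrets.yaml > secrets.enc.yaml   # encrypt before committing
                sops --decrypt secrets.enc.yaml > secrets.yaml   # decrypt at deploy/app start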

            1. 2

              Note that this is a programmer-lawyer, not just any copyright lawyer. So it’s not an “outsider’s perspective”; it’s the perspective of someone who programs (and is well in sync with the open source community). That person just happens to also be a lawyer, so they know exactly what they are talking about.

              Kyle’s past posts on open source licenses are pretty good too.

              1. 2

                I was trying to figure out how best to state that in the title. “Technical lawyer” felt off to me; “programmer-lawyer” feels much better.

              1. 3

                This fails to address the key security issues Homakov also failed to address when he proposed the same thing:

                • Email is not secure by design. It can be secure, but it isn’t in all cases. This solution basically pushes the security requirements onto email services with a “not my problem, bro” attitude.

                • If someone breaches your mailbox, they can have persistent access to this service without you knowing. With the password reset attack vector, you at least learn about a breach because you can no longer log in with your existing credentials.

                Also, claiming that password managers could be replaced by a browser extension (that doesn’t exist currently) is like saying I don’t need a scuba tank, I could breathe underwater with a set of gills transplanted from a shark, if only someone would invent that procedure.

                This also doesn’t work well for mail servers that use greylisting, or for any of those times when traffic between the server, your email server, and you is negatively affected.

                1. 2

                  After some searching, I think you’re referring to https://sakurity.com/blog/2015/04/10/email_password_manager.html. Is that right?

                  With respect to pull-based breach detection, you bring up a good point. It’s more of an afterthought for sites with a large turnaround time like ours (i.e. it could be up to 7 days from the last visit due to no job search activity), but in the general case, you’re right that this should be solved.

                  At its root, we want to tell the user things happened when they revisit the site (a pull-based notification if you will). What are your thoughts on a solution like listing new sessions on a per-device basis? (e.g. revisit site, receive standalone page saying to confirm all other new sessions are valid, happens on every device so 1 device can’t auto-approve all others)

                  1. 2

                    Yes, that’s the post I’m referring to, which was followed by a lengthy back-and-forth on Twitter, where the author freely admitted that mailbox security isn’t fantastic, but that it isn’t his concern.

                    Honestly I don’t think saying “someone logged into your account from X, was this you?” is going to work. Most users will soon get so sick of the constant messaging they’ll just blindly click OK.

                    As I tried to explain to Homakov, passwords fundamentally do work - they just have shitty UX in most cases.

                    Try using Safari + iCloud Keychain for a few days. It’s a great example of how well a password manager can operate.

                    1. 2

                      Alerts like this can still be helpful, as long as they’re infrequent - for example, only from devices you haven’t used before.

                      1. 2

                        I’m confused where the repetition would come from such that users become desensitized to the messaging. It should only be shown when there’s a new device login. We persist sessions for 2 weeks from their last activity, and while our application has a large turnaround, it should still be within 2 weeks.

                        With respect to passwords fundamentally working/password managers, I think you might be missing the goal of this movement. While I use a password manager, I can’t force the average non-technical user to do the same. This movement is about minimizing the impact of a leak, because we don’t live in a perfect world and error-prone humans are the ones writing authentication code/maintaining the servers. If it were to catch on, then it would have a cascading effect where passwords are reserved for a handful of services (e.g. email providers) – enforced by the services themselves.

                        For reference, my search for “How common is password reuse” came up with this 2015 survey sampling 2030 US residents (unable to find anything larger). It shows 59% of people (in this sample set) reuse passwords.

                  1. 9

                    A number of times I’ve signed up on a site where the account can’t be logged into until the verification link comes through e-mail, and the e-mail either never arrives or comes with significant delay (20-60 minutes). If I had to go through that every time I logged in, I would not use that site.

                    1. 4

                      That was the kind of insight I was blind to and looking for from writing this article. Thanks =)

                      1. 1

                        even if you rarely logged in, because the site allows you to stay logged in?

                        1. 2

                          Some people choose not to stay logged in to every site all the time, either by logging out when done or by blunt daily cookie clearing.

                          1. 1

                            If we’re starting a list of people who clear their cookies regularly, I’m on it. Irregularly, but on average every other day or so. Cookies accumulate a lot of tracking information very fast, so it’s only reasonable to wipe them regularly.

                      1. 1

                        The linked CI site, http://spellingci.inakalabs.com/, seems to be behind HTTP authentication =/

                        1. 16

                          Being a specialist and a generalist aren’t mutually exclusive. I prefer to follow Valve’s T-shaped model or Kent Beck’s “paint drip people” approach: specialize in at least 1 area and have broad general knowledge in others. This way you can gather more specialties from peers/experience, help out on general tasks, and let your peers grow by sharing your specialty knowledge.

                          References:

                          1. 0

                            See, if we wanted to make the world a better place, everyone would pick a month to submit patch requests to all open source projects to replace their config files with JSON and to add a converter for legacy configs.

                            And yes, I also want a pony. And world peace.

                            1. 30

                              Wait, making every configuration file JSON would make the world a better place? I think it would make it far worse! A lack of comments in my configuration file makes future me confused and frustrated at past me.

                              1. 5

                                When I’m stuck with JSON-for-config, I’ll sometimes duplicate keys in the file - putting the comment in the first one. Every JSON decoder I’ve used so far ignores the first value and takes the second one.

                                I’m well aware of how insane that sounds, but I’d rather have comments than sensible files.
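
                                For what it’s worth, a quick way to check the duplicate-key behavior is a jq one-liner; every parser I’ve tried keeps the last value, though the spec leaves it undefined (key names here are hypothetical):

                                echo '{"port": "Port the HTTP server listens on", "port": 8080}' | jq .port
                                # 8080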

                                1. 3

                                  I do something similar but make it an _{{key}}-comment (e.g. password has a sibling _password-comment before it). This works great in config files as it can be ignored by the application. It does fall apart in something like package dependencies though =(
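
                                  A rough sketch of the “ignored by the application” part, assuming the convention that comment keys start with an underscore (using jq purely for illustration):

                                  # Strip sibling comment keys before handing the config to the application
                                  jq 'with_entries(select(.key | startswith("_") | not))' config.json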

                                  1. 3

                                    I do something similar, except I duplicate a specific key, so order is not important:

                                    {
                                      "//": "Here's a comment",
                                      …
                                      "//": "Another comment"
                                    }
                                    

                                    But yeah, I’d rather just avoid JSON for such things.

                                    1. 1

                                      Some of these problems are addressed by JSON5: http://json5.org/. There is obviously the whole https://xkcd.com/927/ problem, but IMO it’s not as bad as it is with something dramatically different like TOML.

                                      But in general I think it’s much better for a whole host of reasons to use Lua for configuring end-user applications.

                                      1. 1

                                        JSON5

                                        Kind of reminds me of UCL.

                                    2. 2

                                      One problem with this, and with @alva’s solution below, is that the comments won’t survive deserialization/serialization in a meaningful way. @twolfson’s solution resolves that, though there’s still no guarantee the comment key is serialized anywhere near the key it is meant to be commenting on.

                                    3. 1

                                      Having all configuration files in the same format would make the world a better place.

                                      JSON does have some technical issues though, such as inability to represent all floating point values.
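
                                      For example, NaN and Infinity aren’t representable in standard JSON, so a strict encoder refuses them (one-liner using Python’s json module):

                                      python3 -c 'import json; print(json.dumps(float("inf"), allow_nan=False))'
                                      # ValueError: Out of range float values are not JSON compliant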

                                      1. -1

                                        Is the world really better off having different formats for grub, xorg, Apache, nginx, rust, npm, and all the other myriad ways of storing the same flavor of data?

                                        1. 7

                                          No, but there are enough existing standards that don’t suck as much as JSON for configuration data.

                                          1. 1

                                            Yeah, all three of the systems suggested in the OP are better than JSON. I believe YAML has some parsing issues, but the other two look very clean and usable.

                                            1. -1

                                              Other than the common “muh comments” complaint, why do you think JSON sucks for configuration data?

                                              1. 12

                                                Two reasons that come to mind for me are: you get a lot of syntactic ceremony that other formats don’t require (every file starting with ‘{’, string quoting, commas between list items, etc.) and the selection of types is odd. Your JSON parser is going to convert bare numbers to numbers but you have to quote your strings. What do you do if you have other types, like URLs or date/times or something particular to your application? You’ll have to quote them as strings and then do a second pass over your JSON configuration object to convert it to something else. What happens if your language has a more interesting suite of numeric types than JSON does? Do you have to tell people to quote their numbers so that you can parse them properly? The behavior is hidden from you by your JSON parser, so you aren’t likely to be able to detect when something like this has gone wrong.

                                                I like my code to require a certain amount of ceremony to catch problems before running, but I like my configuration files to be fairly lenient; if I can recover something from them, I can alert the user that the parse failed on these items or whatever and proceed. These options get narrowed when you conflate a programming language with a configuration language.

                                                In fact, even my JSON parser, I want it to be strict when I’m dealing with user input or form submissions or whatever, but if I’m reading a file like ~/.foorc, I want to be lenient. Does your JSON parser have options for that?

                                            2. 5

                                              Just because there is not one standard doesn’t mean that JSON is a suitable one. In fact, I would even prefer XML over JSON for this particular use case, just because it has comments.

                                          2. 7

                                            JSON config files?! Do you have a shrine to Satan in your house as well?

                                            1. 1

                                              I might not mind if most JSON parsers didn’t suck so totally when it comes to error messages. Rather, I would prefer a good config parser which generates error messages and documentation WITH EXAMPLES.

                                            1. 12

                                              Feelings:

                                              • Things should be colorless. Email the output if it failed? Better be colorless. TERM set up wrong due to a screen -> ssh -> screen loop de loop? Better be colorless. Another command grepping/sedding output? Better be colorless.
                                              • Never use set -e; check each command’s success/failure individually by using mkpipe/mkfifo, as bash does not discriminate on piped failures (just $? == some return code). Rc and its related shells set $status to an array, so you can see which commands of the piped command exited with which statuses. You’ll at least want to set -o pipefail as well if you depend on the early exit behavior of bash (see the sketch after this list).
                                              • Never source scripts; have them output commands that you eval later (or don’t do it at all). The reason for this is that it leads to all the same hell that cascading style sheets do.
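
                                              The closest bash analogue to rc’s $status array is PIPESTATUS, which holds one exit code per command in the last pipeline; a minimal sketch:

                                              #!/usr/bin/env bash
                                              set -o pipefail
                                              # A pipeline where the first and last commands fail
                                              false | true | grep foo /dev/null
                                              # PIPESTATUS has one entry per command in the pipe: "1 0 1"
                                              echo "${PIPESTATUS[@]}"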

                                              My suggestion: use a real programming language, avoid shell scripts to create programmatic environments. Shells should always be the top imperative layer that controls execution flow, never the framework.

                                              Sorry if this is a harsh reaction, obviously it’s neat to see all the colors and spinning doo-dads, but I think it’s a bad choice.

                                              1. 5

                                                I prefer set -euo pipefail to mkpipe/mkfifo.

                                                Additionally, if you’re writing a shell script without running shellcheck over it, you are either a much better programmer than I am or you’re asking for trouble.

                                                For instance, shellcheck finds the following in the script you posted (a sketch of the fixes follows the list):

                                                • You’ve used /bin/sh rather than bash, but the [[ ]] syntax is undefined in POSIX sh
                                                • You’ve used printf with variable interpolation - might as well just use echo or use printf with %s and multiple arguments
                                                • You’ve defined GREEN but not used it
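
                                                A rough sketch of how those findings might be addressed, assuming the script only needs to run under bash (variable names are illustrative):

                                                #!/usr/bin/env bash
                                                # Declare bash explicitly so [[ ]] is defined; fail on errors, unset vars, and pipe failures
                                                set -euo pipefail
                                                GREEN="$(tput setaf 2)"
                                                RESET="$(tput sgr0)"
                                                # Pass variables as printf arguments instead of interpolating into the format string
                                                printf '%sAll checks passed%s\n' "$GREEN" "$RESET"
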
                                                1. 1

                                                  If you need shellcheck for your shell script, then you should probably use a better language.

                                                  Does pipefail let you check the exit status of individual portions of a piped command, or does it just bail out?

                                                2. 5

                                                  I did find the juxtaposition of the sombre and sensible Rule of Silence section with the craaazy colouring and gratuitous emoji in the script immediately following it more than a bit weird. Though on second reading it’s not as bad as my first impression led me to believe. The emoji look cute, the hourglass in particular, but I think I prefer an easily grep-able Warn/Info/Err in general.

                                                  1. 4

                                                    In my opinion, shell scripts are a great way to get a development environment up and running. They work across developers, since people use them daily, and they don’t require external bindings themselves.

                                                    With respect to your points:

                                                    • Colorless can be solved by fallback logic that uses empty strings if the terminal doesn’t support colors (see the sketch after this list)
                                                    • set -e is practical, as developers can easily forget to check error codes. In every other language, we throw when there’s a syntax error or missing function; I think everyone expects the same from sh/bash
                                                      • I think you would prefer to use set -o pipefail to catch pipe failures as well
                                                    • I concur with not sourcing scripts; polluting a script’s variables from other scripts is a pretty bad idea =/
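
                                                    The fallback sketch I have in mind looks roughly like this (empty strings whenever stdout isn’t a color-capable terminal):

                                                    # Only emit color codes when stdout is a terminal that supports them
                                                    if [ -t 1 ] && [ "$(tput colors 2> /dev/null || echo 0)" -ge 8 ]; then
                                                      GREEN="$(tput setaf 2)"
                                                      RESET="$(tput sgr0)"
                                                    else
                                                      GREEN=""
                                                      RESET=""
                                                    fi
                                                    printf '%sDone%s\n' "$GREEN" "$RESET"
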
                                                    1. 2

                                                      The coloring fallback logic only works if you have your TERM and pipes set up correctly. This fails in many, oddly practical scenarios, to the point that many tools have “--nocolor” or other options because the fallback logic fails many users.

                                                      set -e and set -o pipefail don’t help the script writer catch and do anything with the failures. In many cases you should be printing what failed, and why it failed. Instead, your script ends early, skipping simple cleanup and forcing the person who ran the script to figure out that it failed and inspect its source code.

                                                      1. 1

                                                        With respect to coloring, you’re right but that’s more on the user than the script itself =/

                                                        With respect to -e and -o pipefail, the same situation happens in programming languages where there is no catch. It hits the top level and errors out the program. Bash has ways to catch/handle these top-level errors via trap. I’ve recently been experimenting with it:

                                                        https://gist.github.com/twolfson/a9c53e711719037334de6df2ef93a9fc

                                                        With respect to cleanup, the same issue will arise no matter the language – if someone SIGINTs a script, it’s unlikely to clean up properly. To handle a SIGINT in bash, it can be done via trap (either for SIGINT itself or EXIT):

                                                        the trap [-lp] [[arg] sigspec ...] command, documented at https://linux.die.net/man/1/bash

                                                        Reference: I use trap to clean up in provisioning scripts that copy data to the remote host over ssh.
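
                                                        A minimal sketch of that cleanup pattern (the temp file here is hypothetical, not from those scripts):

                                                        #!/usr/bin/env bash
                                                        set -euo pipefail
                                                        TMP_FILE="$(mktemp)"
                                                        cleanup() {
                                                          rm -f "$TMP_FILE"
                                                        }
                                                        # Remove the temp file on exit; trapping EXIT also covers set -e failures and a Ctrl-C'd script
                                                        trap cleanup EXIT
                                                        echo "doing work in $TMP_FILE"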

                                                    2. 1

                                                      Sorry if this is a harsh reaction, obviously it’s neat to see all the colors and spinning doo-dads, but I think it’s a bad choice.

                                                      No problem at all, I appreciate your response! I must confess I was swept off my feet by the spinning doo-dads. :)

                                                    1. 2

                                                      For those wondering how this is done (because it looked super scary to me, security-wise, to allow JS to be run in CSS): it’s done via JS hidden in the “External JavaScript” section:

                                                      1. 2

                                                        It’s a neat thought but it’s yet another convention for people to learn/adapt to. In my opinion, the only convention that should be considered is keeping repositories consistent in an organization. Anything outside of that is outside of your control (maybe you can get language ecosystem level consistency but good luck).

                                                        Variations of this:

                                                        • Document workflows/scripts in README with links to docs/* for lengthier tasks
                                                        • bin/*, as per the article
                                                        • Makefile, as per @fzakaria
                                                        • fabfile.py (Fabric in Python)
                                                        • scripts in package.json (npm in Node.js)
                                                        • Rakefile (rake in Ruby)
                                                        1. 8

                                                          For those curious, git has similar built-ins but nothing that allows for staging/unstaging all in 1 command. The corresponding commands:

                                                          # Stage hunks in patch mode
                                                          # DEV: Patch mode is REPL to add/skip/navigate hunks
                                                          git add -p
                                                          
                                                          # Unstage hunks in patch mode
                                                          git reset -p
                                                          
                                                          # Commit hunks in patch mode (TIL this exists)
                                                          git commit -p
                                                          
                                                          # Bonus: Stash hunks in patch mode
                                                          git stash -p
                                                          
                                                          1. 4

                                                            I wouldn’t say core Git has similar built-ins. Sure, you can do interactive selection (and editing) of hunks to commit/stage/unstage/stash, but that is not Crecord’s innovation: Mercurial and Git both already had “Commit this? [yes/no/etc]”-style interactive selection when crecord was created. Crecord’s innovation is its interface. Here, have all three interfaces (crecord, old-style-Mercurial, old-and-current-style Git) side-by-side.

                                                            NB: Please don’t interpret this as ‘nobody else has done this / done this first’. Others have mentioned Magit, f.ex. I merely say crecord is an improvement over core Git’s hunk selection interface. It was an improvement over core Mercurial’s hunk selection interface, too – so much so that it is now in core.

                                                            Crecord. Plain text doesn’t do it justice, go look at the screenshot.

                                                            # git crecord
                                                            # hg commit -i / --interactive
                                                            SELECT CHUNKS: (j/k/up/dn/PgUp/PgDn) move cursor; (space/A) toggle hunk/all; (e)dit hunk;
                                                             (f)old/unfold; (c)onfirm applied; (q)uit; (?) help | [X]=hunk applied **=folded, toggle [a]mend mode
                                                            
                                                            [x]    diff --git a/a b/myfile
                                                                   1 hunks, 2 lines changed
                                                            
                                                               [x]     @@ -0,0 +1,2 @@
                                                                  [x]  +fishfingers
                                                                  [x]  +custard
                                                            

                                                            Mercurial, prior to 3.4 (2015-05-01) – that’s when crecord was moved into core.

                                                            # hg commit -i / --interactive
                                                            diff --git a/myfile b/myfile
                                                            1 hunks, 2 lines changed
                                                            examine changes to 'myfile'? [Ynesfdaq?]
                                                            
                                                            @@ -0,0 +1,2 @@
                                                            +fishfingers
                                                            +custard
                                                            record this change to 'myfile'? [Ynesfdaq?]
                                                            

                                                            Git

                                                            # git commit -p / --patch
                                                            diff --git a/myfile b/myfile
                                                            index e69de29..de98044 100644
                                                            --- a/myfile
                                                            +++ b/myfile
                                                            @@ -0,0 +1,2 @@
                                                            +fishfingers
                                                            +custard
                                                            Stage this hunk [y,n,q,a,d,/,e,?]?
                                                            
                                                            1. 2

                                                              Yep, totally. The interface is much slicker than text-based navigation. My comment was to inform people who might not know git has interactive modes for its commands.

                                                            1. 2

                                                              With respect to image signatures/hashes (as others have mentioned), ImageMagick offers this via its identify command:

                                                              https://www.imagemagick.org/script/identify.php

                                                              Here we display the image texture features, moments, perceptual hash, and the number of unique colors in the image:

                                                              identify -verbose -features 1 -moments -unique image.png

                                                              I’ve been using it only for strict comparisons though so I’m not sure how it will handle resizes/minor differences =/

                                                              Also, for reference I’m on an older version of ImageMagick and was in a performance tight spot so I had to use the -format variant:

                                                              identify -format '%#' image.png
                                                              # abcdefghash
                                                              

                                                              https://www.imagemagick.org/script/escape.php

                                                              %# CALCULATED: ‘signature’ hash of image values
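
                                                              For completeness, the strict comparison I mentioned boils down to something like this (file names are hypothetical):

                                                              # Signatures match only when the decoded pixel data is identical
                                                              sig_a="$(identify -format '%#' a.png)"
                                                              sig_b="$(identify -format '%#' b.png)"
                                                              if [ "$sig_a" = "$sig_b" ]; then
                                                                echo "images are identical"
                                                              fi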

                                                              1. 8

                                                                Something related I learned about a couple weeks ago: bash supports parallel execution via forking and wait. For example:

                                                                # Move multiple files in parallel
                                                                for filepath in *; do
                                                                  mv "$filepath" "$filepath.bak" &
                                                                done
                                                                
                                                                # Wait for child processes to complete
                                                                wait
                                                                
                                                                # More code goes here
                                                                

                                                                This is practical for when you don’t want your coworkers to need to install an external dependency. Unfortunately, it’s not great if you want to preserve output order (e.g. in my case, I was getting image signatures). It could be hacked further to use bash arrays, but in the end parallel was an easier solution.
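
                                                                For reference, the GNU parallel route looks roughly like this (a hypothetical invocation mirroring the image-signature use case; exact flags may vary by version):

                                                                # -k keeps output in input order, one job per file
                                                                parallel -k "identify -format '%#\n' {}" ::: *.png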

                                                                1. 1

                                                                  Interesting! Could this possibly be used then to port Pure to bash?

                                                                  1. 2

                                                                    It already has been? https://github.com/sindresorhus/pure#ports

                                                                    Looking at how pure works really quickly, it’s using zpty and file descriptors to do async communication, not fork/wait. So the simple answer would be no. This is also really basic job control in shells; you can get much more complicated. Note there is no checking of the return codes via $! etc…

                                                                    If you really wanted to, you could use $! to wait on specific background pids. If you wanted to preserve output order, you could keep an array of the pids you backgrounded, save their outputs to either a tempfile or a variable, and then output that at the end. At least that’s the more old-school way of doing it.
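
                                                                    A rough sketch of that old-school approach, reusing the image-signature example from above (assumes bash for arrays):

                                                                    pids=()
                                                                    outfiles=()
                                                                    for filepath in *.png; do
                                                                      outfile="$(mktemp)"
                                                                      identify -format '%#\n' "$filepath" > "$outfile" &
                                                                      pids+=("$!")
                                                                      outfiles+=("$outfile")
                                                                    done

                                                                    # Wait on each specific pid so failures can be reported, then emit output in input order
                                                                    for i in "${!pids[@]}"; do
                                                                      wait "${pids[$i]}" || echo "job $i failed" >&2
                                                                      cat "${outfiles[$i]}"
                                                                      rm -f "${outfiles[$i]}"
                                                                    done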

                                                                1. 2

                                                                  I’d like to append to the “Generally favoring parens” argument, specifically the “it makes the code look like the other languages I’ve spent years learning to read”. In addition to this, it’s good not only for me but for:

                                                                  • My entire team (if it takes 2 weeks of unproductive time for a developer to learn/adjust, that adds up quickly over lots of developers)
                                                                  • Newcomers to the team, same argument as entire team
                                                                  • Outside contributors, same argument as entire team (oh hai open source)
                                                                  • People who might write multiple languages in a day (e.g. someone typically works with JS in the browser and needs to write some server code without having to worry about style)

                                                                  Of course, a counterargument here is that if anyone wants to write open source, the greater Ruby community doesn’t use parens (from what I’ve seen), so the developer would have to learn the style anyway (in a repo out of their control, maybe with less tooling to help them out =/).

                                                                  1. 2

                                                                    The initial part of the site is really bad advice. Dismissing utility classes upfront because their meaning isn’t immediately obvious is naive; every code base has a learning curve, and making developers write a grid class for each component is not maintainable due to its time cost.

                                                                    In my experience, a module/component heavy system with opt-in utility overrides has been the most practical and maintainable. For example:

                                                                    • Grid system as classes
                                                                    • Components as their own set of classes (e.g. tabs, modals)
                                                                    • Utilities as one-off overrides (e.g. margin-left by 1 spacing unit)

                                                                    Some things I do agree with:

                                                                    • Components are a good idea
                                                                    • An excess of utility classes is frustrating to maintain because there’s so damn many

                                                                    Some flaws I see in the examples provided by the site:

                                                                    • Anything where you directly reference a color or content is going to blow up in your face unless it’s specific to a single site’s custom style
                                                                      • This prevents reuse across multiple sites (which should be the goal for most companies as designers want to reuse components in designs to save time too)
                                                                      • Instead, it’s best to use schemes like “primary”, “success”, “error”, “dark”, “light”, etc. These will tolerate maintenance changes down the line

                                                                    For those curious, I typically use Inuit.css@5.0.1 (6.0.0 exploded itself into little packages, which it looks like it’s started to undo) with BEM and OOCSS mindsets: