1. 4

    On macOS, brew cask install docker will install Docker for Mac automatically.

    1. 1

      Also there’s appcast.xml, which is used for auto-update and contains the URL of the latest version.

    1. 2

      I still wish there was something equivalent to this on *nix. I mean, urxvt is fine and all, but…

      1. 1

        Alacritty is what you’re looking for: https://github.com/jwilm/alacritty

        1. 2

          Or if you want something more mature and featureful, less Rusty bleeding-edge, try Kitty: https://github.com/kovidgoyal/kitty

      1. 0

        I’m very happy with vgo after spending years frustrated with buggy, partially-broken third party tools: first glide (no way to upgrade just one package, randomly fails operations) then dep (100+ comment issue on not supporting private repos).

        This comment from HN sums up my feelings on this post:

        Go does not exist to raise the profiles of Sam Boyer and Peter Bourgon. Sam wanted to be a Big Man On Campus in the Go community and had to learn the hard way what the D in BDFL means. The state of dep is the same as it was before - an optional tool you might use or might not.

        Lots of mentions in Peter’s post about things the “dep committee” may or may not have agreed with. Isn’t this the same appeal to authority he is throwing at Russ? When did the “dep committee” become the gatekeepers of Go dependency discussions and solutions? Looks like a self-elected shadow government, except it didn’t have a “commit bit”. Someone should have burst their balloon earlier, that is the only fault here. Russ, you are at fault for leading these people on.

        Go is better off with Russ’s module work and I personally don’t care if Sam and Peter are disgruntled.

        1. 14

          This is an extremely bad faith interpretation of events. Your words have an actual negative effect on people who have tried for a very long time to do the best they could to improve a bad situation.

          1. 9

            had to learn the hard way what the D in BDFL means

            Except Go is not (or at least doesn’t market itself as) BDFL-led. The core team has been talking about building and empowering the community for years (at least since Gophercon 2015, with Russ’s talk).

            When did the “dep committee” become the gatekeepers of Go dependency discussions and solutions?

            They were put in place by / with the blessing of the Go core team, so some authority on the subject was certainly implied.

            Go is better off with Russ’s module work

            You can certainly prefer Russ’s technical solution, that’s only part of the thing being discussed (and I think it’s fair to say it’s not the heart of the matter).

            The rest of your quotes are just mean.

            1. -4

              People don’t seem to realize that Go is not driven by the community, it’s driven by Google. It’s clear to me that Google doesn’t trust its programmers to use any advanced features, the code is formatted the same (again, don’t trust the programmer), everything is kept in one single repo and there is no versioning [1]. In my opinion, Google only released Go to convince a ton of programmers it’s the New Hotness (TM), get everybody using it so they can cut down on training costs and disappointed engineers looking for resume-worthy tech to work on [2].

              So, any proposal for Go that violates Google’s workflow will be rejected [3]. Any proposal that is neutral or even helps Google will probably be accepted. As far as I’m concerned, Go is a Google-proprietary language that solves problems Google has. The fact that it is available for others to use is intentional on Google’s part, but in no way is it “community driven.”

              [1] Because if you change the signature of a function, it is up to you to change all the call sites at the same time. Because the code is all formatted the same way, there does exist tooling to do this. At Google. Probably nowhere else.

              [2] “What do you mean we got to use this proprietary crap language? I can’t put this on my resume! My skills will stagnate here! I hate you, Google!”

              [3] Stonewalled. Politely told no. But ultimately, it will be rejected.

              1. 4

                To be fair, “don’t trust the programmer” is a pretty good rule to follow when you design a language or API. Not because programmers are bad or incompetent, but because they are human and thus predisposed to make mistakes over time.

            2. 5

              hrm, I actually want to push back against this quite strongly. any BDFL making decisions in the absence of community input will quickly find themselves the BDFL of a project that has no users, or at least one that often makes poor technical choices. Also, framing this disagreement as a personal one where prestige and reputation are at stake rather than as a technical one is a characterization that nobody other than the involved parties can make, certainly not people uninvolved in the project at all. In particular, making character judgements about people you don’t know based on technical blog posts is something I expect from the orange website, but I’d like to think the community here is a bit better.

              and as far as that technical disagreement goes, I’ve read through rsc’s rationale and I’m not any more convinced than I was in the beginning that jettisoning a well known package management path (SAT-solver) in favor of a bespoke solution is the correct decision. It is definitely the Golang thing to do, but I don’t know if it’s the best. Time will tell.

            1. 4

              I agree that make is too freaking hard. It’s a terrible tool, and you don’t have to use it. It took me years to realize this. I deleted the makefiles from my projects. I no longer use makefiles.

              1. 4

                Yup. I should also write a blog post on “invoking the compiler via a shell script”.

                The main thing to know is that the .c source files and -l flags are order-dependent. With a makefile, most people use separate steps to compile and link, so I think it doesn’t come up as much.

                1. 4

                  I don’t use Make as a build tool, but I find it quite handy as a place to collect small scripts and snippets as .PHONY targets that don’t attempt any dependency tracking. Make is almost universally available, the simple constructs I use are portable between GNU make and BSD make, and almost every higher-level tool out there understands Makefiles, so coworkers using various IDEs and the command line can all discover and run the “build”, “this test”, “download dependencies”, “run import process”, “lint”, etc. tasks. If I need a task that’s more than two lines, I put it in a shell script.
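A minimal sketch of that kind of task-runner Makefile (the target names and commands here are placeholders, not taken from any real project):

```make
# Every target is .PHONY: no dependency tracking, just named tasks.
# Recipes stay at one or two lines; anything longer goes in a script.
.PHONY: build test lint deps

build:
	go build ./...

test:
	go test ./...

lint:
	./scripts/lint.sh

deps:
	go mod download
```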

                  Although some languages now come with tooling that understands scripts, such as Cargo or NPM, I still find a Makefile useful for polyglot projects or when it’s necessary to modify the environment before calling down to that language specific tooling.

                  1. 4

                    Yes, I want to write about this too! You are using Make like a shell script :)

                    I use this pattern in shell:

                    # run.sh
                    build() {
                      ...
                    }
                    test() {
                      ...
                    }
                    "$@"
                    

                    Then I invoke with

                    $ run.sh build
                    ...
                    $ run.sh test
                    

                    I admit that Make has a benefit in that the targets are auto-completed on most distros. But I wrote my own little auto-complete that does this. I like the simplicity of shell vs. make, and the syntax highlighting in the editor.

                    When I need dependency tracking, I simply invoke make from the shell script! Processes compose.
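That composition might look something like this (build.mk and the task names are hypothetical):

```shell
# run.sh: plain tasks are shell functions; the task that needs
# dependency tracking just shells out to make, where build.mk
# owns the real dependency graph.
build() {
  make -f build.mk all
}

clean() {
  rm -rf _build
}

"$@"
```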

                    You’ll see this in thousands of lines of shell scripts (that I wrote) in the repo:

                    https://github.com/oilshell/oil

                  2. 1

                    About two years ago I finally sat down and read the GNU Make manual. It’s very readable, and it’s more capable than just about any other make out there. For one project, the core of the Makefile is:

                    %.a :
                        $(AR) $(ARFLAGS) $@ $?
                    
                    libIMG/libIMG.a     : $(patsubst %.c,%.o,$(wildcard libIMG/*.c))
                    libXPA/src/libxpa.a : $(patsubst %.c,%.o,$(wildcard libXPA/src/*.c))
                    libStyle/libStyle.a : $(patsubst %.c,%.o,$(wildcard libStyle/*.c))
                    libWWW/libWWW.a     : $(patsubst %.c,%.o,$(wildcard libWWW/*.c))
                    viola/viola         : $(patsubst %.c,%.o,$(wildcard viola/*.c))     \
                                libIMG/libIMG.a         \
                                libXPA/src/libxpa.a     \
                                libStyle/libStyle.a     \
                                libWWW/libWWW.a
                    

                    The rest of it is defining the compiler and linker flags (CC, CFLAGS, LDFLAGS, LDLIBS) and some other targets (clean, depend (one command line to generate the dependencies), install, etc). And this builds a program that is 150,000 lines of code. I can even do a make -j to do a parallel build. I’m not entirely sure where all this make hate comes from.

                    1. 2

                      I’ve read the GNU make manual (some parts multiple times) and written 3 significant makefiles from scratch. One of them is here:

                      https://github.com/oilshell/oil/blob/master/Makefile (note that it includes .mk fragments)

                      It basically works, but I’m sure that there are some bugs in the incremental and parallel builds. I have to run make clean sometimes, and I’m not brave enough to do parallel builds. How would I track these bugs down? I have no idea. I tried, but I kept breaking other things, and I got no feedback about this.

                      In other words, it’s extraordinarily difficult to know whether your incremental build is correct, and whether your parallel build is correct. Make essentially offers you no help there.

                      There are a lot of other criticisms out there, but if you scroll down here you’ll see mine:

                      http://www.oilshell.org/blog/2017/05/31.html

                      (correctness, gcc -M, wrong defaults for .SECONDARY, etc.)

                      There is also a debugging incantation I use that I had to figure out with some hard experience. Basically I disable the builtin rules database and enable verbose mode.
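Concretely, something like this (GNU make flags; the throwaway makefile is a placeholder, and the semicolon recipe form sidesteps the tab requirement):

```shell
# A trivial makefile to run against:
printf 'hello: ; @echo hi\n' > /tmp/debug.mk

# -r = --no-builtin-rules: drop the builtin implicit rules database
# -R = --no-builtin-variables: drop the builtin variables too
# --debug=v: verbose trace of which makefiles and rules are considered
make -r -R --debug=v -f /tmp/debug.mk hello
```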

                      Another criticism is that the builtin rules database can make builds significantly slower.

                      I’m not using Make for a simple problem, but most build problems are not simple! It is rare that you just want to build a few C files in a portable fashion. For that, it’s fine. But most systems these days are much more complex than that. Multiple languages and multiple OSes lead to an explosion in complexity, but the build system is the right place to handle those problems.

                      1. 2

                        I somehow seem to miss these “complex builds that break Make.” I have a project that uses C, C++ and Lua in a single executable and make handled it fine (and that includes compiling the Lua code into Lua bytecode, then transforming that into a C file which is then compiled into an object file for final inclusion in the executable).

                        I don’t know. For as bad as make is made out to be, I’ve found the other supposed solutions to be worse.

                  1. 1

                    Am I the only one who sees irony in the author putting all this thought into a problem that could be summarized as people simply being bad at their job?

                    1. 3

                      What a reductive insight you’ve made: “People writing bad software is the root of bad software”.

                      1. 3

                        No. My insight is this: all of his anecdotal examples are examples of people poorly managing time due to poorly prioritizing their work. This fault has many causes such as poor design goals, miscommunication, avoidant behavior, boredom, et al. This is a fault everyone is aware of. The author has recognized this problem and decides to spend time formalizing it, making anecdotal stories, accompanying graphics, and a blog post, when that time could be spent on some aspect of enriching his life or self improvement (unless he finds writing the post enriching which is entirely fine.)

                        To the extent of the author’s examples, these workers sound bad enough to warrant labeling them as either incompetent or just lazy. In which case, we should just call a spade a spade and force the worker to make a course correction, or replace them with someone better. That would be the simplest and most straightforward response to the problem, and I am simply pointing out the irony of the author’s response.

                        1. 6

                          I read the article as an extended meditation on the Upton Sinclair quote, “It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”

                          A story. Years ago, when I was in college, I did some consulting work at a company. My coworker was also a college student (same college, same department). One project was editing a printed manual to put up on an internal Website. The conversion from Microsoft Word to HTML was trivial (Microsoft word provided that much). But not linking each vocabulary word to its definition in the glossary section. There were perhaps a hundred words (it was a specialized industry) and there were some 100 pages to edit.

                          My coworker wanted to dive right in and do the editing by hand. That was a lot of work. Days worth of mind-numbing work. I wanted to take some time to think about the task and how best to approach it. An argument ensued. We ended up using lex (since we had access to some Unix workstations with everything preloaded) to add the appropriate HTML links to the glossary page for each web page and were done in maybe two hours or so.

                          Was I smart in finding a quick way to reliably edit the 100 pages? Or was my coworker smart by trying to get X extra days of work even if it was drudgery? [1]

                          (Yes, I see the relationship of my story to the article, but I’m having a hard time articulating the connection I see. It’s similar to doing badly at ops—the ops who “save the day” with heroics in getting the system back up get the kudos, while the ops who set things up to run smoothly with almost no down-time get laid off because they appear to be doing nothing. Incentives matter.)

                          [1] And it wasn’t like lex was an unknown tool—my coworker was a grad student in the CS department and had written a compiler using both lex and yacc.

                          1. 1

                            It depends on the point of view. If you were doing it because you honestly thought it was the best solution to the problem at hand, well then you were right, and hopefully your coworker learned a lesson and this isn’t an imaginary problem. If you were doing it for yourself, for your own knowledge, believing that the skills learned would pay off at a later date, well then that’s up to you to decide, but not an imaginary problem.

                            The second scenario seems falsely conflated with the first. That is a problem, but a different one of management incorrectly valuing their employees.

                            I would say the act of spending the effort to connect these dots is an imaginary problem. A lot of these characteristics are human nature, which no amount of writing or philosophizing will fix (if the goal is to “fix” the problem).

                            I understand the meditation on the topic, perhaps in an attempt to clarify to himself a series of problems that his intuition tells him has some tangible connection, which is beneficial for one’s own peace of mind. And to be clear, I am not criticizing the post. I just thought the meta-connection between the post itself and its content was amusing. Contemplating the “metaness” of things is my personal imaginary problem pit.

                            Edit: A larger point I want to make is that these ARE very complex problems. Knowing which solution is going to be optimal is something that takes either lots of research or experience and intuition. Of course people will choose wrong occasionally and make the problem even worse. Hopefully they learn and make better choices next time. Trying to paper over this experiential process by making it a “problem” (in the first examples in the article) is foolish and can send a message that any mistakes made are self inflicted instead of being part of the process of self-development.

                            1. 1

                              In your scenario, you saved time and money. Real problem.

                              If the task had instead been to change one web page’s header from h1 to h2 and you had to go out and buy a new sparcstation and compile Perl from scratch, etc., that’s imaginary problem territory.

                              Imaginary problems seem to be justified by “but what if?” What do the kids say? YAGNI. Perhaps the question can’t be fully resolved until afterwards, in hindsight, but we can at least keep track of which developers seem to correctly identify what ifs. I’d bet some people are prone to under building and some to over building. (And also never learning from those experiences, always erring in the same direction.)

                      1. 1

                        I’m always looking for a cross-compiling system for building macOS executables from Linux, either as a single static executable, or as a self-contained relocatable bundle of (interpreter + libraries + user code entrypoint), because getting legal Mac build workers is such a pain.

                        The best toolkit I’ve found, by far, is Go, where you just run GOOS=darwin go build .... There are a variety of more-or-less hacky solutions in the JavaScript ecosystem, and a few projects for Python, but for Ruby this area is sorely lacking.

                        I mention this because while XAR looks like an awesome way to distribute software bundles, I still need to figure out a way to do nice cross-compiles if I’m going to use it to realistically target both macOS and Linux.

                        1. 3

                          Tell me about it. I’ve tried cross compiling Rust from Linux to OSX and it was just a saga of hurt from start to finish.

                          For Go, did you need to jump through the hoops of downloading an out-of-date Xcode image, extracting the appropriate files and compiling a cross-linker? Or is that mysteriously handled for you by the Go distribution itself?

                          1. 2

                            You literally just run GOOS=<your target os> GOARCH=<your target architecture> go build. No setup needed. Here’s the vars go build inspects.

                              It’s frustrating trying to do something similar in compiled languages, and then interpreted languages with native modules are even worse.

                            1. 1

                              Go basically DIYs the whole toolchain and directly produces binaries. That has pros and cons, but means it can cross-compile without needing any third-party stuff like the Xcode images. For example it does its own linking, so it doesn’t need the Xcode / LLVM linker to be installed for cross-compilation to Mac.

                            2. 1

                              AFAICT, XAR still doesn’t include the Python interpreter, so it’s not completely independent?

                              1. 1

                                No reason you can’t put a whole virtualenv, python interpreter and all, into your XAR. XAR can pack anything.

                                You still need a tool to prepare that virtualenv so that you can pack it, and that’s the sort of tool I struggle to find - cross-compiling a venv, or equivalent in other languages.

                                1. 1

                                  I think most OSS work uses the Mac builders on Travis CI for building mac binaries.

                                  1. 1

                                    Yes, exactly. I am less interested in different formats and more in a tool to create them. The ease of doing that with Go is the target.

                                    1. 1

                                      The ease of doing that with Go is the target.

                                      By this you mean, you’re looking for a solution for Python packaging that makes it as easy as Go to distribute universally?

                                      I used this once before to take some code I wrote for Linux (simple cli with some libraries - click, fabric, etc.) and release it for Windows: http://www.py2exe.org/index.cgi/Tutorial

                                      The Windows users on my team used the .exe file and it actually worked. It was a while back but I remember that it was straightforward.

                              1. 3

                                Favorite line so far:

                                FATAL << "strdup failed, call the cops"

                                1. 1

                                  Here’s an example from this post:

                                  export default handleActions(
                                    {
                                      ADD_TODO: (state, action) => {
                                        return {
                                          ...state,
                                          currentTodo: '',
                                          todos: state.todos.concat(action.payload),
                                        };
                                      },
                                      LOAD_TODOS: (state, action) => {
                                        return {
                                          ...state,
                                          todos: action.payload,
                                        };
                                      },
                                      UPDATE_CURRENT: (state, action) => {
                                        return {
                                          ...state,
                                          currentTodo: action.payload,
                                        };
                                      },
                                      REPLACE_TODO: (state, action) => {
                                        return {
                                          ...state,
                                          todos: state.todos.map(
                                            t => (t.id === action.payload.id ? action.payload : t)
                                          ),
                                        };
                                      },
                                      REMOVE_TODO: (state, action) => {
                                        return {
                                          ...state,
                                          todos: state.todos.filter(t => t.id !== action.payload),
                                        };
                                      },
                                      [combineActions(SHOW_LOADER, HIDE_LOADER)]: (state, action) => {
                                        return {...state, isLoading: action.payload};
                                      },
                                    },
                                    initState
                                  );
                                  

                                  To me, this and Redux in general both look like a half-baked reimplementation of JavaScript’s class, except the method names are UPPER_CASE because they are “constant”, and state is managed elsewhere and passed as a parameter instead of being stored on this. Why reinvent this syntax? Is it just so we can be sure we’re using “functional programming” and have “no internal state”?

                                  If you want to eliminate boilerplate, cut the knot. Let me write my reducer as a class, and then generate the action creators, action names, etc. from the class’s declared methods. You can still have all the benefits of Redux while using a pleasing syntax.

                                  A sketch (pardon, on mobile):

                                  class TodoReducer extends Reducer {
                                    static initialState = () => ({ ... })
                                  
                                    addTodo(state, todo) {
                                      return {
                                        ...state,
                                        currentTodo: '',
                                        todos: state.todos.concat(todo),
                                      }
                                    }
                                  
                                    // more methods ...
                                  }
                                  
                                  // each camel-case methodName has a constant case METHOD_NAME.
                                  // Returns an object with each constant case method name mapped to itself. 
                                  export const actions = TodoReducer.actions()
                                  
                                  // creates an object mapping methodNames to functions that return an action object with type 
                                  // set to the constant case name of that method
                                  export const actionCreators = TodoReducer.actionCreators()
                                  
                                  // creates a reducer function from the class that handles dispatching redux actions as regular method calls on an instance of the class.
                                  // the wrapper function could also do hand-holding like assert there is no state being recorded in the instance
                                  // or, a new instance could be created for each dispatch, although the perf would suck
                                  export const reducer = TodoReducer.reducer()
                                  
                                  1. 1

                                    Personally I’d much rather avoid method-name magic, and I find splitting actions/constants/reducer into 3 files in a module an easy way to organise it, even if it is a fair chunk of boilerplate. It all ends up seeming a lot simpler and less overblown than that example where it’s all done inline. Yes, it could totally be more concise, but I really like explicit. Just my opinion, obvs.

                                    1. 1

                                      To me, the core benefits of redux are:

                                      1. All (important) state in one spot: simplifies reasoning and decision making
                                      2. Dependency injection: callers and consumers both have no knowledge of the state holder
                                      3. serializability: actions are simple objects that can be recorded, streamed across the network, replayed, ….

                                      What’s the purpose of the CONSTANTS file? How do you use those constants, other than to put them in the { type: } field of a serializable action, or to reason about an action in a reducer?

                                      How does moving the method names of a class into another file improve organization? What benefits do you see in an explicit composition from parts usually left implicit in other patterns?

                                      (As for the CONSTANT_NAME magic proposed in my example: sure, it’s immaterial. Trade following redux style convention for less magic.)

                                      1. 1

                                        I think those are the core benefits too :-) No purpose for such files other than to keep everything in one place, make it easily accessible, and know easily where I am with everything, and never have to remember name-conversion conventions or any of that stuff. Nothing more - but for my way of working it’s clearer and easier.

                                    2. 1

                                      One reason is that actions and reducers have a one-to-many relationship. You can decompose a single reducer into several reducers, and potentially all of those reducers handle the same action. Actions are not meant to be remote function calls.

                                    1. 17

                                      Thanks for posting this, eta! sr.ht is still a work in progress, but things are definitely starting to fall into place. The whole thing is self hosted (it even deploys itself on the in-house CI) and it’s being used seriously for a few projects - we run all builds for the sway and wlroots projects through it, for example: https://builds.sr.ht/job/4878.

                                      Check out https://lists.sr.ht/~sircmpwn/sr.ht-announce for news, or shoot an email to ~sircmpwn/sr.ht-announce+subscribe@lists.sr.ht. If anyone from Lobsters wants an alpha account, shoot me an email - sir@cmpwn.com - would be happy to hook you up.

                                      1. 8

                                        I’m impressed that you not only develop end-user software (sway) and its libraries (wlroots), but you also build out the whole project lifecycle tools as well. How do you find the time to do everything? Do you have any productivity advice for others on managing and delivering so many different projects?

                                        1. 20

                                          Thank you for your kind words! It might be prudent to point out that neither sway, wlroots, nor sr.ht has seen a stable release to date, though. I’m not afraid to start a lot of ambitious projects, even if they’ll take years to complete, because the years will eventually pass, and I’ll thank myself for not spending them worried that I wouldn’t have had time to complete anything. At least that’s how I hope it works; we’ll see if any of this shit ever gets done!

                                      1. 2

                                        I’ve investigated Archipel a few times when looking for VM solutions for my home server. It has a really appealing GUI, and for a while was sponsored by the maintainer’s employer. However, it hasn’t seen active development for a few years. At this point the only web presence is Cloudflare’s cache!

                                        Take a look at the Github issues to see what I mean:

                                        https://github.com/ArchipelProject/Archipel/issues/1205#issuecomment-390453860

                                        https://github.com/ArchipelProject/Archipel/issues/1182#issuecomment-252516526

                                        1. 1

                                          True, I will keep that in mind when looking for such tools.

                                          Can you recommend any other web management tools?

                                          1. 2

                                            Take a look at Proxmox, which is a web GUI over KVM virtualization and LXC “fat” containers: https://www.proxmox.com/en/proxmox-ve

                                            I never installed either of these systems, and eventually settled on FreeBSD + ZFS + ezjail for my isolation needs. VMs were overkill, and I use my workstation if I want to test out a different OS. I am considering a rebuild on Linux + ZFSOnLinux + Kubernetes now that ZFS on Linux is more mature, and I use Kubernetes at work.

                                            1. 1

                                              Ah. Yes Proxmox. I think I will settle on that. Thanks.

                                              It would be nice to see archipel go further but hey this is great!

                                        1. 2

                                          Exciting to see this progress in Crystal. On the subject of concurrency, I’ve seen several languages recently discuss the goal of implementing “Golang-style” M:N parallelism. I’d love to take a crack at implementing Concurrent ML for one of these languages, based on this excellent post: https://wingolog.org/archives/2018/05/16/lightweight-concurrency-in-lua

                                          1. 7

                                            If you want to get a feel for an advanced shell with non-text streams, dive into Powershell. It runs on Linux these days, and it’s got a considerable ecosystem of snippets, articles, etc out there - more than Fish or the other alternative shells.

                                            You don’t have to dream about this stuff. It’s got that Microsoft flavor, but it’s available today.