1. 6

    A slightly related Go nit: the case of structure members determines whether they’re exported or not. It’s crazy; why not explicitly add a private keyword or something?

    1. 17

      why not explicitly add a private keyword or something?

      Because capitalization does the same thing with less ceremony. It’s not crazy. It’s just a design decision.

      1.  

        And limiting variable names to just “x”, “y”, and “z” is also simpler and much less ceremony than typing out full variable names

        1.  

          I’m not sure how this relates. Is your claim that the loss of semantic information that comes with terse identifiers is comparable to the difference between type Foo struct and e.g. type public foo struct?

      2. 6

        This would be a more substantive comment chain if you could express why it’s crazy, rather than just calling it crazy. Why is it important that it be a private keyword “or something”? In Go, the “or something” is literally the case-sensitive member name… which is an explicit way of expressing whether it’s exported or not. How much more explicit can you get than a phenotypical designation? You can look at the member name and know then and there whether it’s exported. An implicit export would require the reader to look at the member name and at least one other source to figure out if it’s exported.
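
        For readers who don’t write Go, a minimal sketch of how the rule reads in practice (all names here are hypothetical):

        // Package shapes is a hypothetical example package.
        package shapes

        // Identifiers that start with an upper-case letter are exported;
        // lower-case identifiers are visible only inside this package.
        type Point struct {
        	X, Y int // exported: other packages can read and write these fields
        	id   int // unexported: accessible only within package shapes
        }

        // New is exported; other packages call it as shapes.New(1, 2).
        func New(x, y int) Point { return Point{X: x, Y: y} }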

        1. 6

          It’s bad because changing the visibility of a member requires renaming it, which requires finding and updating every caller. This is an annoying manual task if your editor doesn’t do automatic refactoring, and it pollutes patches with many tiny one-character diffs.

          It reminds me of old versions of Fortran, where variables whose names started with I, J, K, L, M, or N were automatically integers and the rest were real. 🙄

          1.  

            M-x lsp-rename

            I don’t think of those changes as patch pollution — I think of them as opportunities to see where something formerly private is now exposed. E.g. when a var was unexported I knew that my package controlled it, but if I export it now it is mutable outside my control — it is good to see that in the diff.

            1.  

              I guess I don’t consider changing the capitalization of a letter as renaming the variable

              1.  

                That’s not the point. The point is you have to edit every place that variable/function appears in the source.

                1.  

                  I was going to suggest that gofmt’s pattern rewriting would help here, but it seems you can’t limit it to a type (although gofmt -r 'oldname -> Oldname' works if the field name is unique enough). Then I was going to suggest gorename, which can limit itself to struct fields, but apparently it hasn’t been updated to work with modules. Apparently gopls is the new hotness, but despite its claim that it’ll rename throughout a package, when I tested it, specifying main.go:9:9 Oldname only fixed it (correctly!) in main.go, not in the other files of the main package.
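
                  For reference, the relevant invocations look roughly like this (the package path is a hypothetical stand-in):

                  # rewrites the identifier everywhere it appears, not limited to one type:
                  gofmt -r 'oldname -> Oldname' -w .

                  # gorename can target a struct field specifically (hypothetical package path), but predates modules:
                  gorename -from '"example.com/pkg".MyStruct.oldname' -to Oldname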

                  In summary, this is all a bit of a mess from the Go camp.

            2.  

              The author of the submitted article wrote a sequel article, Go’ing Insane Part Two: Partial Privacy. It includes a section Privacy via Capitalisation that details what they find frustrating about the feature.

            3.  

              A slightly related not-Go nit: the private keyword determines whether struct fields are exported or not. It’s crazy; why not just use the case of the field names, saving everyone some keypresses?

              1.  

                I really appreciate it, and find myself missing it in every other language. To be honest, I have difficulty understanding why folks would want anything else.

              1. 4

                Totally pointless. Cmd + Ctrl + Space is the default key combo to bring up the character palette, you can then search for whatever character you want.

                1. 2

                  Oh, wow, I hadn’t realized the Character Viewer (the window opened by that shortcut) had gotten so much easier to use in more recent macOS versions – or did I never notice that top-right button that converts the floating palette into a popup near the cursor?

                  Opening the Character Viewer as a palette window keeps keyboard focus in the text field you are in, so it requires a lot of mouse usage to search for and to insert the character you want. I see that after toggling the Character Viewer to the popup mode (the default mode on macOS 10.15), the search field is focused after pressing ⌃⌘Space, the arrow keys select a character, and I can insert the selected character with Return. That’s much more convenient.

                  1. 2

                    On newer Macs (at least those with the key legend for it; it’s probably available or configurable on others), it’s just straight up bound to pressing fn.

                    1. 2

                      Thanks for the comment. I am going to update the article with this information.

                      1. 2

                        Sounds good. Sorry if I came across as harsh.

                    1. 5

                      Admit it. If you browse around you will realize that the best documented projects you find never provide that docs generated directly from code.

                      Is this saying that you shouldn’t use Javadoc or pydoc or “cargo doc”, where the documentation is located in the source files? So, from the previous point, it’s essential that docs live in the same repo as the code, but not the same files as the code? Seems like a pretty extreme position relative to the justification.

                      1. 18

                        As a concrete example, Python’s official documentation is built using the Sphinx tool, and Sphinx supports extracting documentation from Python source files, but Python’s standard library documentation does not use it - the standard library does include docstrings, but they’re not the documentation displayed in the Standard Library Reference. Partially that’s because Python had standard library documentation before such automatic-documentation tools existed, but it’s also because the best way to organise a codebase is not necessarily the best way to explain it to a human.

                        As another example in the other direction: Rust libraries sometimes include dummy modules containing no code, just to have a place to put documentation that’s not strictly bound to the organisation of the code, since cargo doc can only generate documentation from code.

                        There’s definitely a place for documentation extracted from code, in manpage-style terse reference material, but good documentation is not just the concatenation of small documentation chunks.

                        1. 1

                          Ah, I was thinking of smaller libraries, where you can reasonably fit everything but the reference part of the documentation on one (possibly big) page. Agreed that docs-from-code tools aren’t appropriate for big projects, where you need many separate pages of non-reference docs.

                        2. 10

                          There’s definitely a place for documentation extracted from code, in manpage-style terse reference material, but good documentation is not just the concatenation of small documentation chunks.

                          Can’t agree enough with this. Just to attempt to paint the picture a bit more for people reading this and disagreeing: make sure you are thinking about the complete and exhaustive definition of ‘docs’. Surely you can get the basic API or stdlib reference with method arity and expected types and such, but for howtos and walkthroughs and the whole gamut, it’s going to take some effort. And that effort is going to take good old-fashioned work by technical folks who also write well.

                          It’s taken me a long time to properly understand Go given that ‘the docs’ were for a long time just this and lacked any sort of tutorials or other guides. There’s been so much amazing improvement here and bravo to everyone who has contributed.

                          On a personal note, the Stripe docs are also a great example of this. I cannot possibly explain the amount of effort or care that goes into them. Having written a handful of them myself, it’s very much “a lot of effort went into making this effortless” sort of work.

                          1. 8

                            Yeah I hard disagree with that. The elixir ecosystem has amazing docs and docs are colocated with source by default for all projects, and use the same documentation system as the language.

                            1. 2

                              Relevant links:

                            2. 5

                              The entire D standard library documentation is generated from source code. Unittests are automatically included as examples. It’s searchable, cross-linked and generally nice to use. So yeah, I think this is just an instance of having seen too many bad examples of code-based docs and not enough good ones.

                              When documentation is extracted from code in a language where that is supported well, it doesn’t look like “documentation extracted from code”, it just looks like documentation.

                              1. 4

                                Check out Four Kinds of Documentation. Generated documentation from code comments is great for reference docs, but usually isn’t a great way to put together tutorials or explain broader concepts.

                                It’s not that documentation generation is bad, just that it’s insufficient.

                                1. 2

                                  Maybe the author is thinking about documentation which has no real input from the developer. Like an automated list of functions and arguments needed with no other contextual text.

                                1. 3

                                  A corrected link to the announcement post: Announcing the Wheel Reinvention Jam!

                                    1. 16

                                      Well now, I had to do a double take after blindly opening Lobsters and seeing my own blog post on the front page!

                                      Hopefully others can get some use out of this feature since I find it pretty nifty :)

                                      1. 3

                                        Is there any way to use environment variables in the condition? I keep my global git config in a git repo and I’d love to have a mechanism for conditionally including machine-specific overrides to some of the settings.

                                        1. 5

                                          It doesn’t look like Git’s conditional includes feature supports reading environment variables – it only supports these keywords:

                                          • gitdir
                                          • gitdir/i
                                          • onbranch
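
                                          For context, a conditional include keyed on one of these looks like so (path hypothetical):

                                          [includeIf "gitdir:~/work/"]
                                          	; hypothetical repo path and file name
                                          	path = work_config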

                                          However, the Environment section of the configuration docs lists some environment variables that you could set on specific machines to change Git’s configuration.

                                          Per-machine configuration using GIT_CONFIG_GLOBAL

                                          Of those environment variables, GIT_CONFIG_GLOBAL seems the easiest to use. You could use it by putting these files in your config repo:

                                          • shared_config
                                          • config_for_machine_1
                                          • config_for_machine_2

                                          Within each machine-specific config, use a regular (non-conditional) include to include shared_config:

                                          [include]
                                          	path = shared_config
                                          ; Then write machine-specific configuration:
                                          [user]
                                          	email = custom_email_for_this_machine@example.com
                                          

                                          Finally, on each of your machines, set the environment variable GIT_CONFIG_GLOBAL to that machine’s config file within your config repo.
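
                                          For example, in each machine’s shell profile (assuming the repo is checked out at ~/config-repo):

                                          # assumed checkout location; adjust per machine
                                          export GIT_CONFIG_GLOBAL="$HOME/config-repo/config_for_machine_1"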

                                          Setting a default config for other machines

                                          If you want some machines to just use shared_config without further configuration, name that file config instead and make sure your config repo is located at ~/.config/git/. On those machines, you don’t need to set GIT_CONFIG_GLOBAL. This will work because $XDG_CONFIG_HOME/git/config is one of Git’s default config paths.

                                          1. 1

                                             Hmm, not that I know of off the top of my head, but I’ve never actually sat down and read the Git documentation, so I wouldn’t be surprised if something like that exists. You could perhaps look into templating your gitconfig using something like chezmoi? There’s always nix, which comes up too, but that’s quite a bit overkill just for fiddling with some dotfiles, of course.

                                            1. 1

                                               I can work around it for now by generating the file locally; it’s just been a mild source of annoyance for me that I need external support for this.

                                          2. 1

                                             Ah, a note for anyone trying this: the original version was missing a trailing slash at the end of the includeIf directive. I’ve just pushed a fix for the typo, so if you copied it earlier and were having trouble, this is just a heads up.

                                          1. 14

                                            An interesting counter-example: batch compilers rarely evolve to IDEs and are typically re-written. Examples:

                                             • Roslyn was a re-write of the original C# compiler
                                             • Visual Studio, I think, uses different compilers for building and for IntelliSense in C++
                                             • Dart used to have separate compilers for the Dart analyzer and the Dart runtime
                                             • Rust is in a similar situation with rustc & rust-analyzer

                                            Counter examples:

                                             • clangd (C++) and Merlin (OCaml) are evolutions of batch compilers. My hypothesis is that for languages with forward declarations & header files, you actually can more or less re-use a batch compiler.

                                            Non-counter examples:

                                             • Kotlin and TypeScript started IDE-first.

                                            If I try to generalize from this observation, I get the following. Large systems are organized according to a particular “core architecture” — an informal notion about the data that the system deals with, and specific flows and transformation of the data. This core architecture is reified by a big pile of code which gets written over a long time.

                                             You may find that the code works badly (bugs, adding a feature takes ages, some things seem impossible to do, etc.) for two different reasons:

                                            • either the code is just bad
                                            • or the core architecture is wrong

                                             The first case is usually amenable to incremental refactoring (triage issues, add tests, loop { de-abstract, tease-apart, deduplicate }). The second case, I think, often necessitates a rewrite. The rewrite ideally should be able to re-use components between the two systems, but, sadly, the nature of a core architecture is that its assumptions permeate all components.

                                             For a compiler, you typically start with “static world, one compilation unit at a time, dependencies are pre-compiled, output is a single artifact, primary metric is throughput” (Zig is a bit different, I believe ;) ), but for an IDE you want “dynamic, changing world, all CUs together, deps are analyzed on demand, bits of output are queried on demand, primary metric is latency”.

                                             It does seem that “bad code” is a much more common cause of grief than “ill-fit core architecture”, though.

                                            1. 4

                                              As I heard it, Clang started out because Apple’s Xcode team had reached the limits of being able to use GCC in an IDE, and they wanted a new C/C++ compiler that was more amenable to their needs. (They couldn’t have brought any of GCC’s code into the Xcode process because of GPL contagion.) So while Clang may run as a process, the code behind it (all that LLVM stuff) can be used in-process by an IDE for interactive use.

                                              1. 2

                                                What do you mean by “GPL contagion”?

                                                1. 1

                                                  The GPL is a viral license.

                                                  1. 1

                                                    Oh wow, yikes. Thanks.

                                                  2. 1

                                                    That if they had linked or loaded any part of GCC into Xcode, the license would have infected their own code and they would have had to release Xcode under the GPL.

                                                  3. 2

                                                    It’s interesting that the clang tooling has evolved in a direction that would avoid these problems even if clang were GPL’d. The libclang interfaces are linked directly to clang’s AST and so when you link against libclang you pull in a load of clang and LLVM libraries and must comply with their license. In contrast, clangd is a separate process and talks via a well-documented interface to the IDE and so even if it were AGPLv3, it would have no impact on the license of XCode.

                                                  4. 3

                                                    Thanks for the counter examples, those are really interesting!

                                                    I’ve been able to make changes to the core architecture of my engine incrementally on a few occasions. Some examples:

                                                    • I started out with a 2D renderer, but replaced it with a 3D renderer
                                                       • (adding a dimension sounds easy in theory, but the way 3D renderers are designed is very different from how 2D renderers are designed! Lots of data structures had to change and tradeoffs had to be adjusted.)
                                                     • I started out with hard-coded character motion, but transitioned to a physics engine
                                                     • I started out with a flat entity hierarchy, and transitioned to a tree structure
                                                     • I started out with bespoke entities, and transitioned to an entity system

                                                    These were definitely challenging transitions to make as my existing code base had a lot of hidden assumptions about things working the way they originally did, so to make it easier I broke the transitions into steps. I’m not sure I was 100% disciplined about this every time, but this was roughly my approach:

                                                    1. Consider what I would have originally built if I was planning on eventually making this transition
                                                    2. Transition my current implementation to that
                                                    3. Make the smallest transition possible from that to something that just barely starts to satisfy the constraints of the new architecture I’m trying to adopt
                                                    4. Actually polish the new thing/get it to the state I want it to be in

                                                     It would be interesting to see retrospectives on projects where people concluded that this approach wasn’t possible or worthwhile, and why. There could be more subtleties I’m not currently identifying that differentiate the above transitions from, e.g., what motivated Roslyn.

                                                    1. 2

                                                       This is super interesting; do you know of any materials on how to design an IDE-first compiler?

                                                      1. 3

                                                        The canonical video is https://channel9.msdn.com/Blogs/Seth-Juarez/Anders-Hejlsberg-on-Modern-Compiler-Construction, but, IIRC, it doesn’t actually discuss how you’d do it.

                                                        This post of mine (and a bunch of links in the end) is probably the best starting point to learn about overall architecture specifics:

                                                        https://rust-analyzer.github.io/blog/2020/07/20/three-architectures-for-responsive-ide.html

                                                      2. 1

                                                        Did TypeScript start out “IDE-first”? I remember the tsc batch compiler being introduced at the same time as IDE support.

                                                      1. 5

                                                        The solution is widespread legislation that makes using people’s personal data for targeted advertising illegal or very expensive. (This is not limited to Gemini. A great many influential Internet people are convinced politics is utterly broken, so “technical solutions” are all that’s left).

                                                         I don’t disagree with this, but I don’t really see Gemini as a “solution” to any political problem, nor do I have any reason to believe it was conceived as such. Rather, it is a space one can choose to go to in order to opt out of the modern web – it’s a subculture, a niche, not trying to take over anything. Using Gemini in 202X is very different from using the web / gopher in 199X; the social and technological conditions are completely different. To use Gemini is to consciously reject the modern web and its trajectory – there is no money in it, no power, no real reason to use it except out of curiosity and interest. So much of social media is about metrics, engagement, advancing your career, etc. – here is a space where you can explicitly reject that.

                                                        https://alex.flounder.online/gemlog/2021-01-08-useless.gmi

                                                        Gemini’s killer feature is that its extreme simplicity means that you can do these things with complete independence, knowing that every piece of software (client, server) is free software, easily replaceable, and created to serve the interests of the community, with no ulterior profit motive. 1 person working alone could write a basic client and/or server in a weekend, which means that production of the software ecosystem doesn’t have to be centralized. Again, I want to clarify – this isn’t to say this is the only way that software should be written, but it is allowing a space to exist that is genuinely novel and interesting.

                                                        Many Gemini users are CS students in college, who are young enough to not directly experience the web as it was before “web 2.0”. Gemini is not a return to web 1.0, but a revitalization of something that was lost in the web 1->2 transition.

                                                        proportionally even more white dudes.

                                                        I run https://flounder.online (gemini://flounder.online), and I haven’t done a demographic survey, but from reading people’s self-descriptions on their pages, I have no reason to believe it is less diverse than tech spaces like this forum, GitHub, etc.

                                                        1. 7

                                                          I came here to write this. I wrote Gemini off for many of the reasons @gerikson did at first, but after actually using it, I came to realise the value wasn’t necessarily in the protocol or markup tradeoffs (which I have mixed feelings about, as a user and implementer), but in the subculture that’s developed there. I use Gemini every day, for a few different reasons, and what’s there is lovely to me.

                                                          1. 2

                                                            Note that the article used “proportionally even more white dudes” to describe “the halcyon days of the Internet” as compared to “today’s internet”. It wasn’t saying the Gemini community has proportionally more white dudes.

                                                            1. 2

                                                              My bad, I slightly misread that paragraph.

                                                          1. 2

                                                            The ‘BTDT’ in the title stands for Been There, Done That. (Took me a minute.)

                                                            1. 11

                                                               The recommendation in the article’s conclusion depends on a certain assumption, but I’d like to note that this assumption may not be necessary. The conclusion:

                                                              If you want to use an online password manager, I would recommend using the one already built into your browser.

                                                              Why would you want to use an online password manager – that is, one built into your browser? By using a non-browser-based password manager, you could gain password management features such as storage of non-website passwords and storage of free-form notes with each entry while sacrificing very little convenience.

                                                              The article’s introduction mentions some offline password managers such as KeePass, KeePassX, and pass. On macOS, I personally prefer KeePassXC, a successor to KeePassX.

                                                              With KeePassXC, my password is not auto-filled when I visit a login page. (Perhaps KeePassXC’s browser extension has this feature, but I avoided installing it due to the principle of least privilege.) However, I can still use KeePassXC’s “auto-type” feature to simulate keyboard entry of my username and password in one go. I can also copy the username and password to the clipboard individually for pasting. The one downside to these methods is that the password manager will not warn me if I am trying to type the password into a phishing site – I have to be sure to first visit the site through a trusted bookmark or the link in the password entry.

                                                               Note that “non-browser-based” doesn’t mean you will be forced to rely on one device to look up your passwords. You can use an online file sync service – a proprietary one like Dropbox or Google Drive, or an open-source one like Syncthing or ownCloud – to make your password database available on multiple devices. I use this strategy to access my KeePassXC password database on both my laptop and my phone.

                                                              As your password database is encrypted at rest, online syncing requires only trusting your file sync service to not leak your files to anyone who would spend time brute-forcing your password. I find that trust easier to give than trusting a browser-based password management company to both not leak my encrypted password to their many attackers and to not serve me a version of the software with encryption disabled.

                                                               If you use a password manager to share credentials among multiple users, you could still use a non-browser-based password manager plus a file sync service, but it’s less suited to that use-case. If multiple users add a password to the database at the same time, one of the users will have to manually resolve the conflict.

                                                              1. 2

                                                                 This sounds like a decent middle ground between comfort and security. You might also consider hosting your password manager yourself. Bitwarden, which I use, is open source and has multiple server implementations. And because of the way Bitwarden’s client–server communication protocol works, I don’t have to trust my hosting provider not to read my data.

                                                              1. 2

                                                                Warning: the title is somewhat misleading. This post describes a single example of a leaky abstraction: on Windows, cutting and pasting files from a ZIP file is much slower than copying and pasting those files. The post goes into detail about why Windows’s implementation of this ZIP file operation might be slow. However, the post does not discuss leaky abstractions in general.

                                                                1. 5

                                                                  Has anyone else seen that first screenshot before?!

                                                                  On the one hand, it’s utterly obnoxious behavior from Apple.

                                                                  On the other hand, it’s a pretty niche corner-case to justify “throw it all away and start afresh.”

                                                                  I’m also now noticing the irony of company A which uses its status to get people to do things (register) commenting on company B which uses its status to get people to do things (delete their own files). The desktop metaphor doesn’t feel like the biggest problem in this picture.

                                                                  1. 4

                                                                    It’s very easy to fix that — go into System Preferences, click General, and turn on the setting for reopening documents. (I’m not saying this is obvious, or that it’s the right UX, just that a solution exists.)

                                                                    1. 2

                                                                      Specifically, these are the related settings in System Preference > General (in macOS 10.14):

                                                                      • [ ] Ask to keep changes when closing documents
                                                                      • [ ] Close windows when quitting an app
                                                                        • When selected, open documents and windows will not be restored when you re-open an app.
                                                                  1. 2

                                                                    The feature this person is looking for is built into the OS.

                                                                    To minimize the current window, press Command-M. To minimize all windows of the app in focus, press Command-Option-M. - https://superuser.com/questions/36504/minimize-all-in-mac-os

                                                                    As for the other things, people can’t even use a file system properly. They don’t go back and clear things up. Or if they do, they sometimes remove the wrong things. It might be nicer for a power user but the suggestions proposed just make files more complicated than they are now. It’s a review changes dialog with more steps.

                                                                    1. 3

                                                                      If they’re not going to use the Preview app in their screen-share, an easier technique would be to choose Hide Preview (⌘H) from the application menu, which hides all the app’s windows until the application regains focus. That technique doesn’t require clicking 88 minimized windows to expand them again afterwards.

                                                                      Of course, as snej wrote, the real solution is to toggle a setting in System Preferences to make their preferred behavior the default by not showing that dialog when they quit.

                                                                    1. 1

                                                                      The useSWR section is missing a link to the library: SWR

                                                                      1. 1

                                                                        “Frum” from Yiddish or something else?

                                                                        1. 1

                                                                          Frum’s README has a link at the bottom to fnm, whose name stands for Fast Node Manager. Thus, I would assume the name Frum was made by changing “n” for Node into “ru” for Ruby.

                                                                          1. 1

                                                                            So, “fnm” is Yiddish, got it. :)

                                                                        1. 2

                                                                          I never understood ranting on other tools or languages. If you don’t like it don’t use it. If you think there’s a better tool or language for whatever it is you’re doing, use the better tool.

                                                                          1. 5

                                                                            If you think there’s a better tool or language for whatever it is you’re doing, use the better tool.

                                                                            That’s not always your choice. I got stuck for a few months writing golang at a job where I was originally hired to write Clojure, and that’s probably the rantiest I’ve ever been. I felt like I had to program with oven mitts on just to keep my job. Luckily I got moved to a new team before I completely lost it.

                                                                            1. 1

                                                                              This seems to be totally normal at midrange to massive companies. The tools are chosen for you by architects you may never meet, considering primarily what languages the offshore and onshore teams both know.

                                                                              It stops being about the technical fit, and becomes about the arguably harder human fit.

                                                                              Which is hilarious since developer productivity, the argument made when you say “best tool for the job,” is an entirely human factor.

                                                                            2. 4

                                                                              A language doesn’t stand on its own – its community affects the experience of using it. A language’s community:

                                                                              • Writes libraries for the language
                                                                              • Writes documentation for the language (in forums and Stack Overflow)
                                                                              • Devises new patterns to make working with the language more pleasant

                                                                              As those benefits are correlated to the size of the community, it’s reasonable for someone to want to convince others to use their preferred programming language and thus become part of its community.

                                                                              1. 8

                                                                                I wouldn’t call this spam, but I would guess the flagger might have been turned off by the page’s advertisements of the university that made the discovery and by the huge images of the authors that crowd out the interesting technical details. They may have also been annoyed that the page named the paper but did not link to it; hopefully the link in my other comment will help with that.

                                                                              1. 12

                                                                                Here’s the academic paper, which has much more technical detail than this press release: I See Dead µops: Leaking Secrets via Intel/AMD Micro-Op Caches (PDF)

                                                                                That paper was hard to find details about – searching for its title brought up only copies of the press release on various news sites. I found the link to the above PDF on https://www.cs.virginia.edu/venkat/.

                                                                                1. 1

                                                                                   Thank you. I couldn’t find the actual paper when I posted this link, and at the time this seemed like the most direct link to post.

                                                                                1. 5

                                                                                   The biggest gain of using make, for C/C++ developers, is that it doesn’t recompile what it doesn’t need. That can be the difference between minutes and milliseconds.

                                                                                   Using make just for its command interface is, IMO, just glorified and to some extent worse shellscripting. I personally would pick a shellscript. Command dependencies are trivial to implement.

                                                                                  1. 4

                                                                                    If someone uses make just for its command interface, the tool they really want is just. See What are the idiosyncrasies of Make that Just avoids?

                                                                                    1. 4

                                                                                      Let me agree 95%:

                                                                                       The biggest gain of using Make for C/C++ development for any workflow is that it doesn’t redo what it doesn’t need. Fast and correct incremental builds have come to be – at least in the C/C++ realm – the defining characteristic of a buildsystem: If a no-change rebuild takes more than mere milliseconds, it is a bug, and if a buildsystem can’t do that, it is not a buildsystem.

                                                                                      Make’s selling point nowadays is generality: Forget C/C++: In this niche, CMake and Meson are easier to use correctly, but Make can be used for everything that reads and writes files.

                                                                                      Using Make just for its command interface may be innocent enough, but if you have recipes that do more than one command invocation, you are 100% definitely doing it wrong. This is what’s called a “glorified shellscript”, but the naming doesn’t matter: Because Make is so great, yet so horrible, it should either be used the beneficial way, by generating a target for every file, or that glorified shellscript recipe would be better off put in a script of its own.

                                                                                      1. 1

                                                                                         “What it doesn’t need”, in the case of make, is defined as what a typical C/C++ project in a Unix environment doesn’t need. It doesn’t apply to most projects in most languages these days, which have dependencies in the form of remote URLs. Pretty much all of them.

                                                                                        Make’s selling point nowadays is generality: Forget C/C++: In this niche, CMake and Meson are easier to use correctly, but Make can be used for everything that reads and writes files.

                                                                                         It can be used for everything that reads and writes files, but only as a worse replacement for the shellscripts it very much relies upon anyway. The advantages of make only manifest themselves under a set of conventions that are very much based on the GNU build system. How does it know what doesn’t need to be done in the presence of a home grown compiler?

                                                                                        Using Make just for its command interface may be innocent enough, but if you have recipes that do more than one command invocation, you are 100% definitely doing it wrong.

                                                                                        I’m confused. How is this a specific trait of make? It surely is trivially achievable with a shellscript.

                                                                                        1. 2

                                                                                          How does it know what doesn’t need to be done in the presence of a home grown compiler?

                                                                                          That is a fundamental question! If you have ever written a makefile rule, that is how: You tell it for each output file which files it depends on. The deal is that whenever an output file is needed and is older than one of its inputs, its recipe is run again. Simple.
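
                                                                                           A minimal sketch, with a hypothetical home-grown compiler mycc and hypothetical file names:

                                                                                           # out.bin is rebuilt only when input.src (or mycc itself) is newer than it
                                                                                           out.bin: input.src mycc
                                                                                           	./mycc input.src -o out.bin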

                                                                                          As you see, there is nothing special about home grown compilers or even downloading files from the internet. For something completely different, I’ve used Make to run custom tests and linters in parallel, migrate a vast number of customer databases, run video encoders and produce objective metrics of encoded videos. It is a true programming language. If your experience is more from generated makefiles (since you mention the GNU build system), I can see how this may not be obvious.

                                                                                           Then, you say that all of this is trivially achievable with a shellscript, which is true, but that would miss the whole point of just expressing the dependency tree and having it rebuilt lazily with implicit parallelization. You don’t get that for free in many other languages!

                                                                                    1. 7

                                                                                      My story with GDPR and CloudFlare.

                                                                                       Many years ago, I was using a vim plugin to gist buffers. At some point, I must have changed the default behaviour from private to public. I was working with CloudFormation, and the huge JSON files were not rendering properly, so I was using GitHub to double-check that my linter worked properly. As you can imagine there were credentials there, so … my credentials were leaked. Nearly 60s after the gist, GitHub notified us with an email to the org’s address, and we quickly rotated the creds.

                                                                                       About 1.5 years later, a new hire found the leaked file on another website that was copying and storing public gists. The website was behind Cloudflare. I contacted the owner of the website to bring down the gist, mentioning GDPR, but got no reply. The website was most likely abandoned. GDPR had just entered into force, so I sent a GDPR request to Cloudflare. The reply was borderline ridiculous: “We can’t bring down your gist, because our systems don’t work like that blah blah blah”. We exchanged a few emails, until I got tired and gave them a notice period of five days to bring down the website, or else I’d be contacting the German GDPR authority, explaining that the website was behind Cloudflare. Two days later the website, magically, came down. They sent no notice to me; I just went to visit the website and there was nothing there. Perfect timing? Maybe.

                                                                                      1. 8

                                                                                        It’s always fun how many things are only technically impossible until the threat of legal attention gets involved.

                                                                                        1. 4

                                                                                           I also use GDPR on shitty companies, for example Atlassian. I had a registration that got interrupted halfway through, and I could not continue or cancel the process. I reached out to support, who did not believe me even though I sent them screenshots. After a while I got fed up and sent them a GDPR delete request, and after it went through I could finally register with that email. I plan to do the same with Google and a Gmail address that is stuck in much the same way: I cannot change the phone number or the 2FA, and there is no Gmail support.

                                                                                          1. 1

                                                                                            I don’t understand. If you had already rotated the credentials 1.5 years ago, why did you care that there was a site out there that was displaying your old credentials?

                                                                                             Was it that you wanted to hide even the existence of some of the services you were running?

                                                                                            1. 1

                                                                                               The infra was way different by that time, so it wasn’t a big issue. I was embarrassed, I suppose.