1. 3

    I gave up and decided to email a patch to the author. I ran git format-patch -1 HEAD to package up my most recent commit, and sent that as an attachment.

    Seems like that would work fine… On the postgresql mailing list, for instance, people send regular messages with their mail client and attach patches.

    Drew wrote back … saying that they only accept patches sent through the git email protocol

    Why the insistence that git itself do the emailing? Can’t the file generated by format-patch be applied on the receiving end with “git apply” or even by the standard unix patch utility? The “git send-email or die” sounds like a bit of pedantry, unless perhaps it makes it easier for the committer to attach the desired author metadata.

    1.  

      The “git send-email or die” sounds like a bit of pedantry, unless perhaps it makes it easier for the committer to attach the desired author metadata.

      It makes it possible to review the patch inline. Even if git apply works with attachments it’s basically all-or-nothing. No way to comment on specific parts of the patch.

      1.  

        No way to comment on specific parts of the patch.

        Couldn’t you reply to the email containing the patch, quoting sections of the code? For instance, here’s commentary on a recent random postgres patch: https://www.postgresql.org/message-id/20190602214257.GA17525%40telsasoft.com

        Or am I misunderstanding what you mean by reviewing a patch inline?

        1.  

          Yes, that’s what I mean. If the patch is sent inline one just hits reply and the patch is reviewed well… inline.

          If the patch is attached as a file, it needs to be copied into the response and then reviewed inline. Still doable… but extra work for no reason. That said, this may be a good idea to include in e-mail clients that are specifically designed to handle patches (such as aerc).

    1. 1

      Correct me if I’m wrong, but UTF-32 doesn’t actually take more space on disk, does it? Since Unicode will only use as many bytes per code point as necessary, wouldn’t my words here stay at one byte per character? Edit: Ah, right, UTF-8 is the flexible one. I don’t know why anybody would even use the bigger ones.

      1. 1

        Right, UTF-8 is the flexible one. UTF-32 is useful when you want to inspect a whole codepoint at once in a program. Like you might store strings as UTF-16 in memory, but iterate over them with the U16_NEXT macro which extracts one codepoint at a time as UTF-32.
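
        For a concrete sense of that, here’s a minimal sketch using the U16_NEXT macro (assuming it’s ICU’s, from unicode/utf16.h; the string and compile flags are just illustrative):

        #include <stdio.h>
        #include <unicode/utf16.h>  /* ICU's UTF-16 macros; link with -licuuc */
        
        int main(void)
        {
        	/* "hé" followed by U+1F600, which needs a surrogate pair in UTF-16 */
        	UChar s[] = { 0x0068, 0x00E9, 0xD83D, 0xDE00 };
        	int32_t i = 0, len = 4;
        	UChar32 c;  /* one whole codepoint, i.e. UTF-32 */
        
        	while (i < len) {
        		U16_NEXT(s, i, len, c);  /* advances i past one codepoint, stores it in c */
        		printf("U+%04X\n", (unsigned)c);
        	}
        	return 0;
        }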

      1. 11

        I and a group of people are on this path as well. You can join us remotely. http://frostbyte.cc

        Two years ago I put together a reading list for myself with fundamental topics and have pretty much been following it: https://begriffs.com/posts/2017-04-13-longterm-computing-reading.html

        (As you can tell from my other lobste.rs submission, I just finished the Unicode section)

        1. 1

          Thanks! I’ll check it out.

        1. 1

          Bleh, kind of a weak article.

          I’ve been enjoying a Vim series by the “FrugalComputerGuy” (https://www.youtube.com/user/TheFrugalComputerGuy/videos)

          However my personal goal is to just RTFM and practice more. https://www.vi-improved.org/vimusermanual.pdf (Yeah, I know this is the contents of :help, but nice to be able to read/print as a PDF.)

          1. 0

            One unsaid disadvantage is, that your makefiles will most probably slowly grow and accumulate cruft when you’re not looking. At some later point in time, you may still understand what they’re doing, but no one else will. (That said, it doesn’t mean you’re automatically free from this concern with other build systems.)

            Another one is, IIRC, that make does not detect (and rebuild) when you remove a dependency file from disk. So, you’ll think the project still builds OK, until it hits your CI. Ah, you have no CI? Then until it hits your co-developer’s build time. Ah, you have no co-developer? Then, you might not realize you’ve broken your source control for a few years.

            1. 3

              make does not detect (and rebuild) when you remove a dependency file from disk

              Can you explain this one in more detail? I’m seeing a different result when I test it:

              Makefile

              hi : hi.o
              hi.o : hi.c hi.h
              

              hi.c

              #include "hi.h"
              int main(void)
              {
              	return 0;
              }
              

              hi.h

              /* hi */
              

              Now run this

              $ make
              cc    -c -o hi.o hi.c
              cc   hi.o   -o hi
              
              $ rm hi.h
              $ make
              make: *** No rule to make target `hi.h', needed by `hi.o'.  Stop.
              
              1. 2

                I think what parent says applies when the rules use a glob pattern which is quite common in Makefiles as far as I know.

                1. 1

                  The solution is to never use glob. Just specify all your source/object files in a variable.

                2. 1

                  Sorry, my recollection of the exact details is murky, I’m not an active user of make currently, and I remember the scenario may be somewhat tricky. I tried googling around with some keywords. Given that one doesn’t usually manage .h dependencies by hand, as they change too much, and instead defers it to gcc -MM, I found for example this description: https://stackoverflow.com/q/239004 Maybe that’s the issue? Sorry if you see my claims as FUDish. I think I’m right, but it was really long ago, so I can just close my mouth if that’s preferable. Otherwise, if someone knows what I’m trying to recall, I’d be super grateful for chiming in.

                  1. 3

                    It is certainly possible to write makefiles that don’t work with fresh source checkouts, but that’s possible with literally any build system. I don’t think you’re wrong, just overstating the problem a bit. And the fix is usually pretty easy: do a clean checkout from time to time and test that it builds. “A few years” seems a bit excessive.

                    1. 2

                      I would add “to a parallel make” as well. I’ve had projects build fine if done serially, but a parallel build would break. That happens if the dependencies aren’t specified correctly.

                      1. 1

                        Void Linux packages have an optional variable to disable parallel make for exactly this reason

              1. 5

                Clicks link. It’s gonna be GNU make, it’s gonna be GNU make, it’s gonna be GNU make…

                (GNU) Make is probably fine

                Confirmed.

                1. 2

                  Clicks link. Article is tagged ‘rant’. Definitely a rant. Yep, it’s tagged ‘rant’ there… but here it is not.

                  1. 2

                    I agree. It’s just a series of counterpoints with lots of examples. The examples also let the reader judge for themselves. Rants usually have more rhetoric with fewer facts to check.

                    Meta: I’d normally suggest removing the rant tag. The author classified his own page as a rant, though. calvin might have added rant here for consistency. I guess we leave it on to avoid confusing readers.

                1. 34

                  Build systems are hard because building software is complicated.

                  Maybe it’s the first commit in a brand new repository and all you have is foo.c in there. Why am I telling the compiler what to build? What else would it build??

                  Compilers should not be the build system; their job is to compile. We have abstractions, layers, and separation of concerns for a reason. Some of those reasons are explained in http://www.catb.org/~esr/writings/taoup/html/ch01s06.html. But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it to do.

                  The good news is that for trivial projects, writing your own build system is likewise trivial. You could do it in a few lines of bash if you want. The author did it in 8 lines of Make but still thinks that’s too hard? I mean, this is like buying a bicycle to get you all around town and then complaining that you have to stop once a month and spend 5 minutes cleaning and greasing the chain. Everyone just looks at you and says, “Yes? And?”

                  1. 5

                    The author could have done it in two if he knew Make. And no lines if he just has a single file project. One of the more complex projects I have uses only 50 lines of Make, with 6 lines (one implicit rule, and 5 targets) doing the actual build (the rest are various defines).

                    1. 3

                      What are the two lines?

                      1. 4

                        I’m unsure what the two lines could be, but for no lines I think spc476 is talking about using implicit rules (http://www.delorie.com/gnu/docs/make/make_101.html) and just calling “make foo”

                        1. 2

                          I tried writing it with implicit rules. Unless I missed something, they only kick in if the source files and the object files are in the same directory. If I’m wrong, please enlighten me. I mentioned the build directory for a reason.

                          1. 2

                            Right, the no lines situation only applies for the single file project setup. I don’t know what are the 2 lines for the example given in the post.

                        2. 3

                          First off, it would build the executable in the same location as the source files. Sadly, I eventually gave up on a separate build directory to simplify the makefile. So with that out of the way:

                          CFLAGS ?= -Iinclude -Wall -Wextra -Werror -g
                          src/foo: $(patsubst %.c,%.o,$(wildcard src/*.c))
                          

                          If you want dependencies, then four lines would suffice—the two above plus these two (and I’m using GNUMake if that isn’t apparent):

                          .PHONY: depend
                          depend:
                              makedepend -Y -- $(CFLAGS) -- $(wildcard src/*.c) 
                          

                          The target depend will modify the makefile with the proper dependencies for the source files. Okay, make that GNUMake and makedepend.

                        3. 1

                          Structure:

                          .
                          ├── Makefile
                          ├── include
                          │   └── foo.h
                          └── src
                              ├── foo.c
                              └── prog.c
                          

                          Makefile:

                          CFLAGS = -Iinclude
                          VPATH = src:include
                          
                          prog: prog.c foo.o
                          foo.o: foo.c foo.h
                          

                          Build it:

                          $ make
                          cc -Iinclude   -c -o foo.o src/foo.c
                          cc -Iinclude    src/prog.c foo.o   -o prog
                          
                          1. 1

                            Could you please post said two lines? Thanks.

                            1. 4

                              make could totally handle this project with a single line actually:

                              foo: foo.c main.c foo.h
                              

                              That’s more than enough to build the project (replace .c with .o if you want the object files to be generated). Having subdirectories would make it more complex indeed, but for building a simple project, we can use a simple organisation! Implicit rules are made for the case where source and include files are in the same directory as the Makefile. Now we could argue whether that’s good practice or not. Maybe make should have implicit rules hardcoded for src/, include/ and build/ directories. Maybe not.

                              In your post you say that Pony does it the right way by having the compiler be the build system, building projects in a simple way by default. Maybe ponyc is aware of directories like src/ and include/, and that could be an improvement over make here. But that doesn’t make its build system simple. When you go to the ponylang website, you find links to “real-life” pony projects. First surprise: 3 of them use a makefile (and what a makefile…): jylis, ponycheck, wallaroo + rules.mk. One of them doesn’t, but it looks like the author put some effort into his program organisation so ponyc can build it the simple way.

                              As @bityard said, building software is complex, and no build system is smart enough to build any kind of software. All you can do is learn to use your tools so you can make better use of them and make your work simpler.

                              Disclaimer: I never looked at pony before, so if there is something I misunderstood about how it works, please correct me.

                          2. 2

                            Build systems are hard because building software is complicated.

                            Some software? Yes. Most software? No. That’s literally the point of the first paragraph of the blog.

                            Compilers should not be the build system

                            Disagree.

                            We have abstractions, layers, and separation of concerns for a reason

                            Agree.

                            But the bottom line is if you ask a compiler to start doing build system things, you’re going to be frustrated later on when your project is complex and the build system/compiler mix doesn’t do something you need it do.

                            Agree, if “the compiler’s default behaviour” is the only option. Which would be silly, since the blog’s first paragraph argues that some projects need more than that.

                            The good news is that for trivial projects, writing your own build system is likewise trivial as well

                            I think I showed that’s not the case. Trivial is when I don’t have to tell the computer what it already knows.

                            The author did it in 8 lines of Make but still thinks that’s too hard?

                            8 lines is infinity times the ideal number, which is 0. So yes, I think it’s too hard. It’s infinity times harder. It sounds like a 6 year old’s argument, but it doesn’t make it any less true.

                            1. 7

                              I have a few projects at work that embed Lua within the application. I also include all the modules required to run the Lua code within the executable, and that includes Lua modules written in Lua. With make I was able to add an implicit rule to generate .o files from .lua files so they could be linked in with the final executable. Had the compiler had the build system “built in” I doubt I would have been able to do that, or I still would have had to run make.

                              1. -1

                                Compilers should not be the build system

                                Disagree.

                                Please, do not ever write a compiler.

                                Your examples are ridiculous: using shell invocation and find is far, far from the simplest way to list your source, objects and output files. As others pointed out, you could use implicit rules. Even without implicit rules, that was 2 lines instead of those 8:

                                foo: foo.c main.c foo.h
                                        gcc foo.c main.c -o foo
                                

                                Agree, if “the compiler’s default behaviour is the only option.

                                Ah, then you want the compiler to embed in its code a way to be configured for any and all possible builds it could be used in? This is an insane proposition, when the current solution is either the team writing the project configuring the build system as well (could be done in shell, for all that matters), or thin wrappers like Rust and Go are using around their compilers: they foster best practices while leaving the flexibility needed by heavier projects.

                                You seem so arrogant and full of yourself. You should not.

                                1. 3

                                  I’d like to respectfully disagree with you here.

                                  Ah, then you want the compiler to embed in its code a way to be configured for every and all possible build that it could be used in?

                                  That’s not at all what he’s asking for.

                                  This is an insane proposition

                                  I think this is probably true.

                                  You seem so arrogant and full of yourself. You should not.

                                  Disagree. He’s stated his opinion and provided examples demonstrating why he believes his point is valid. Finally, he has selectively defended said opinion. I don’t think that’s arrogance at all. This, for example, doesn’t read like arrogance to me.

                                  I don’t appreciate the name calling and I don’t think it has a place here on lobste.rs.

                                  1. -3

                                    What is mostly arrogant is his dismissal of “dumb” tools, simple commands that will do only what they are asked to do and nothing else.

                                    He wants his tools to presume his intentions. This is an arrogant design, which I find foolish, presumptuous, uselessly complex and inelegant. So I disagree on the technical aspects, certainly.

                                    Now, the way he constructed his blog post and main argumentation is also extremely arrogant or in bad faith, by presenting his own errors as normal ways of doing things and accusing other people of building bad tools because they would not do things his way. This is supremely arrogant and I find it distasteful.

                                    Finally, his blog is named after himself and seems a monument to his opinion. He could write on technical matters without putting his persona and ego into it, which is why I consider him full of himself.

                                    My criticism is that, besides his technical propositions, which I disagree with, the form he uses to present them does him a disservice by putting the people he interacts with on edge. He should not, if he wants his writings to be at all impactful, in my opinion.

                                    1. 2

                                      the form he uses to present them does him a disservice by putting the people he interacts with on edge

                                      Pot, meet Kettle.

                                      Mirrors invite the strongest responses.

                              2. 1

                                Yeah. On the flip side, too much configuration makes for overcomplicated build systems. For me, there’s a sweet spot with cmake.

                              1. 2

                                  Our local hack group (frostbyte.cc) has set up a tier 1 node on the dataforge network. We’ve been experimenting with sending single files, and I’m curious to learn how to go from that to implementing newsgroups.

                                1. 2

                                  Hm, interesting that people would get it confused. I guess the article says that in old operating systems there really was a special EOF character saved in the file.

                                  In the C standard library EOF is just an artifact of the API, a sentinel value outside the range of unsigned char. The C99 spec section 7.19.1 says

                                  EOF expands to an integer constant expression, with type int and a negative value, that is returned by several functions to indicate end-of-file, that is, no more input from a stream

                                  It needn’t be -1, but that’s the usual choice.
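
                                    That API shape is why the classic stdio loop reads into an int and compares against EOF; a plain char could collide with a real byte value. A minimal example:

                                    #include <stdio.h>
                                    
                                    int main(void)
                                    {
                                    	int c;  /* int, not char: must hold every unsigned char value plus EOF */
                                    
                                    	while ((c = getchar()) != EOF)
                                    		putchar(c);
                                    	return 0;
                                    }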

                                  1. 1

                                    Good. Now we are one step closer to git over 9P. Only need to find a couple of person-hours to do it.

                                    1. 1

                                      What would be the difference to Microsoft’s GVFS? https://github.com/Microsoft/VFSForGit

                                      1. 1

                                        That was an interesting video. Cool idea to retrieve the file blobs on request rather than up front.

                                        The motivation section was a little scary though, like how does the Windows source code require 270 GB? That sounds bloated beyond my wildest imagination.

                                        1. 1

                                          I assume there are plenty of non-source-code assets in there: Images, HiDPI icons, fonts, Photoshop files, recordings, etc.

                                          1. 1

                                            Yeah must be.

                                            Another (less intensive than GVFS) way to handle those kind of assets is with something like git-annex. Haven’t personally tried either solution though.

                                            1. 1

                                              We use Git LFS at work and it is quite fragile.

                                    1. 1

                                        Is it possible to store files, and maybe git repositories, in public places? WWW comments, pastebin, twitter, blogs, etc.

                                        Imagine infinite disk and… anonymity

                                      1. 1

                                        This project doesn’t change the storage of git repositories, it merely provides a read-only FTP interface on top.

                                        1. 1

                                            If you write a comment or a pastebin entry, you don’t change anything (read-only)

                                      1. 8

                                        Working on laarc. https://www.laarc.io

                                        Traffic exploded since last week. We’re up to >200 accounts, and the graphs are funny. https://imgur.com/a/4CMFcaT

                                        Laarc got some exposure on HN before the mods whisked the comment to the bottom of the thread. https://news.ycombinator.com/item?id=19126833

                                        https://www.laarc.io/place is filled up, and I’m not sure what to do with it next. I can’t make it bigger without killing performance. (It’s already unusable on mobile safari.) And it’s probably time to start thinking about the next thing.

                                        Any ideas for a random project like that? The origin of /place was basically “I think I’ll recreate /r/place today.”

                                        Laarc also had the dubious distinction of having the first downvoted comment today. It’ll be interesting to see how the community dynamics play out. Some people are asking to get rid of downvoting entirely, but I’m not sure whether that would be best.

                                        Maintaining >5% weekly growth is really hard. Last week it was “I wonder where I can find +15 people each day.” This week it’s “I wonder where I can find +50 people.”

                                        1. 1

                                          Can you explain the vision for laarc? I joined in January at the suggestion of nickpsecurity, but I can’t yet tell what’s special about the site. Looks just like HN, with a large overlap in submissions. Is it the quirky addons like /place, or associated realtime chat rooms that are building a tighter-knit community?

                                        1. 9

                                          Whew, that new format is repetitive:

                                          targets = [ "//:satori" ]
                                          
                                          [[dependency]]
                                          package = "github.com/buckaroo-pm/google-googletest"
                                          version = "branch=master"
                                          private = true
                                          
                                          [[dependency]]
                                          package = "github.com/buckaroo-pm/libuv"
                                          version = "branch=v1.x"
                                          
                                          [[dependency]]
                                          package = "github.com/buckaroo-pm/madler-zlib"
                                          version = "branch=master"
                                          
                                          [[dependency]]
                                          package = "github.com/buckaroo-pm/nodejs-http-parser"
                                          version = "branch=master"
                                          
                                          [[dependency]]
                                          package = "github.com/loopperfect/neither"
                                          version = "branch=master"
                                          
                                          [[dependency]]
                                          package = "github.com/loopperfect/r3"
                                          version = "branch=master"
                                          

                                          How about a simple .ini?

                                          name = satori
                                          
                                          [deps]
                                          
                                          libuv/libuv         = 1.11.0
                                          google/gtest        = 1.8.0
                                          nodejs/http-parser  = 2.7.1
                                          madler/zlib         = 1.2.11
                                          loopperfect/neither = 0.4.0
                                          loopperfect/r3r     = 2.0.0
                                          
                                          [deps.private]
                                          
                                          buckaroo-pm/google-googletest = 1.8.0
                                          
                                          1. 6

                                            TOML certainly is repetitive. YAML, since it hasn’t come up yet, includes standardized comments, hierarchy, arrays, and hashes.

                                            ---
                                            # Config example
                                            name: satori
                                            dependencies:
                                              libuv/libuv: 1.11.0
                                              google/gtest: 1.8.0
                                              nodejs/http-parser: 2.7.1
                                              madler/zlib: 1.2.11
                                              loopperfect/neither: 0.4.0
                                              loopperfect/r3: 2.0.0
                                            

                                             More standards! xkcd 927. I’m all for people using whatever structured format they like. The trouble is in the edges and in the attacks. CSV parsers are often implemented incorrectly and explode on complex quoting situations (the CSV parser in ruby is broken). And XML & JSON parsers are popular vectors for attacks. TOML isn’t new of course, but it does seem to be less used. I wish it luck in its ongoing trial by fire.

                                            1. 1

                                              YAML already has wide support so it’s quite odd it hasn’t been mentioned yet

                                            2. 5

                                              Toml can be written densely too, e.g. (taken from Amethyst’s cargo.toml):

                                              [dependencies]
                                              nalgebra = { version = "0.17", features = ["serde-serialize", "mint"] }
                                              approx = "0.3"
                                              amethyst_error = { path = "../amethyst_error", version = "0.1.0" }
                                              fnv = "1"
                                              hibitset = { version = "0.5.2", features = ["parallel"] }
                                              log = "0.4.6"
                                              rayon = "1.0.2"
                                              serde = { version = "1", features = ["derive"] }
                                              shred = { version = "0.7" }
                                              specs = { version = "0.14", features = ["common"] }
                                              specs-hierarchy = { version = "0.3" }
                                              shrev = "1.0"
                                              
                                              1. 4

                                                More attributes are to come. For example, groups:

                                                [[dependency]]
                                                package = "github.com/buckaroo-pm/google-googletest"
                                                version = "branch=master"
                                                private = true
                                                groups = [ "dev" ]
                                                
                                                1. 1

                                                  Makes sense, I don’t see an obvious way to encode that in the ini without repeating the names of deps in different sections.

                                              1. 3

                                                Very cool.

                                                In practical terms, I wonder when one would want to use Haskell in C versus using e.g., Inline-C in Haskell.

                                                I have only tried this once, when trying to write a special purpose static site generator; I had to build a couple million pages as quickly as possible, and Inline-C gave me a significant boost.

                                                1. 5

                                                  Probably if you want a little bit of C in a primarily Haskell project then inline-c would be the most convenient way to go. Whereas boosting a C program with some Haskell parsing or whatever would probably be easiest with this makefile+ghc approach. It also appears that inline-c depends on template haskell, so might be a little slower to build.

                                                  1. 4

                                                    Take myself as an example:

                                                    At my job, we have a data processing pipeline that I developed in Haskell last year. Unfortunately, the “data stream provider” that I used to feed the pipeline is no longer supported inside the company. The currently supported way of doing this sort of stuff is to use an in-house C++ framework.

                                                    The problem is that rewriting all of that Haskell in C++ is error prone and doesn’t really move the project forward.

                                                    I think what this post offers will be useful to me. I’ll try to the new C++ framework just to handle to IO part and keep all the business logic in the current Haskell code.

                                                    1. 3

                                                      I think the use cases are similar to how we do it in CHICKEN. If you want to use Haskell as a scripting/extension library for a mostly C-based project, I’d use the approach from this link. If you just want to speed up something or do an off-the-cuff C API call, inline C would be the way to go.

                                                      So basically, it depends if you’re thinking Haskell-first or C-first.

                                                    1. 8

                                                      A few notes on this otherwise excellent post.

                                                      C99 provides a macro SIZE_MAX with the maximum value possible in size_t. C89 doesn’t have it, although you can obtain the value by casting (size_t)-1. This assumes a twos’ complement architecture, which is the most common number representation on modern computers. You can enforce the requirement like this: […]

                                                      This actually assumes nothing and is perfectly portable because the standard says so.

                                                      From C89 (draft), “3.2.1.2 Signed and unsigned integers” (emphasis mine):

                                                      When a signed integer is converted to an unsigned integer with equal or greater size, if the value of the signed integer is nonnegative, its value is unchanged. Otherwise: if the unsigned integer has greater size, the signed integer is first promoted to the signed integer corresponding to the unsigned integer; the value is converted to unsigned by adding to it one greater than the largest number that can be represented in the unsigned integer type.

                                                      The rationale was explicitly to avoid a change in the bit pattern except filling the high-order bits.
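
                                                       A quick way to see it (this check assumes a C99 toolchain, just for SIZE_MAX and %zu): by the conversion rule quoted above, -1 converted to an unsigned type always yields that type’s maximum value, whatever the hardware’s representation of negative numbers.

                                                       #include <stdint.h>  /* SIZE_MAX (C99) */
                                                       #include <stdio.h>
                                                       
                                                       int main(void)
                                                       {
                                                       	size_t max = (size_t)-1;  /* conversion adds SIZE_MAX + 1 to -1, giving SIZE_MAX */
                                                       
                                                       	printf("%d\n", max == SIZE_MAX);  /* prints 1 */
                                                       	printf("%zu\n", max);
                                                       	return 0;
                                                       }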


                                                      DOS used \n\r for line endings.

                                                      As far as I know, DOS used \r\n and Mac OS (classic) used \n\r. EDIT: Mac OS (classic) used \r.


                                                      OpenBSD provides arc4random() which returns crypto-grade randomness.

                                                      arc4random() is also available on FreeBSD, NetBSD and macOS.

                                                      1. 8

                                                        arc4random() is also available on FreeBSD, NetBSD and macOS.

                                                        And on illumos systems!

                                                        1. 4

                                                          MS-DOS used ‘\r\n’ (in that order). Classic Mac OS (pre-OS X) used ‘\r’ and Unix has always used ‘\n’.

                                                          The wide character stuff was the most interesting to read.

                                                          1. 1

                                                            I stand corrected; thanks. I’ll fix my comment.

                                                          2. 3

                                                            Thank you (and the other commenters) for your corrections, I’ll update the article. Learning from the discussion is another great thing about writing these articles.

                                                            1. 0

                                                              arc4random() is also available on FreeBSD, NetBSD and macOS.

                                                              This is also helpfully defined in stdlib.h on linux and can be linked to with -lbsd.

                                                              1. 4

                                                                libbsd is a thirdparty library with its own issues because it can’t decide which BSD to use as source, resulting in different APIs and random breaks when they switched between implementations. https://cgit.freedesktop.org/libbsd/commit/?id=e4e15ed286f7739682737ec2ca6d681dbdd00e79

                                                                1. 3

                                                                 fwiw that doesn’t appear to have affected the arc4random* functions. The core issue seems to be different signatures between the BSDs. While changing implementations may be a problem for code using libbsd to port programs, it isn’t as big of a problem for software written with the library in mind. Additionally, the arc4random functions appear to have consistent signatures across the BSDs, so a breaking change like what you linked wouldn’t be necessary. As it is, libbsd is an easy and sane way to get random numbers across different unixes.
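
                                                                 A sketch of how that looks in practice (header names hedged: the BSDs and macOS declare arc4random() in <stdlib.h>, while libbsd on Linux ships it as <bsd/stdlib.h> and you link with -lbsd):

                                                                 #include <stdint.h>
                                                                 #include <stdio.h>
                                                                 #ifdef __linux__
                                                                 #include <bsd/stdlib.h>  /* libbsd: arc4random(), arc4random_uniform() */
                                                                 #else
                                                                 #include <stdlib.h>      /* BSDs and macOS declare them here */
                                                                 #endif
                                                                 
                                                                 int main(void)
                                                                 {
                                                                 	uint32_t r = arc4random();               /* 32 bits of crypto-grade randomness */
                                                                 	uint32_t d = arc4random_uniform(6) + 1;  /* value in 1..6 without modulo bias */
                                                                 
                                                                 	printf("%u %u\n", (unsigned)r, (unsigned)d);
                                                                 	return 0;
                                                                 }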

                                                            1. 4

                                                              Some of the things in the blog post like := or ?= don’t appear in the posix spec for make. Are they GNU’isms?

                                                              1. 7

                                                                Yes, along with $(shell ... ). The author should have mentioned he was using GNUMake.

                                                                1. 1

                                                                   := is almost mandatory for makefiles. If you have a shell expansion it will get run every time unless you use :=. Many of the extensions in GNU make are simply unreproducible in POSIX make.

                                                                1. 1

                                                                  So how do you manage char* and void* needing to be different addresses? Sounds like a recipe for serious breakage

                                                                  1. 1

                                                                    Can you explain the context or machine you have in mind? I don’t know what you mean about these types needing to be different addresses.

                                                                    1. 1

                                                                      The article said “This machine uses a different numbering scheme for character- and integer-pointers. The same location in memory must be referred to by different addresses depending on the pointer type. A cast between char* and int* actually changes the address inside the pointer.” And I extrapolated from that to assume no pointer change is necessarily safe.

                                                                      1. 1

                                                                        The compiler will translate the addresses properly when you cast the pointers. After the cast it’ll point at the same place in memory, but using a different (type-appropriate) address. The question of when it even makes sense to cast from int* to char* is another matter since that kind of thing might run into endian issues. But yeah the DG Eclipse is an interesting machine and helps discredit the notion that a pointer is “just an address.” Pointers are addresses with types. Of course the type matters when incrementing pointers too.
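
                                                                         A tiny illustration of that last point (on an ordinary byte-addressed machine the two pointers hold the same address after the cast, yet the increment is still scaled by the pointed-to type):

                                                                         #include <stdio.h>
                                                                         
                                                                         int main(void)
                                                                         {
                                                                         	int  xs[2] = { 1, 2 };
                                                                         	int  *ip = xs;
                                                                         	char *cp = (char *)xs;  /* same location, viewed as bytes */
                                                                         
                                                                         	/* ip + 1 advances by sizeof(int) bytes, cp + 1 by a single byte */
                                                                         	printf("%p %p %p\n", (void *)xs, (void *)(ip + 1), (void *)(cp + 1));
                                                                         	return 0;
                                                                         }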

                                                                        1. 1

                                                                           Actually I ran into a case for casting from char* to another pointer type. In stdarg.h the va_list type can be implemented as char*, moved between arguments, and then cast to whatever T* is needed to read the argument as a given type.
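
                                                                           Roughly like this classic textbook sketch (simplified and hypothetical: it ignores alignment and integer promotion, and assumes arguments sit contiguously on the stack, which modern register-passing ABIs don’t guarantee):

                                                                           typedef char *va_list;  /* the argument cursor is literally a byte pointer */
                                                                           
                                                                           #define va_start(ap, last)  ((ap) = (char *)&(last) + sizeof(last))    /* point just past the last named argument */
                                                                           #define va_arg(ap, T)       (*(T *)(((ap) += sizeof(T)) - sizeof(T)))  /* step over one T, read it through a T* cast */
                                                                           #define va_end(ap)          ((void)0)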

                                                                  1. 8

                                                                    I’m curious, how many of you are using Mutt as your daily email client at work? How do you cope with calendar invites, frequent HTML emails, …?

                                                                    1. 3

                                                                      I use mutt for personal email, so calendar invites is not an issue for me. I also have mutt use lynx to handle the case when the sender only sent HTML (usually, if there’s an HTML section, there’s also a plain text section). For work, I use whatever they give me—I like keeping a separation between personal and work stuff.

                                                                      1. 1

                                                                        Do you mean invites aren’t an issue because you don’t use them or because you solved this? If so, how?

                                                                        I read in another comment that it’s just html, and to be fair as I come to think of it, it’s been a long time since I had to care about mutt and calendars, so maybe it was just a dumb link to click through the terminal browser.

                                                                        1. 2

                                                                          I don’t use invites or calendar things via personal email, and if anyone has sent me one, I haven’t noticed.

                                                                          I did start using mutt at a previous job where I had to chew through a ton of daily mail (basically, all email sent to root on all our various servers were eventually funneled to me) and I found mutt to be much faster than Thunderbird (which should indicate how long ago this was). It was using mutt for a few weeks that prompted me to switch away from elm (which really dates me).

                                                                      2. 3

                                                                        IIRC, when I used mutt regularly, I used to have it pipe html emails straight into elinks to render them inside mutt. Didn’t need calendaring at the time.

                                                                        1. 2

                                                                             I gave up my resistance to modern email quite some time ago; it’s simply too much hassle, personally speaking, dealing with calendaring and rich media content in email to still use a console-based MUA, but that being said I really miss the simplicity and light weight of Mutt.

                                                                          Mutt was my go-to client for many, many years, and I feel tremendous nostalgia when I am reminded that it’s still actively maintained and indeed has a user base. Bravo. :-)

                                                                          1. 2

                                                                            How many emails do you handle a day? I do about 200, though I need to read or skim all, I only reply to about 1/10th of them… but I can’t imagine keeping up with that in any of the gui clients I have had. With mutt, it feels like nothing.

                                                                            1. 1

                                                                              I’m trying to do more and more with mutt, gradually using the GUI client less. Still haven’t configured a convenient way to view html or attached images but the message editing is nice. I hook it up to vim:

                                                                              set editor='vim + -c "set ft=mail" -c "set tw=72" -c "set wrap" -c "set spell spelllang=en"'
                                                                              

                                                                              This mostly formats things correctly, and allows me to touch paragraphs up by hand or with the “gq” command. I can also easily add mail headers such as In-Reply-To if needed. In some ways my graphical client is starting to feel like the constrained one.

                                                                            2. 2

                                                                              I’ve been using Mutt for the past 15+ years for personal email and 5+ years for work - even with Exchange IMAP (special flavour) at one point.

                                                                              I mostly ignore HTML email - either there’s a text/plain part or HTML->text conversion is good enough - there are occasional issues with superfluous whitespace and it can look a bit ugly when plenty of in-line URLs are being used but these are not that common.

                                                                              For calendaring I still use web - we’re on G Suite - but am hoping to move to Calcurse at some point (still not sure how to accept invites, though). Bear in mind, calendar != email, and Mutt is an email client - once you accept it, you’ll be much happier :^)

                                                                              1. 1

                                                                                I used it 2015-mid 2017 but ended up moving back to Thunderbird and even web clients. It wasn’t worth the effort. If I didn’t have to handle all my configs to get a decent setup (imap, gpg, multi-account, addresses) then I’d consider using it again. I love the idea of not having to leave my term.

                                                                                1. 1

                                                                                  I use mutt daily and have my mailcap set to render html email in lynx/w3m/elinks. It’s sufficient to see if I then need to switch to a GUI mail client. For GUI, I have previously used Thunderbird with DAVmail and currently just use the Outlook client.

                                                                                  1. 1

                                                                                    I use (neo)mutt as my daily personal email. HTML isn’t an issue, but forwarding attachments and dealing with calendar invites is embarrassing.

                                                                                    Usually I use the bounce feature into my work email (Protonmail), which causes spf-related spam flags to get set, but generally gets the job done.

                                                                                    I self-host my email so the pain threshold is quite high for me to start configuring RoundCube (or whatever the kids today use) or even IMAPS.

                                                                                    PS. not using Google is a bit embarrassing as well, as the email and Nextcloud calendar are so disconnected, but it works better than mutt ;)

                                                                                  1. 2

                                                                                    If votes are actually being counted, I vote nah.

                                                                                    I haven’t noticed these “growing pains” except in the increasing numbers of dismayed meta posts like this one.

                                                                                    1. 2

                                                                                      curl author:

                                                                                      I’m leaving Mozilla

                                                                                      lobste.rs:

                                                                                      curl author is leaving Mozilla!

                                                                                      me: Uh, OK.