1. 9

      This problem could potentially be used to feed process-controlled data to all tools relying on reading /proc/<pid>/stat using the recommended method (which includes several monitoring tools).

      This was also a problem for sudo.
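
      For readers unfamiliar with the pitfall alluded to above: the second field of /proc/<pid>/stat (the process name) is process-controlled and may itself contain spaces and ‘)’ characters, which is why the commonly recommended method is to split at the *last* ‘)’. A minimal Python sketch of that method (the helper is my own illustration, not code from any particular tool):

```python
def parse_stat(line):
    # /proc/<pid>/stat looks like: "1234 (comm) S ppid ...".
    # comm is process-controlled and may contain spaces and ')',
    # so split at the *last* ')' rather than naively on whitespace.
    pid, rest = line.split(" ", 1)
    comm, fields = rest.rsplit(")", 1)
    return int(pid), comm.lstrip("("), fields.split()

# A malicious name like "a) R 0" cannot shift the real fields:
assert parse_stat("1234 (a) R 0) S 1") == (1234, "a) R 0", ["S", "1"])
```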

      1. 8

        This really is an apples-to-oranges comparison - and it didn’t have to be. Yes, you can tunnel ports with ssh (oranges).. but you can also do full-on (layer 2, layer 3) Virtual Private Networking (apples)!

        From the ssh(1) man page:

             ssh contains support for Virtual Private Network (VPN) tunnelling using
             the tun(4) network pseudo-device, allowing two networks to be joined
             securely.  The sshd_config(5) configuration option PermitTunnel controls
             whether the server supports this, and at what level (layer 2 or 3
             traffic).

             The following example would connect client network 10.0.50.0/24 with
             remote network 10.0.99.0/24 using a point-to-point connection from
             10.1.1.1 to 10.1.1.2, provided that the SSH server running on the
             gateway to the remote network, at 192.168.1.15, allows it.

             On the client:

                   # ssh -f -w 0:1 192.168.1.15 true
                   # ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252
                   # route add 10.0.99.0/24 10.1.1.2

             On the server:

                   # ifconfig tun1 10.1.1.2 10.1.1.1 netmask 255.255.255.252
                   # route add 10.0.50.0/24 10.1.1.1

             Client access may be more finely tuned via the /root/.ssh/authorized_keys
             file (see below) and the PermitRootLogin server option.  The following
             entry would permit connections on tun(4) device 1 from user “jane” and on
             tun device 2 from user “john”, if PermitRootLogin is set to
             “forced-commands-only”:

               tunnel="1",command="sh /etc/netstart tun1" ssh-rsa ... jane
               tunnel="2",command="sh /etc/netstart tun2" ssh-rsa ... john

             Since an SSH-based setup entails a fair amount of overhead, it may be
             more suited to temporary setups, such as for wireless VPNs.  More
             permanent VPNs are better provided by tools such as ipsecctl(8) and
             isakmpd(8).
        1. 1

          Isn’t simple tunneling over SSH prone to TCP-over-TCP issues? Something like sshuttle would avoid these, as should OpenVPN over UDP.

        1. 9

          I use them to guess at impact of some of my comments. Popularity of specific comments also gives me a feel for what people in this community accept, reject, or find contentious. That’s been useful in adapting style or content to the audience. Sometimes the votes also give me WTH reactions or show no activity in ways I don’t learn anything from. I shrug and move on.

          Now, I could see an optional feature in profiles for people like you that hides the scores of comments. If you turn it on, you don’t see them. I also thought back on HN about something similar for names to force elimination of unconscious bias when reading comments.

          1. 5

            The way I understood @mordae, they weren’t proposing to hide the scores of comments. These do serve a useful function, as you explain. Instead, their idea was to not sum these comment (and story) scores up and display them when logged in (own score at the top right) or on user profiles.

            I actually like that comments are not anonymized. It allows me to get to know users a little over time and therefore better understand the context of their comments. That way it’s possible to understand why they are saying what they are saying without them having to explicitly state that in every single comment.

            1. 2

              In my concept, the anonymization would have a feature to reveal a user’s identity. It would just let you respond unbiased, or semi-biased, before doing that if you chose to. I agree with you on the benefits of it not being anonymous.

              1. 3

                I, and maybe other crustaceans, often search the comments of security-related articles for nickpsecurity though, so anonymity would be to your detriment :)
                Our views on politics differ, but I don’t read many of the political stories on Lobsters, or I just ignore the comments. I’d like to think most of us are mature enough to not hold a grudge and downvote every comment from a user. Who knows, maybe that kind of grudge behaviour is automatically detected already?

                1. 2

                  Very kind of you, Sir! :) Yeah, I don’t let politics get in the way of good, tech discussion with interesting people. That’s childish. I also keep an eye out for specific people. The idea I had when I looked into doing that anonymously was to put them on a whitelist that highlights whatever the automatic alias is when it’s someone on the list. You can always do something to reveal the person but knowing it’s one you follow in general or for specific tags might be enough.

                  “I’d like to think most of us are mature enough to not hold a grudge and downvote every comment from a user.”

                  There was at least one case here in the past. I don’t think it happens much, though. My tool was more for people voluntarily removing bias from reading or replying rather than just from downvotes. It was a general thing I was looking into, not just for Lobsters.

          1. 2

            How would you write a path that points to a file called ‘..’ in a directory called dir?

            You don’t. The ‘..’ is an implicit hard link and you cannot override it with a link to anything else.

            Is there a system that allows directory and file names to overlap?

            1. 4

              The author writes:

              Yes, then you need to escape dots: dir/\.\.. Sigh. And there even are multiple ways to do that. dir/\.., for example. Bigger sigh.

              I really wonder where they got the idea for backslash-escaping ., .. and /. That simply doesn’t work.

            1. 16

              This article is so confused it actually made me wonder if I might have missed some important information about paths all these years, which could have caused that…

              If /directory/file points to a file called file in a directory called directory, where does /directory//file point? Is it a file called file in a directory called directory/? Is it a file called /file in a directory called directory?

              / isn’t allowed to be part of a filename.

              It turns out this is implementation-specific.

              Only the treatment of // at the beginning of the path is implementation specific.

              Luckily, most of the time multiple slashes are to be treated as a single slash.

              The pathname resolution is covered by POSIX.

              / is a path that points to the root directory, whatever that means.

              As explained in the link above it’s the root directory of the current process, which indeed might not be the same for all processes.

              Escaping / (and other characters) is usually done with a backslash (\) character.

              As mentioned above / isn’t allowed to be a part of a filename. Even if you precede it by \ it is still not part of the filename, but always treated as a path separator. Thus dir\/file refers to a file called file located in a directory called dir\.

              What does ‘/.’ mean?

              The link above states that “[t]he special filename dot shall refer to the directory specified by its predecessor.” And since / refers to the root directory of the process, so does ‘/.’. POSIX doesn’t have a concept of file extensions.

              Never mind that not every user need necessarily have a home directory or that the concept of a ‘user’ may not even exist!

              The second link refers to the Wikipedia page for unikernels. I don’t see what a unikernel has to do with a critique of POSIX concepts like files or users. It doesn’t implement any of them and is therefore irrelevant in a discussion about them.

              This means that whenever paths are passed as arguments in a list of arguments that is separated by spaces, spaces need to be escaped.

              That’s not an issue with paths themselves, but with how shells treat whitespace. For example there’s no need to escape whitespace when passing paths with C’s execve().

              rm won’t let you remove a symlink to a directory if there’s a slash at the end

              That is because the trailing slash causes the path to be resolved to the target of the symlink, which is a directory, which rm doesn’t remove unless -r, -R or -d (some implementations) is specified. Compare the output of stat my-symlink-to-a-directory and stat my-symlink-to-a-directory/.
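
              Several of the points above (multiple slashes collapsing, a trailing slash forcing resolution to a symlink’s target) can be checked empirically. A small Python sketch doing so on a throwaway temp directory, assuming a POSIX system:

```python
import os
import tempfile

# Set up: a directory with one file, and a symlink to that directory.
d = tempfile.mkdtemp()
target = os.path.join(d, "subdir")
os.mkdir(target)
open(os.path.join(target, "file"), "w").close()
link = os.path.join(d, "link")
os.symlink(target, link)

# Multiple (non-leading) slashes are treated as a single slash:
a = os.stat(d + "/subdir//file")
b = os.stat(d + "/subdir/file")
assert a.st_ino == b.st_ino  # same file either way

# A trailing slash resolves the symlink to its target directory:
assert os.lstat(link).st_ino != os.stat(link + "/").st_ino
assert os.stat(link + "/").st_ino == os.stat(target).st_ino
```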

              1. 2

                As it turns out, the concept of paths is not unique to POSIX systems.

                1. 3

                  All of these complaints seem to be aimed at specific implementations of that concept, not the idea itself.

                  The implementation in question quite clearly being POSIX

                  1. 1

                    He’s discussing different peculiarities and incompatibilities between different implementations, one of them being POSIX. He mentions many attributes not relevant to POSIX so I would say it’s clear he’s not only discussing POSIX.

              1. 30

                It’s been a pleasure orchestrating this hand-off with both of you, @pushcx and @jcs.

                  If any of you experience performance degradation or see any error messages, please feel free to reach out. I expect we’ll do some performance tuning as we subject the server to its normal load. When reporting slowness, it would also help to see a traceroute from your location.

                1. 13

                  Congratulations on a smooth migration!

                  1. 4

                    Thanks for your work. Did the favicon go missing?

                    1. 5

                      It did, yes. Some quirk of the deployment is copying the files into a nested subdir. I’ve manually fixed them for now and we’ll keep debugging.

                      EDIT: This is fixed now.

                      1. 1

                        I can access favicon.ico. Does that link work for you?

                        EDIT: I was seeing a cached entry.

                    1. 9

                      A few other methods:

                      libetc is a LD_PRELOAD-able library, which intercepts opening of dotfiles under $HOME and opens them from $XDG_CONFIG_HOME instead.

                      rewritefs is a FUSE filesystem which lets you configure rewriting of paths similar to Apache HTTPd’s mod_rewrite. You can configure it to perform a mapping of $HOME/.* to $XDG_CONFIG_HOME/* as well.
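
                      To make the mapping concrete, here is a rough Python sketch of the dotfile rewriting both tools perform. The function and its edge-case handling are my own illustration, not code from libetc or rewritefs:

```python
import os

def rewrite(path, home, xdg):
    """Map $HOME/.foo -> $XDG_CONFIG_HOME/foo, leaving
    everything else untouched (hypothetical sketch)."""
    rel = os.path.relpath(path, home)
    # Only rewrite dotfiles under $HOME; a leading ".." means
    # the path lies outside $HOME entirely.
    if not rel.startswith(".") or rel.startswith(".."):
        return path
    return os.path.join(xdg, rel[1:])  # drop the leading dot

assert rewrite("/home/u/.vimrc", "/home/u", "/home/u/.config") == "/home/u/.config/vimrc"
assert rewrite("/etc/passwd", "/home/u", "/home/u/.config") == "/etc/passwd"
```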

                      1. 3

                        The description from libetc reads as follows:

                        “On my system I had way too much dotfiles […] For easier maintenance I wrote libetc.”

                        Really, why should I care? They don’t pop up during ls, they get backed up like all other files, and the most important ones live in a git repo. LD_PRELOADing a lib just to have a clean $HOME seems a lot like being a Unix hipster. Or maybe I’m just getting old…

                      1. 1

                        “FreeBSD continues to defy the rumors of its demise.” That’s a strange opening statement, or did I miss something?

                        1. 2

                          It’s a reference to the long standing BSD is dying joke.

                          1. 3

                            “OpenBSD leader Theo states that there are 7000 users of OpenBSD. How many users of NetBSD are there? Let’s see. The number of OpenBSD versus NetBSD posts on Usenet is roughly in ratio of 5 to 1. Therefore there are about 7000/5 = 1400 NetBSD users. BSD/OS posts on Usenet are about half of the volume of NetBSD posts. Therefore there are about 700 users of BSD/OS. A recent article put FreeBSD at about 80 percent of the *BSD market. Therefore there are (7000+1400+700)*4 = 36400 FreeBSD users. This is consistent with the number of FreeBSD Usenet posts.”

                            Oh wow. This kind of mathematical analysis on determining number of users/systems could get whoever wrote that a job at RIAA.

                            1. 2

                              Fewer than I would have guessed. Thin ice.

                            2. 1

                              Ha! Shows how much I know… Funny though.

                          1. 4

                            This is by the author of Synth who was kicked out of the FreeBSD ports community in a storm of controversy. While I don’t know anything about Ravenports I wish this effort would be spent on making Nix more universal. Or at least implementing Nix in something other than a hot mess of C++ code. And I hate manifest files, why can’t the package manager figure that out for me on install!??!

                            1. 3

                              I guess this is John’s way to make Nix more universal :) Developing dports and retiring pkgsrc moved DragonFly a big step forward.

                              1. 1

                                Would you have any links to the discussion(s) surrounding the switch from pkgsrc to dports? I’m curious about the details.

                              2. 2

                                Wait, what?! I just knew him as a long-time dports maintainer in DFly, had no idea he had since been kicked out of the FreeBSD ports. I guess I missed another layer of controversy in *BSD!

                                For how relatively small all the *BSD projects are, and being all volunteer-based effort, it’s quite amazing how often folks get ‘fired’ from the various projects. Curious — does it happen in the Linux world as often?!

                              1. 2

                                Looking at the solutions for corrupted_text I see I’m not the only one who solved it the easy way. :)

                                1. 1

                                  I just now realized the link to the PostScript version is the wrong one. Here’s the actual link.

                                  1. 4

                                    Zstandard is also being added to ZFS in FreeBSD and will make its way to upstream OpenZFS from there.

                                    1. 3

                                      Is there some context here for the unfamiliar?

                                      1. 14

                                        r (remote) commands like rcp, rlogin, rwho, etc. (NOT rsync) are/were old Unix commands, largely superseded by their ssh alternatives about 15 years ago or so.

                                        They’re unencrypted, and from a time when security basically didn’t exist, even as an afterthought. Ideally they should be removed outright and relegated to the ports tree for anyone crazy enough to want to use them.

                                        The last time I had to use them was for a Solaris install of Oracle about 5 years ago; they stick around generally because nobody updates the scripts that use them. And for Oracle, god knows why it took them so stupidly long to stop depending on the r commands for their stupid db installs.

                                        Basically, these things are like ftp/telnet, only for stuff you might do as root: copying stuff to other boxes, or logging in and running commands on other boxes. But to use them you have to punch giant, titanic-sized holes through OS security and allow stuff like man-in-the-middle attacks across potentially untrusted network traffic.

                                        Good riddance to now obsolete rubbish in essence.

                                        1. 0

                                          It is impressive to see how much things have changed, in a short time. Computers are ubiquitous these days.

                                        2. 1

                                          There was some discussion on freebsd-arch@freebsd.org.

                                        1. 7

                                          Reminds me of when Google asked them to stop using their timeservers as a default.

                                          1. 2

                                            Although google seems to have changed their mind, since now they say you can use their servers.

                                            1. 3

                                              Cloud Platform changes a lot of things. It’s now useful to have the same smeared concept of time as Google since you’re interacting with storage systems which also have the same smeared time. Additionally I assume it means SREs support public NTP now vs. potentially a SWE team.

                                              I still doubt they’d want it as the default time server for a distro.

                                          1. 4

                                            At suckless, we also use a config.mk approach.

                                            I’m not a fan of the proposed solution by the author, as it uses GNU extensions unnecessarily and is thus non-portable.

                                            Let me give you an example of a suckless Makefile, which is POSIX compliant, below. It has the same functionality as the proposed solution, modulo unnecessary debug targets you’d better solve using a -DDEBUG and adding -g to CFLAGS, but is much shorter. It also allows you to easily create a tarball using “make dist” and handles manuals as well. It allows easy extension, for instance adding MAN5 if you have section 5 manpages, and generally uses the naming conventions that have become established over the years (LDFLAGS and LDLIBS instead of LINK).

                                            To make it clear: This is not an attack against the author, but I’m sick of seeing ugly Makefiles in the wild. I hope this can be of inspiration for some people here.


                                            # example-program version
                                            VERSION = 1
                                            # Customize below to fit your system
                                            # paths
                                            PREFIX = /usr/local
                                            MANPREFIX = ${PREFIX}/man
                                            # flags
                                            CPPFLAGS = -D_DEFAULT_SOURCE
                                            CFLAGS   = -std=c99 -pedantic -Wall -Wextra -Os
                                            LDFLAGS  = -s
                                            LDLIBS  = -lpng
                                            # compiler and linker
                                            CC = cc


                                            # example-program
                                            # See LICENSE file for copyright and license details.
                                            include config.mk
                                            TARG = example-program
                                            HDR = header1.h header2.h header3.h
                                            SRC = src1.c src2.c src3.c
                                            EXTRA = LICENSE README
                                            MAN1 = $(TARG:=.1)
                                            OBJ = $(SRC:.c=.o)
                                            all: $(TARG)

                                            .c.o:
                                            	$(CC) -c $(CPPFLAGS) $(CFLAGS) $<

                                            $(OBJ): config.mk $(HDR)

                                            $(TARG): config.mk $(OBJ)
                                            	$(CC) -o $@ $(LDFLAGS) $(OBJ) $(LDLIBS)

                                            clean:
                                            	rm -f $(TARG) $(OBJ)

                                            dist: clean
                                            	rm -rf "$(TARG)-$(VERSION)"
                                            	mkdir -p "$(TARG)-$(VERSION)"
                                            	cp -R Makefile config.mk $(EXTRA) $(HDR) $(SRC) $(MAN1) "$(TARG)-$(VERSION)"
                                            	tar -cf - "$(TARG)-$(VERSION)" | gzip -c > "$(TARG)-$(VERSION).tar.gz"
                                            	rm -rf "$(TARG)-$(VERSION)"

                                            install: all
                                            	mkdir -p "$(DESTDIR)$(PREFIX)/bin"
                                            	cp -f $(TARG) "$(DESTDIR)$(PREFIX)/bin"
                                            	for f in $(TARG); do chmod 755 "$(DESTDIR)$(PREFIX)/bin/$$f"; done
                                            	mkdir -p "$(DESTDIR)$(MANPREFIX)/man1"
                                            	cp -f $(MAN1) "$(DESTDIR)$(MANPREFIX)/man1"
                                            	for m in $(MAN1); do chmod 644 "$(DESTDIR)$(MANPREFIX)/man1/$$m"; done

                                            uninstall:
                                            	for f in $(TARG); do rm -f "$(DESTDIR)$(PREFIX)/bin/$$f"; done
                                            	for m in $(MAN1); do rm -f "$(DESTDIR)$(MANPREFIX)/man1/$$m"; done
                                            1. 1

                                              While that’s a neat Makefile, any change in any header will recompile the whole project, which isn’t desirable. It also pollutes by putting all object files in the same directory as the source files, which I personally dislike, but that’s more a matter of preference.

                                              I will add targets for dist/install/uninstall.

                                              I’m interested in making it more POSIX compliant. I’ll see what I can do about that.

                                              1. 1

                                                Well, if you want to fine-grain your header management, you can always replace

                                                $(OBJ): config.mk $(HDR)

                                                with the explicit listing of dependencies.

                                                If you want to make it POSIX compliant, start off by removing the pattern-substitution rules that use “%”. They’re non-standard.

                                                1. 1

                                                  Is there a way to do an out-of-tree build with just a POSIX compliant Makefile? I’ve been on the lookout for that for quite a while.

                                                  1. 1

                                                    The typical approach would probably be to copy the makefile into the build directory.

                                              1. 2

                                                On page 86 it says:

                                                • Enable primarycache=all where working set exceeds RAM
                                                • Enable primarycache=metadata where working set fits in RAM

                                                And zfs’s manual page states:

                                                primarycache=all | none | metadata

                                                Controls what is cached in the primary cache (ARC). If this property is set to all, then both user data and metadata is cached. If this property is set to none, then neither user data nor metadata is cached. If this property is set to metadata, then only metadata is cached. The default value is all.

                                                I was confused about this at first. On the surface it looked like if you have enough RAM you want to cache less. That didn’t seem right. I know neither PostgreSQL nor ZFS that well, so is my following reasoning about the recommendation above correct?

                                                If your data fits into RAM you don’t want to cache it with ZFS, but rely on PostgreSQL’s own caching. And if PostgreSQL can’t keep it all in RAM you should leave it to ZFS to do its best at caching the blocks, which PostgreSQL requests.

                                                1. 2

                                                  While I don’t know for sure, my guess is that in the case of the working set not fitting in RAM, they are trying to minimize the cost of paging to disk by caching stuff. Also, they say before that, I think, to use Compressed ARC, which lets you cache more in RAM than PostgreSQL seemingly does (until it supports compressing its cache).

                                                  1. 1

                                                    Yes, I would guess that’s the reason. You can see on the following slides that in the primarycache=metadata case he’s also recommending allocating the bulk of system RAM to Pg shared_buffers which is where it keeps its own cache, and says (well, implies) that it’s to avoid having the same data uselessly in RAM twice. But if the dataset is significantly bigger, you might as well let ARC (and maybe even L2ARC) do its thing. And in that case, the caches will differentiate to some extent anyway; the truly hot data will stay in Pg’s cache, which means it won’t issue disk reads for it often, which means it will probably thrash right out of ZFS’s cache.

                                                    1. 1

                                                      Thanks for confirming that I wasn’t on the wrong track with my thinking.

                                                  1. 1
                                                    1. 18

                                                      The video is much better (and not censored).

                                                      1. 3

                                                        I agree. I linked to it in the post as you can’t capture the emotion in text.

                                                        1. 3

                                                          “If you are in a cube farm using your brain for a living, you are in the haves, not the have-nots. You are not in the demographic that voted Trump into office.”

                                                          First of all, the average Trump voter had an income of $72,000. That’s not poor. That’s management in most of the country.

                                                          Second, I disagree strongly with that statement. Now, I’ve been researching the 19th century because my novel is loosely steampunk. Yes, it’s better to live now than then. Fewer people die for lack of health insurance in the 21st century than died of cholera or TB or black lung or mining accidents in the 19th century, but it’s horrible either way.

                                                          Still, I find myself agreeing more with Andrew (the man on Bryan’s left) than with Bryan, even though I enjoy Bryan’s delivery style (it makes mine appear moderate). I tend not to fault the programmers who accept middle-class salaries so they can feed their families. This is an upper-management problem.

                                                          I do generally think that 2017 is an underrated time to be alive (mediocre from a US perspective, but best-ever from a world perspective) and that tech people have an underutilized power that ought to be used for better purposes. I’ve been screaming at that cloud for years.

                                                          1. 4

                                                            I tend not to fault the programmers who accept middle-class salaries so they can feed their families. This is an upper management problem.

                                                            No. If you choose to do something, you are responsible for that action. It doesn’t matter who told you to do it.

                                                            You can argue that the action isn’t immoral, or that you don’t care about the damage, or that the benefit outweighs the harm, but the action is still yours. You own it.

                                                            1. 3

                                                              I agree with you, but it’s a question of relative harm. You can:

                                                              • fail economically and fuck up your kid for a long time, or
                                                              • do your best to survive a subordinate role in something evil (corporatism) knowing that you’ll be replaced if you don’t and it makes no difference.

                                                              I don’t fault people for choosing the second.

                                                              That said, I tend toward antinatalism. When you breed, you’re creating people that your employer (in many cases, global corporate capitalism) can hold hostage. I feel like global corporatism would be overthrown in a month if people didn’t have kids. That said, people do. And, once they do, it’s arguably better for the world that they prioritize raising their kids right (which is hard to do if you keep getting fired or get blacklisted) over moral purity.

                                                              The insidious aspect is the way that global corporatism tricks people into thinking they aren’t doing any harm. When you sign up, you’re in a brain-dead subordinate position where you really aren’t doing any harm, because you could be replaced with someone off the street who’d do the same job. Once you’re high up in the system, they’ve had years to make you one of them.

                                                              1. 2

                                                                But the choice between doing what you are expected to at work to keep feeding your family, and quitting and thereby putting yourself and your family into an uncertain economic position, is not an even one.

                                                                Now, that doesn’t make one’s actions somehow not immoral, but at least it makes them understandable. If you want people to act with more integrity, you need to address the reasons that hinder them from doing so, instead of just leaving them alone with a heavily loaded choice.

                                                                Simply telling everyone “Thou shalt not steal.” does not work. We have been trying that for more than 2000 years now. You need to address the conditions that make one steal.

                                                          1. 2

                                                            Sor (Stream OR, get it?) is an rc script that reads a set of filenames from its input and applies a set of tests to them, echoing those names that pass a test and discarding the rest.

                                                            I’m not too familiar with rc, but it looks like the script executes the given command once for each input line. I couldn’t get the script to run here, so I used an equivalent sh script instead to perform some quick tests on a Subversion checkout of the FreeBSD ports tree. The times given below are the ones towards which the commands converged after numerous runs.

                                                            First only find is used to find all the regular files inside the ports tree.

                                                            % env time find /usr/ports -type f -print > /dev/null
                                                                    1.86 real         0.12 user         1.74 sys

                                                            It does all the work with a single process and performs the fastest.

                                                            The second command is the sh version of the rc script. I would guess the rc version would perform similarly, since it’s doing the same thing.

                                                            % env time find /usr/ports | sh -c 'while IFS= read -r file; do if "$@" "$file"; then echo "$file"; fi; done' '' "`command which test`" -f > /dev/null
                                                                  181.15 real         0.10 user         1.06 sys

                                                            It’s about 100 times slower than the find equivalent.

                                                            It isn’t the fastest thing in the world, but it works, and seems to work pretty well.

                                                            Not “the fastest thing in the world” is quite an understatement. Spawning a new process for every input line is probably not very efficient. We can test this hypothesis by forcing find to do the same.

                                                            % env time find /usr/ports -exec test -f '{}' ';' -print > /dev/null
                                                                  221.08 real        61.76 user       160.86 sys

                                                            And indeed the performance is not too different from the previous command. So the excellent composability of the sh (and presumably rc) variant comes at a hefty price.
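
                                                            The per-process-overhead hypothesis can also be checked outside of find and sh. This hypothetical Python micro-benchmark (the temp directory and file count are made up) contrasts an in-process file test with spawning test(1) once per file:

```python
import os
import subprocess
import tempfile
import time

# Create a small pile of throwaway files to filter.
d = tempfile.mkdtemp()
files = []
for i in range(200):
    p = os.path.join(d, "f%d" % i)
    open(p, "w").close()
    files.append(p)

t0 = time.perf_counter()
kept_inproc = [f for f in files if os.path.isfile(f)]  # one process total
t1 = time.perf_counter()
kept_spawn = [f for f in files                         # one process per file
              if subprocess.run(["test", "-f", f]).returncode == 0]
t2 = time.perf_counter()

assert kept_inproc == kept_spawn  # same filtering result either way
print("in-process: %.4fs, spawn-per-file: %.4fs" % (t1 - t0, t2 - t1))
```

On any system the spawn-per-file variant should be dramatically slower, for the same reason the sh loop above is.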

                                                            And last there’s stest, which comes with dmenu.

                                                            % env time find /usr/ports | stest -f > /dev/null
                                                                    1.88 real         0.10 user         1.03 sys

                                                            It’s basically as fast as the regular find version and demonstrates that it’s possible to move the application of the predicates to the filenames into a separate command in the pipeline while maintaining good performance. It nicely avoids naively spawning a new process for every input line.