Threads for z3bra

  1. 3

    Just don’t roll your own email.

    Any reason? I’ve been using MailInABox for almost 2 years without an issue.

    1. 1

      Getting outbound mail delivered to the inbox is difficult, because Google seems to maintain an internal whitelist that isn’t human-controllable. Not on the whitelist? You go to spam, get bounced, or devnull’d.

      I still do it though, because I think it’s important to keep trying.

    1. 24

      Good self-hosted software really needs to have leak-proof abstractions. Too many leaks means too much admin intervention, which is in short supply for hobbyists.

      Gitea is one that does this well IMO. A single binary, a single config file, and a single data directory are all key. Contrast this with my MediaWiki instance, which needs a dozen packages installed, has its config split between httpd.conf and LocalSettings.php, and has its data split between static files and database files. Not as bad as some, but still not ideal.

      1. 3

        Configuration woes are exactly why I’m considering writing my own web server instead of using Apache or Nginx. My needs are simple:

        • Static files only.
        • TLS (that kind of sucks but I don’t have a choice).
        • Content negotiation for languages (some of my posts are dual English/French).
        • Nice to have: cache everything in RAM (my website is that small).

        Then maybe one day I’ll stare into the SMTP abyss.

        1. 24

          You sound like a person who is yet to discover the warm convenience of https://caddyserver.com/
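
          For a static site, the whole config can fit in a handful of lines, and TLS certificates are obtained automatically. A minimal Caddyfile sketch (hostname and path are made up):

          example.com {
              root * /var/www/example
              file_server
          }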

          1. 2

            I am indeed. Thanks for the tip.

          2. 4

            Using libretls makes TLS painless.

            1. 3

              Nice to have: cache everything in RAM (my website is that small).

              Since you have a static site, I’d assume this is mostly handled by the filesystem anyway, minus compression. I wonder how much one really gains from it, especially when using the right syscalls.

              Then maybe one day I’ll stare into the SMTP abyss.

              If you want a simple SMTP config OpenSMTPD is the way to go. See the examples section of their man page.
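
              For reference, the listener/action/match split looks roughly like this (adapted from the smtpd.conf(5) examples, so treat it as a sketch rather than a drop-in config):

              table aliases file:/etc/mail/aliases
              listen on lo0
              action "local" mbox alias <aliases>
              action "outbound" relay
              match from local for local action "local"
              match from local for any action "outbound"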

              Of course that doesn’t cover delivery (IMAP, etc.) or anti-spam measures. The good thing here is that it doesn’t change much.

              1. 1

                Then I’d advise going full OpenBSD and using opensmtpd, httpd and relayd, for the config simplicity and practical usage.

            2. 1

              Making self-hosting easy would be very possible, but I think the amount of work it would take is just too much. For an ideal system, everything needs to be standardized. There needs to be some way to just browse a UI and click “install”.

              Yes, I know there are many projects that advertise this, but none of them actually work well. It’s a monumental amount of work for these projects to patch and configure every service so it fits a set of standards for backups, SSO, webserver access, certificates, etc. And last I checked, these projects were not containerized, so OS updates caused major problems: PHP and the like would update underneath the running services and break them.

              And then there is just no money in it to make it worth the effort.

            1. 3

              hk is pretty interesting per se! And your knowledge of ffmpeg format conversion and post-processing is really valuable; I’ll steal a good bit of it 😉

              I’ve been using a combination of wmutils and xrectsel to do something similar. I extended your solution with the ability to record a hand-selected region of the screen, or arbitrary coordinates. I’m using different tools for that, which are more specific than simply parsing X window output (which is far from ideal IMO).

              • xrectsel lets you draw a region on the screen, and reports coordinates
              • slw / pfw (from wmutils) both report an X window ID (either selected by clicking it, or using the focused one)
              • wattr reports various window attributes (xywhb gives X, Y, Width, Height, Border width)
              • randr is a hacky tool I wrote to report the monitor size where the mouse cursor is (useful on multi-monitor setups)

              Here’s a showcase: ffmpeg-coordinate.webm

              1. 1

                Lovely and simple, as always. I’d love to steal that snip screenshot script off you ;)

                1. 3
                  #!/bin/sh
                  # requires imagemagick, xrectsel, xsel
                  png=$(mktemp -p /tmp x11-snip.XXXXXXXX.png)
                  import -window root $png
                  convert $png -crop $(xrectsel '%wx%h+%x+%y') $png
                  printf '%s' "$png" | xsel # Optional, put image path in clipboard for convenience
                  display $png
                  
                  1. 1

                    Thanks! I tried searching for your rec script too but can’t find where you keep such things. Can you please share a link? I’m quite interested to see your script and learn how you do it vs the script in the article. Cheers

                    1. 1

                      I don’t share them online, that’s why you couldn’t find anything. The rec script is basically the same as the above, but using ffmpeg rather than convert. And OP’s ffmpeg settings are much better in terms of quality than what I came up with.
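
                      For the curious, a rough reconstruction of what such a rec script could look like (an untested sketch, assuming ffmpeg built with x11grab, and the same xrectsel as above):

                      #!/bin/sh
                      # requires ffmpeg (x11grab), xrectsel
                      out=$(mktemp -u /tmp/x11-rec.XXXXXXXX.mkv) # -u: only generate the name
                      geom=$(xrectsel '%wx%h %x %y')             # e.g. "640x480 100 200"
                      set -- $geom
                      # record the selected region; stop with 'q' or ^C
                      ffmpeg -f x11grab -video_size "$1" -i "${DISPLAY:-:0}+$2,$3" "$out"
                      printf '%s\n' "$out"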

              1. 5

                While this is true (arrays are multiple objects of the same type packed together, while pointers only store memory addresses), it doesn’t make much difference in their usage, which is why the shortcut “arrays are pointers” is usually taken.

                The only differences between them that I can think of (regarding usage, not semantics) are:

                • no pointer arithmetic can be done on arrays
                • actual in-memory object sizes differ (as reported by sizeof())
                char a[] = "12345"; // 5 chars + NUL terminator
                char *p = a;
                
                printf("a => %zu bytes\n", sizeof(a)); // 6 bytes
                printf("p => %zu bytes\n", sizeof(p)); // 8 bytes (on amd64)
                
                printf("++p: %s\n", ++p); // 2345
                printf("++a: %s\n", ++a); // compilation error: cannot increment value of type 'char[6]' (clang 14.0.0)
                
                

                But IMO when you use either of these techniques (in-memory object size or pointer arithmetic), you should already know the difference between the two.

                I’ve happily used this shortcut to teach pointers to newcomers in C. Arrays are a much simpler concept to understand than pointers, so once they’ve got arrays settled and start dealing with pointers, I leverage their knowledge of arrays to explain pointers. I mention that there are subtle differences of course, but also that they shouldn’t care about them just yet. And I think that anyone who has ever tried to teach pointers to a student will agree with me there: pointers are complex enough on their own without bothering with the differences I stated above.

                1. 6

                  sizeof isn’t the only footgun. Consider:

                  char foo[] = "12345";
                  char *foo = "12345";
                  

                  Modern compilers will warn on the second of these. The first allocates new storage in whatever scope you are writing in and copies the string into it. The second will store the string in read-only storage and give you a pointer. Generally, on a vaguely modern OS, the string on the second line will be in the read-only data section and will trap if you try to write to it. On older systems and some systems without an MMU, it will not trap, but if you do write to it then the next bit of code that tries to operate on the same string literal will be surprised to discover that it’s been modified.

                1. 9

                  As far as I understand, this thing stores secrets as encrypted files on disk, a daemon sits on top of those files, and clients call the daemon to read and write. The daemon has a notion of being locked or unlocked, and must be unlocked before secrets can be accessed. But it seems like that state isn’t per client or per session but rather global, meaning client A can unlock the daemon and disconnect, and then client B can read secrets without needing to unlock?

                  If that’s the right reading, I don’t see how it makes sense. But maybe it’s not the right reading.

                  1. 2

                    https://git.sr.ht/~sircmpwn/himitsu/tree/master/item/cmd/himitsud/cmd.ha#L76 seems like the right reading… though honestly it seems like you could “quickfix” this by having a client pass in a random UUID as their ID (though you’d want to share that around for certain use cases…)

                    This kinda reminds me of going real deep into sudo and thinking about how that ends up working (well, at least the standard plugin setup, where it tracks which processes successfully authenticated).

                    1. 1

                      client A can unlock the daemon and disconnect, and then client B can read secrets without needing to unlock?

                      Not exactly. From what I understand, the daemon spawns a predefined “granting” program, which the user must accept before the secret is provided.

                      So client A asks for a secret, and the user must first unlock the store with their passphrase. Then the passphrase (or rather, the key derived from it) is stored in memory by the daemon. When client B asks for a secret, the daemon spawns a dialog asking the user to grant access to a secret from client B. If the user accepts, the daemon replies with the secret.

                      This is IMO the same thing as ssh-agent or gpg-agent, which hold the key in memory and simply provide it to whoever can read the socket file.

                      1. 1

                        Not exactly. From what I understand, the daemon spawns a predefined “granting” program, which the user must accept before the secret is provided.

                        Interesting. I’m not familiar with this architecture. Can you point me to the bit in Himitsu which does this? What does it mean for a user to “accept” a granting program?

                        So client A asks for a secret, and the user must first unlock the store with their passphrase.

                        Is a client the same as a user?

                        When client B asks for a secret, the daemon spawns a dialog asking the user to grant access to a secret from client B. If the user accepts, the daemon replies with the secret.

                        What does it mean to say “the daemon spawns a dialog”? Does the daemon assume that all clients are on the same host?

                        This is IMO the same thing as ssh-agent or gpg-agent, which hold the key in memory and simply provide it to whoever can read the socket file.

                        Ah, okay, key bit is here, I guess:

                        const sockpath = path::string(&buf);
                        const sock = match (unix::listen(sockpath, net::sockflags::NOCLOEXEC)) {
                        

                        It binds only to a Unix domain socket, which by definition is only accessible from localhost. Then I guess we’re at the second branch of my confusion block ;) in a sibling comment, namely

                        If connections can only come from localhost, then I can’t quite see why you’d use a client-server architecture in the first place — AFAICT it would be simpler and safer to operate on the filesystem directly.

                        Or: what makes the ssh-agent architectural model for secret management the best one?

                        1. 2

                          Check out the himitsu-prompter(5) manpage for info on the granting program. I didn’t use it at all, I just read all the documentation because I find it quite interesting. To answer all your questions: the daemon is configured to spawn a program, the prompter, whenever a client (external program) requests access to a secret. This program must ask the user for permission to provide the secret (basically a grant/deny dialog window). Once access is granted, the requesting client gets a reply from the server.

                          A client is any application that can write to the Unix socket created by the daemon (permission is effectively granted by the user spawning the himitsu daemon).

                          Regarding the Unix socket, my guess is that the project is aiming toward the possibility of using the program over the network via TCP, given that it’s heavily inspired by the factotum server on Plan 9 (which is basically the same thing, but over the net via the 9P protocol).

                          1. 1

                            the daemon is configured to spawn a program, the prompter, whenever a client (external program) requests access to a secret. This program must ask the user for permission to provide the secret (basically a grant/deny dialog window)

                            If the daemon accepts requests for secrets on a Unix domain socket, then there is no way for it to know that a dialog window is observable by the client. The client can be on a different host, with its requests shuttled through a proxy that listens on TCP and sends to the local Unix socket.

                            If “locked/unlocked” is a state that’s per-session, or per-connection, then no problem! But I don’t see anything which indicates that’s the case. It seems to be a state that’s per-daemon, which I understand as a singleton per host. Is that true?

                            1. 2

                              The client (say, a mail client that needs access to your IMAP credentials) performs the request via the Unix socket. Upon receiving that request, the daemon itself spawns a runtime-configured program (say, /usr/local/bin/himitsu-prompter) and talks to it over stdin/stdout (not a Unix socket this time). This means the dialog must run on the same host as the daemon. The program can spawn a dialog window, send a push request to a phone, or even ask for a fingerprint check to confirm acceptance by the user. If the program (which is a child of the daemon) returns 0, the request is accepted and the daemon delivers the secret to the requesting program (the mail client here). Otherwise, the request is denied.

                              You’re right about the store state though, it is locked per-daemon. There can be multiple daemons running on the same host though (like, one per user). There is no notion of “session” here, which could be a problem when spawning the prompter program (eg, to retrieve the $DISPLAY variable of the current user). But I’m confident that improvements will be made over time, like a session-bound prompter for example.

                              1. 1

                                Thanks a bunch, this was a great explanation and I learned some things I didn’t know before.

                                If I understand correctly, the daemon receives requests for secrets from clients that connect over a Unix domain socket. Requests cause the daemon to spawn a child process, which it communicates with over stdin/stdout. The daemon will issue commands, basically requests, to the child process, based on the details of the client request, as well as its own internal “locked” or “unlocked” state. The child process is assumed to communicate with “the user” to get authentication details like passwords, and a successful return code is treated as consent by “the user” to authorize the client request.

                                If that’s basically right, then I get it now. It’s a secrets storage manager — for desktop users, managing secrets on the local filesystem. That’s fine! But when I read “secrets storage manager” I think HashiCorp Vault, or maybe Bitwarden; definitely not ssh-agent ;) which was the root cause of my confusion.

                                edit: I guess I’m still confused by the design, though. Interacting with this system requires clients to speak the IPC protocol, which requires code changes in the client. If the receiving daemon is only ever accessible on a Unix domain socket, and therefore clients will only ever connect to servers on localhost, then, by definition, the clients should have access to the same filesystem as the daemon, right? In that case, it’s not clear to me what value the intermediating daemon is providing. Couldn’t the clients just as easily import some package that did the work of the daemon, spawning the prompter child process and reading the secret data directly from the filesystem themselves? I guess I see some value in that the daemon architecture avoids the need for programming-language-specific client libraries… is there something more?

                                1. 2

                                  The process you describe is correct. This is indeed very different from Vault. It’s more of a password manager that clients can interact with to fill in login credentials.

                                  Regarding your idea about using a client library, this would cause a major issue: the client would then have the ability to unlock and read the full keystore, as you’d provide the master password to every single application and trust them to only read the secrets they need. It would also require providing the master password to each new application, as there wouldn’t be a « master process » running to keep the keystore unlocked (or rather, in « softlock » state, as the documentation puts it). And as I stated earlier, I can see this program move toward using TCP as well for client requests, given its similarities to factotum(4). The latter is an authentication daemon that can be queried to authenticate users over a network, like you’d query an LDAP server for example.
                                  I think this would require a bunch of changes to the daemon though to run in standalone mode, like TLS encryption over TCP, and possibly the ability to « switch » from one keystore to another, to provide secrets for multiple users. This is risky though, so I think that for now, local-only usage is more than enough for an initial release.

                      2. 1

                        My current setup with pass and GPG behaves in the same way. Why is this a problem, in your opinion?

                        1. 4

                          Say I have two sandboxed applications, one of which I want to grant keyring access to and the other of which I don’t. Perhaps the second doesn’t have any legitimate reason for keyring access and it asking for keyring access is going to make me suspect it’s compromised.

                          It would be useful to be able to gate access for these apps differently, without the legitimate app unlocking the keyring “for” the illegitimate app.

                          1. 1

                            This is exactly how it works. Applications do not get granted keyring access directly; they query the daemon, which then asks the user for permission to provide access to a specific key, for a specific application. This is referred to as « softlock » mode in the documentation. So if an illegitimate application requests access to a secret, it won’t get anything without the user’s approval.

                          2. 3

                            What prevents an arbitrary third party from connecting to the relevant server, issuing a request (without any sort of authentication or authorization) and then receiving sensitive information?

                            If connections can come from multiple hosts, then I can’t quite see how a single shared security context for all of those users isn’t a fatal security risk. If connections can only come from localhost, then I can’t quite see why you’d use a client-server architecture in the first place — AFAICT it would be simpler and safer to operate on the filesystem directly.

                        1. 19

                          I’ll ignore the rant which is mostly “I want my techno-utopia and ignore we live in a society”, but just wanted to raise some things which are just wrong:

                          What people want for? Encryption? Then enable it by pointing to https://!

                          Also privacy. But with a self-signed certificate, both have next to no guarantees, because anyone who can interact with the traffic on the wire can replace the certificate.

                          Possibly we use some overlay network like Yggdrasil, where TLS is just pointless.

                          This is mixing up network layers. Whether the routing to your network is encrypted is separate from whether you’re authenticating your service endpoint. Those may be handled on the same server next to each other, but that’s the closest they get.

                          Some browsers used OCSP, that literally leaks your intentions about visiting different entities to third-parties in real time.

                          Browser-side OCSP querying is dead. Turn on the must-staple option and you can handle it completely server-side with no information leak.

                          and now we expect them to regularly synchronize CT’s Merkle Trees from various independent sources?

                          CT is not for the end user. It’s for audits and holding the CA accountable by parties which care about the trust network.

                          1. 3

                            Yggdrasil does “authenticate” the endpoint, that’s a feature of the overlay it provides: the public key is derived from the IPv6 address of the server, so if the endpoint can decrypt the traffic, it means you reached the correct server. This doesn’t account for virtualhosts though, so it’s more limiting than HTTPS.

                            1. 4

                              I mean endpoint as in service endpoint, not as in IP. But even for IPs, yggdrasil allows you to advertise a prefix and authenticate it at the router. What actually happens within that prefix, or on specific ports of a host is outside of yggdrasil’s authority.

                              So the network auth loses visibility of ports, hostnames, and potentially the actual machines handling the requests. We tried to go hard on network separation for security some time ago; there are good reasons we’re pushing for zero-trust these days instead.

                              1. 2

                                I believe you have that backwards, the IPv6 address is derived from the public key not the other way around.

                                Depending on how you use it, it only derives 64 bits of the IPv6 address (since it assigns you a /64), or I suppose actually only 57 bits (because all IPs in the network share the first 7 bits) - which probably isn’t enough to provide secure authentication.

                                1. 1

                                  There’s not much you can do about this. If you want to support SLAAC, then you need to hand out /64s for a LAN. The prefix takes another 7 bits, so you have 57 left. Given an equal choice (which I realize isn’t the case here), I’d much rather have IP-level encryption than the complicated handshake/dance of HTTPS.

                            1. 2

                              This is very interesting, and it’s a big can of worms as I see it. I don’t see this technology becoming a new authentication factor, because there are a lot of ways your “behavior” can be altered. Say I cut myself with an envelope: my typing behavior will certainly change a lot. Or if I use my phone rather than my laptop, or a keyboard with a foreign layout… Imagine being in a foreign country, and being unable to withdraw some cash because “your typing behavior is unusual” (that would make a cool popup though!). That would suck…

                              Now to push the author’s idea even further: you could probably write similar “typing delay” code for programmable keyboard firmware like QMK. Keyboards would then be seen as next-gen privacy devices that work independently of the system you’re typing on. This sounds pretty scary…

                              Edit: Just realised that the article is from 2015. I wonder how these companies have been doing for the last 6 years. Did these technologies become more popular? The fact we don’t hear much about them can be either relieving or plain scary…

                              1. 2

                                I don’t think this is going to replace passwords; the main use I see for this is user tracking via fingerprinting.

                              1. 2

                                This is very well done. Explaining all the checks step-by-step is definitely a good way to help people understand this tedious and complex process that is validating email senders.

                                There seems to be a bug with DKIM key retrieval though, because it states that my email doesn’t pass DKIM verification. However, it does pass successfully on https://mail-tester.com. This could be a problem in the DNS record parsing, as I formatted mine as multiple chunks enclosed in “” (for the multi-line public key).
                                Now I’m genuinely curious to know whether that’s a bug in the tester (which I hope!), or whether my emails would eventually be dropped by some other mailers because of that formatting. Would anyone have any insight on this?
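
                                For reference, the multi-chunk formatting in question looks something like this in a zone file (key truncated here); as far as I know, RFC 6376 requires verifiers to concatenate the strings before use:

                                mail._domainkey IN TXT ( "v=DKIM1; k=rsa; "
                                        "p=MIIBIjANBgkqhkiG9w0BAQEFAAOC..."
                                        "...IDAQAB" )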

                                1. 2

                                  I got the same error - it claimed that no DKIM was present, even though it is (and parties like Google seem to accept it).

                                  1. 0

                                    I got a bug where I’ve used a:mail.example.com in my SPF policy, and the IPv6 address I sent from doesn’t match according to the tester. The mail is still accepted thanks to DKIM (I don’t have the problem you mentioned).

                                    I didn’t check whether it works better with IPv4, but it seems there are some disturbances in the force.

                                    So apparently there was something wrong with my SPF policy after all: it had a syntax error. But learndmarc just told me that my mail matched -all; it simply skipped over tokens it didn’t understand instead of telling me my SPF policy was syntactically wrong. When you press the looking glass next to the domain on the right side, you get sent to a page that checks your SPF policy and shows the error.

                                  1. 1

                                    I’ve been using Yggdrasil for more than a year now, and I love it. All my servers and my workstation are connected to the network, and I use it to ssh between them.

                                    The coolest aspect of Yggdrasil is IMO the built-in encryption at the network interface level. Knowing that any p2p connection is fully encrypted AND authenticated is a huge step forward for full encryption, and it puts the encryption where it should be: at the link level rather than the application level. This means that older protocols like telnet, smtp, gopher, irc, … are all fully encrypted now, and there is no need to bother with encryption when implementing them.

                                    My only real question about it, is as follows:

                                    Assuming Yggdrasil becomes a thing and replaces the clearnet, how would ygg nodes peer with each other? In the current implementation, Yggdrasil needs an established network (either IPv4 or IPv6) to set up the peering between nodes before they can start communicating.
                                    Would it be possible to simply cut that part, and have Yggdrasil directly assign IPv6 addresses to the network card and communicate directly with other nodes?

                                    1. 2

                                      Mesh networks have always intrigued me, but they never seemed to work/scale all that well. It sounds like Yggdrasil might actually do well.

                                      Up to version v0.3.13 they had the IfTAPMode option to create a TAP interface. My guess is that you could’ve used that to bridge with a physical adapter. That way your network card would get a Yggdrasil-based IPv6 address. It could discover Yggdrasil on your local Ethernet using NDP, which was implemented.

                                      As a way to replace the current IPv6 internet, you can imagine that the modem/router supplied by your ISP runs a Yggdrasil node, and all the devices on the local Ethernet run one too. The ISP is also a Yggdrasil node that your router connects to. Now you’ve replaced the traditional IPv6 internet with a Yggdrasil IPv6 internet. Of course, in practice ISPs won’t support Yggdrasil.

                                      But the nice thing is that it’s all one flat network with no routing tables, so anyone can add links/peers and the entire network can make use of them. So you could envision your WiFi access point peering with your neighbors’, and theirs with their neighbors’, and so on, creating a giant mesh network.

                                      As cool as it is, I do have 2 reservations:

                                      • As far as I can tell it has never been tested against people adding bad routes, on purpose or by lack of knowledge. For example, someone peering from their home connection with their 2 VPN servers on opposite sides of the planet. It seems very tempting to do, because then you have a “direct” connection to your servers instead of going through several Yggdrasil peers. Except that connection still goes through multiple hops over the traditional internet, while the routing in Yggdrasil assumes you are adding direct wired/radio links. From reading their blog post Practical peering, it sounds like you could severely degrade the network’s performance this way.

                                      • The public keys are truncated to 64 bits for seemingly no good reason. IPv6 is 128 bits. They have to use a 32-bit prefix in order to not conflict with normal IPv6 internet usage. But then they supply a 64-bit network to each node? Why? The only reason given is that you might want to connect low-powered devices that can’t run their own Yggdrasil node to the network. Who is going to connect 2^64 low-power devices to a single Yggdrasil node? My guess is that it’s to allow the low-powered devices to pick a random address rather than having one assigned. But this seems like such a rare use case to me that I would have rather seen this address space reduced to 16 bits, and the public key truncated to just 80 bits, which seems a lot more secure.

                                      1. 1

                                        Thanks for the explanation! I didn’t think about NDP to discover other nodes; it makes quite a lot of sense indeed.

                                        As far as I can tell it has never been tested against people adding bad routes

                                        I read that Yggdrasil uses a spanning tree for the routing table, which, as I understand STP, implies that if two routes lead to the same network, one of them will be disabled in favor of the other.

                                        The public keys are truncated to 64 bits for seemingly no good reason

                                        My guess here is that it’s pretty “common” among service providers to give out a full /64 to their customers, so they went with the same idea here.
                                        Keep in mind that Yggdrasil is still a proof of concept, so they don’t need to “save” IPv6 addresses. If it ever gets adopted, it’ll probably be reworked for “practical” use cases, and the keys may eventually grow in size (or become variable in size, maybe?).

                                        1. 1

                                          I read that Yggdrasil uses a spanning tree for the routing table, which, as I understand STP, implies that if two routes lead to the same network, one of them will be disabled in favor of the other.

                                          Not much of a mesh network then?

                                          1. 1

                                            Indeed, but Yggdrasil was never meant to be a mesh network in the first place. I agree that the article is misleading on this point.

                                    1. 2

                                      I use a Planck (40%), with a pretty standard layer setup: one layer for numbers, one for symbols. The “cool” feature I added is a way to change the base layer to “emulate” my usual layout when the system is configured with a different one. My layout of choice is the “AZERTY AFNOR” (an optimized AZERTY layout, which is fairly uncommon). If I plug my keyboard into a QWERTY-configured computer, I simply press a key and can use my keyboard just like before. For example, here is my “QWERTY” layer:

                                      [_QWERTY] = LAYOUT_planck_grid(
                                              _______, KC_A,    KC_Z,    KC_E,    KC_R,    KC_T,    KC_Y,    KC_U,    KC_I,    KC_O,    KC_P,    KC_MINS,
                                              _______, KC_Q,    KC_S,    KC_D,    KC_F,    KC_G,    KC_H,    KC_J,    KC_K,    KC_L,    KC_M   , US_SLAR,
                                              _______, KC_W,    KC_X,    KC_C,    KC_V,    KC_B,    KC_N,    KC_DOT,  KC_COMM, KC_COLN, KC_SCLN, KC_RSFT,
                                              _______, _______, _______, _______, _______, US_NUM , US_SYM , _______, _______, _______, _______, _______
                                      ),
                                      

                                      I have the same remapping done for “traditional” AZERTY (but it mostly acts on symbols). This has saved me a lot of time, and whenever someone connects to a VMWare server through the console, they ask me to type things in, because I’m the only one that can type stuff in QWERTY without having to look at a cheatsheet!

                                      Another “fun” thing I added (though it’s more due to the lack of keys, I must admit!) is to make the Fn layer a “sub-layer” of the number one. When I’m in the number layer, holding “F” puts me in the Fn layer, replacing each number with the corresponding Fn key. So pressing F8 is raise + f + 8. Easy to remember!

                                      1. 6

                                        You can run a little artisanal one and feel happy about it, but it will not at all measure up to the quality of systems run by eg Google and Microsoft.

                                        This applies to basically every technology out there: web servers, DNS, container platforms, online storage, backups, videoconferencing, … you name it.

                                        The “artisanal self-hosted” versus “huge, company-owned” software divide has been the case for many years now; it has never been “increasing” IMO.

                                        1. 3

                                          Yeah, I wanted to say something similar. I think the discussion here applies to everything but in different ways and I don’t know why people seem to mostly discuss it for email.

                                          IM is way harder than email. You basically have XMPP, which I use a lot but which is basically shite, or… nothing. You can do IRC, which works okay but without most of the features, or you use one of the big services.

                                          Calendar sharing: Nextcloud sort of works. But only sort of.

                                          Web servers are pretty doable but also way harder to set up than should really be necessary and let’s be honest, who even needs a webserver anymore?

                                          The list goes on. Maybe this post would be more interesting with a list of cheapish, well-managed services for people who at least want to move away from the “I am the product” space.

                                          1. 6

                                            Web servers are pretty doable but also way harder to set up than should really be necessary and let’s be honest, who even needs a webserver anymore?

                                            lost me here, I thought loads of people were still running their own little webservers.

                                            1. 1

                                              Yes, I do too, but mostly for other services: e.g. webmail, Gitea, Nextcloud. For just hosting a personal website it’s kind of overkill.

                                            2. 5

                                              You basically have XMPP, which I use a lot but which is basically shite, or… nothing.

                                              There’s also Matrix these days, in case you missed it.

                                              1. 2

                                                True! It’s not better though.

                                                1. 4

                                                  I have to disagree on that account. I was previously a heavy XMPP user, and the smoothness of Matrix enables usage several classes above what I was able to achieve with XMPP.

                                                  Which aspects do you consider lacking?

                                          1. 4

                                            I wrote a similar tool to sign/check files in a pipeline: sick.

                                            It requires a public key to verify the signature though, but works in a similar way:

                                            curl $URL | sick | sh
                                            

                                            The only caveat is that it puts the whole file into memory for now (though buffering it on disk would be easy to do). If the ed25519 signature shipped with the file doesn’t match, nothing is printed to stdout. If it matches, the content (without the signature) is output to stdout.

                                            1. 3

                                              Maybe just curl from ipfs.

                                              As long as the key is correct, you’ll always get the same data. About as good as a download link next to a hash.

                                              1. 3

                                                Even better: ipfs get from ipfs ;)

                                                But yes, curl from a public gateway is a close second

                                                1. 2

                                                  Ooh, interesting. Hadn’t thought about IPFS as a solution.

                                                  Use case is a bit different/nuanced though. I wanted something where I could insert some sort of verification string prior to running that would be trivial for the author to also include as a part of a release.

                                                  Since IPFS doesn’t quite fit that description, it doesn’t feel like the right solution, but you did remind me that I should give it another look.

                                                  1. 2

                                                    An improvement, but AFAICS that still means ultimately trusting an external entity (the ipfs infrastructure) versus a locally calculated checksum.

                                                    I haven’t looked very closely at ipfs yet; it’s on my list as part of my archiving endeavours.

                                                    1. 1

                                                      There’s no need to trust the “ipfs infrastructure”, just the client implementation. Content keys are generated from a secure hash of the content itself.

                                                      1. 2

                                                        If you use a client, sure; I presumed you meant curl https://ipfs.io/…

                                                    2. 2

                                                      But then the script needs an IPFS client, which is vulnerable to this too. Unless you mean hitting a specific server, which can be manipulated as well (one of my friends actually did that, for a prank).

                                                      1. 2

                                                        Unless the download somehow fails in the middle? Take the following script:

                                                        #!/bin/sh
                                                        curl -o archive.tbz https://random.stuff/archive.tbz
                                                        tar -C $HOME/.cache -xjf archive.tbz
                                                        cp $HOME/.cache/archive/blah /usr/bin
                                                        rm -rf $HOME/.cache/archive
                                                        

                                                        Pretty simple, and downloading it from ipfs would work. But if the server chokes and stops transmitting data right after rm -rf $HOME, then the script will just clean up your home directory without warning. You got the script from the correct URL, though. So checking the hash (or better, a signature!) after the download is complete remains the better option.
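
                                                        A (hypothetically) safer pattern is to download to a temporary file first, check it against a hash published by the author, and only then run it (a sketch; $expected and the URL are made up):

                                                        #!/bin/sh
                                                        expected="<published sha256 here>"
                                                        tmp=$(mktemp /tmp/install.XXXXXXXX)
                                                        curl -o "$tmp" https://random.stuff/install.sh
                                                        # sha256sum -c fails unless the complete file matches
                                                        echo "$expected  $tmp" | sha256sum -c - && sh "$tmp"
                                                        rm -f "$tmp"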

                                                      1. 13

                                                         I might be truly paranoid, but a tool labelled as “secure” by the NSA and offered to the public looks like an attempt at pushing a backdoor onto the public. This reminds me of Operation Trojan Shield held by the FBI.

                                                         Has anyone done a full audit of this code somewhere? If it’s “legit”, it would be nice to know where they seed their randomness from, to ensure the security of the result.

                                                         Edit: Found my answer in the README:

                                                        The foundation of RandPassGenerator is an implementation of the NIST SP800-90 HashDRBG. It uses entropy, carefully gathered from system sources, to generate quality random output. The internal strength of the DRBG is 192 bits, according to NIST SP800-57, using the SHA-384 algorithm. In accordance with SP800-90, the DRBG is seeded with at least 888 bits of high quality entropy from entropy sources prior to any operation.

                                                        This implementation uses the seed mechanism of the Java SecureRandom class for gathering entropy.

                                                        1. 1

                                                            Does anyone know how to stop tmux from breaking my terminal’s scrollback? My terminal is a perfectly good terminal; I don’t want tmux to pretend to be one, I just want it to provide me with persistent sessions. It has a really annoying behavior where it truncates output, so if I cat a file to have a look at it in my terminal, I see only the end of the file and have to use tmux’s own scrolling (which doesn’t play nicely with my terminal’s scrolling and, in particular, doesn’t let me select more than one screenful of text to copy and paste). I’ve mostly switched to using abduco, but tmux is installed on more systems and abduco has a couple of annoying bugs.

                                                          1. 2

                                                              If all you need is session persistence, take a look at abduco instead. It provides solely the detach/attach ability of tmux, and doesn’t mess with my terminal’s scrollback, at least.

                                                            1. 1

                                                              The last line of my post was:

                                                              I’ve mostly switched to using abduco, but tmux is installed on more systems and abduco has a couple of bugs that are annoying.

                                                              In particular, in the Windows Terminal, abduco doesn’t seem to play nicely with readline and ends up with backspace deleting things but new characters then overwriting things to the right (sometimes - I haven’t figured out why and when it happens). tmux doesn’t have that problem.

                                                              1. 1

                                                                  Indeed, I read that a bit too fast. Perhaps this bug has to do with your $TERM env variable and terminfo. Just a quick idea.

                                                          1. 1

                                                            I’ve been doing that too for my programs, using a homemade script (see the README and website). I run it off my server’s git hooks.
                                                            The part I like the most about this “practice” (exposing the bare README as a website) is that it forces me to write a “better” README, and actually take the time to sit down and put useful information in it.

                                                            In essence, my script mostly does markdown README | cat header.html - > index.html every time I git push to the repo. I then tailored it to my specific needs to add some sugar (generate manpages and link to them, add a link to the LICENCE file, link the repo, tarball releases, …).
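
                                                            As a sketch, the whole thing can live in a post-receive hook on the server, something like this (paths are made up, and markdown is any Markdown-to-HTML filter, e.g. discount’s):

                                                            #!/bin/sh
                                                            # post-receive: regenerate the project page from the repo’s README
                                                            git show HEAD:README.md | markdown |
                                                                    cat header.html - > /var/www/project/index.html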

                                                            I’m not a fan of your solution however, as its very reason to exist is Github’s limitations (if I understand it correctly). It could also make the “plaintext” version of the README very messy or confusing to read, with the “github conditional blocks” still being displayed.
                                                            But that’s just me being grumpy, and it’s pretty cool that you decided to base your website on your project’s README, so keep going! Many people host their repos on Github, and if this can make them write better READMEs and start hosting their webpages themselves, I’m more than happy to see it succeed and gain traction ;)

                                                            1. 1

                                                              It could also make the “plaintext” version of the README very messy or confusing to read, with the “github conditional blocks” still being displayed.

                                                              That’s a fair point, especially with the front matter put at the beginning. A solution might be to have a template README.md.tpl with the annotations, that would pass through riss to create the webpage and a second awk script, very similar to riss, could be used to remove these annotations from the README.md.tpl to create a README.md (all of this could be automated with hooks). What do you think?

                                                            1. 2

                                                              This is a great explanation of DSR. However, it omits some other limitations and advantages that are, IMO, worth mentioning.

                                                              Keep client source IP

                                                              Using DSR means that the client IP address will be left untouched by the LB. This can be useful in some cases, where the backend needs to know the original client IP rather than that of the LB (for example for source IP checks).

                                                              Frontend IP must be in the backend network

                                                              Because the VIP of the LB must be set on the loopback of the backend servers, the VIP MUST be in the backend network. This can be very limiting if you want to put the VIP in a different network (eg, a public IP).
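
                                                              On a Linux backend, that part of the setup typically looks like the sketch below (192.0.2.10 stands in for the VIP). The ARP sysctls keep the backend from answering ARP for the VIP, so that only the LB receives inbound traffic:

                                                              # add the VIP on the loopback so the backend accepts traffic for it
                                                              ip addr add 192.0.2.10/32 dev lo
                                                              # never advertise or answer ARP for addresses held on lo
                                                              sysctl -w net.ipv4.conf.all.arp_ignore=1
                                                              sysctl -w net.ipv4.conf.all.arp_announce=2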

                                                              I’d say that this kind of setup should only be used for very specific use cases, because of how limiting and “rigid” it can be to set up and maintain.

                                                              1. 8

                                                                I had no idea they were different! I always thought SFTP was just a fancy name for scp. Turns out SFTP is an SSH protocol standard.

                                                                1. 10

                                                                  Yes they are pretty different, I wrote about it here https://rain-1.github.io/use-sftp-not-scp.html

                                                                  1. 3

                                                                    I see you are also against rsync. Is there an alternative that uses a similar protocol for incremental updates but has a better implementation?

                                                                    1. 2

                                                                      Maybe rclone

                                                                    2. 3

                                                                      Thanks, looking at its interface is all I need to know I don’t ever want to use the sftp tool. That interface is horrible.

                                                                    3. 3

                                                                      I thought scp was just a command line tool to transfer files over sftp. Looks like it is that now. What did it use before if not sftp?

                                                                      1. 6

                                                                        scp used SCP

                                                                      2. 2

                                                                        An additional learning that blew my mind is that SFTP is actually very much used in big corporations!

                                                                        It is used widely in finance and healthcare AFAIK. There is a wish to move away from file-based protocols, but it will take some time!

                                                                        1. 3

                                                                          An additional learning that blew my mind is that SFTP is actually very much used in big corporations!

                                                                          I recently bought a Brother printer / scanner. The scanner has an option to upload results via sftp, with a web-based GUI for providing both the private key for it to use and the server’s public key. It was very easy to set up to scan things to my NAS, where I wrote a tiny script that uses fswatch to watch for new files and then tesseract to OCR them.

                                                                          I was very happy to see that it supported SFTP. The last printer / scanner combo thingy I bought could talk FTP or SMB, but a weird version of SMB that didn’t seem to want to talk to Samba.

                                                                          1. 2

                                                                          The product made by the company I work for handles a lot of data being transferred in flat files. Many customers have “security checklists” that identify FTP as an insecure protocol and recommend SFTP instead.

                                                                          I used to mock file-based data transfer, but compared to stuff like getting data via JSON APIs, it still has a lot of life in it…

                                                                            1. 2

                                                                              You mention JSON APIs; but you can have JSON APIs over SFTP, so I guess you meant REST APIs instead.

                                                                            As far as I understand, the main issue with file-based data transfer over SFTP is that there’s no support for signalling upload completion in any way.

                                                                            E.g.: if client 1 uploads a file to the server for processing, how does the server know the file upload is complete?

                                                                            This is often worked around by changing the name of the file (using the SFTP rename command), or uploading a hash too, or making the file name the hash, etc… all of this is pretty clumsy compared to how HTTP handles it.
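
                                                                            The rename workaround itself is simple enough, e.g. as a batch of sftp(1) commands (host and file names made up):

                                                                            # upload under a temporary name, then rename atomically
                                                                            printf '%s\n' \
                                                                                'put report.csv incoming/report.csv.part' \
                                                                                'rename incoming/report.csv.part incoming/report.csv' |
                                                                                sftp -b - user@host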

                                                                              1. 2

                                                                                Correct, I meant REST APIs (often returning JSON, but can return XML too).

                                                                                There are a lot of issues with file-based transfer, including things like completeness (can be mitigated by including a defined footer/end-of-file marker), file names, unannounced format changes, and so on.

                                                                                But you can shuffle a lot of data in a short time by zipping files, the transfers can be batched, and the endpoint generally doesn’t need a ton of authentication infra to ensure that unauthorized access is prevented etc. Push vs. Pull.

                                                                                In the long run, returning data over an API endpoint is The Future, but SFTP is basically a small upgrade to FTP which enables transport security without a ton of other changes.

                                                                                1. 1

                                                                                  It’s a bit unclear here if you’re talking about SFTP or FTPS…

                                                                                  1. 2

                                                                                    SFTP.

                                                                                    I don’t mean it’s a drop-in replacement, but as a part of a system where you have 2 systems communicating using files, updating the transport mechanism from FTP to SFTP is a small step compared to converting the entire chain to an API-based solution.

                                                                              2. 1

                                                                                What bothers me about SFTP versus FTPS (as a replacement for FTP) is that you need to allow ssh traffic from your client to your server. It also means providing a real account for the client on the machine, while FTPS is just as secure and can make use of virtual accounts, and of a different port than SSH by default.

                                                                                1. 2

                                                                                  There’s nothing about the SFTP protocol that doesn’t allow for virtual users or other port numbers.

                                                                                  1. 1

                                                                                    Sure, the protocol allows it, but as far as I know, openssh doesn’t support virtual users. So you’d need to install another server (say vsftpd), and at that point, why would you run SFTP rather than FTPS?

                                                                              3. 1

                                                                              Yes, I work in the data space, and sftp connectors usually come up right after cloud stores. A lot of companies use it; it is even supported by Hadoop. It seems to have replaced ftp/nfs in a lot of corporations.

                                                                              4. 2

                                                                                I think scp was basically rcp over ssh rather than rsh/rlogin.

                                                                              1. 13

                                                                                Everyone I know has been on Telegram/WhatsApp for years, I really don’t see any interest in carrier-based messaging coming back.

                                                                                1. 3

                                                                                  Same. I’m in Europe, and know nobody who uses SMS regularly, or even iMessage.

                                                                                  1. 3

                                                                                    Ironic how they brought that upon themselves by being so stingy with SMS pricing

                                                                                    1. 5

                                                                                      You can also use stickers in Telegram, have group chats with moderation, have avatars, talk to people without giving them your phone number, and so on. It’s like IRC vs Discord. Even in a world where SMS was free from day one I don’t think it’d last.

                                                                                      1. 2

                                                                                        That’s true. However, SMS is still noticeably more popular in NA than in Europe. And their greediness applies to MMS too, which had many more features than pure text but instantly died due to being actually more expensive than physical mail.

                                                                                      2. 4

                                                                                      I did a calculation about 20 years ago that the price for SMS was over £500/MiB. It hasn’t changed very much unless you are on an unlimited-SMS plan. It was cheaper to send a fax to Antarctica than to send a page of text via SMS; the pricing was insane. On the plus side, the price for data is far more reasonable, so even a protocol where the overhead for short messages is a few thousand percent is much cheaper than SMS. I’m using Signal for pretty much everything that I used to use SMS for, and now that it’s basically free (it is free on WiFi), I use it a lot more. It also helps that Signal supports clients on multiple devices, so I can use the desktop app when I’m sitting at a computer with a real keyboard and only use the mobile version when I’m out. SMS is intrinsically tied to a single endpoint, which was fine in a world where people owned a single device, but when people use a phone, a tablet, a work computer and a personal laptop it just doesn’t work.

                                                                                    2. 1

                                                                                  It bothers me that I must use Telegram for one group, WhatsApp for another and Signal for the last one. Each of these messaging projects built an application rather than a standard. It looks like that is what RCS is, so if I could use a single app for all my messaging, I’d be happy 👍

                                                                                    1. 2

                                                                                      I’ve been using these for so long that I can’t remember what life was like without them:

                                                                                      • abduco, the “detach” feature of tmux extracted into its own tool. I run long commands in it so I can close the terminal and leave them running in the background.
                                                                                      • sponge from moreutils, to do in-place editing with tools that don’t support it: tr a-z A-Z < file.txt | sponge file.txt
                                                                                      • pick, an interactive fuzzy selector (similar to fzf) but with fewer bells and whistles. I use it to interactively pick hosts to ssh into from my known_hosts and /etc/hosts files, as sketched below.
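
                                                                                      A minimal sketch of that ssh helper (it assumes HashKnownHosts is disabled, so host names in known_hosts are readable):

                                                                                      # gather candidate hosts, pick one interactively, ssh into it
                                                                                      host=$({ awk '{print $1}' ~/.ssh/known_hosts | tr ',' '\n'
                                                                                               awk '/^[^#]/ && NF > 1 {print $2}' /etc/hosts; } | sort -u | pick)
                                                                                      ssh "$host"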

                                                                                      I’ll also jump on the occasion to advertise a bunch of my own creations, which I use daily as well:

                                                                                      • safe, a password manager like pass, but using a master password rather than GPG keys. Targeted at people like me who are too scared of losing their GPG key and don’t want to deal with GPG key management.
                                                                                      • pm, a stupidly simple package manager (it unpacks tarballs to $ROOT and writes their content list to a file for later removal/upgrade). I use it to install unpackaged tools to /usr/local while keeping track of what’s installed (just a better alternative to make install).
                                                                                      • human, a tool to convert numbers into a human-readable format.