1. 8

    As an alternative to the Google search engine, I suggest Startpage. They use Google’s search as their backend, but send none of your data to Google. It’s super convenient, and provides much better results than DDG.

    As for email, I’ve been very happy with ProtonMail. Been a user since their early invite-only beta and have had a great experience so far. They take privacy very seriously and they’re open source too!

    I can’t say much about web analytics, I don’t use any and I frankly don’t care. I don’t see why it should matter, unless you’re building a product to turn a profit.

    Lastly, a question. What about mobile? I’ve found it nearly impossible to go Google-free on mobile. As an Android user, the entire ecosystem is so tied into the Google Play Store and Services that apps nowadays just refuse to work without Play Services. It’s absurd. I have tried solutions like microG, but they’re hacky at best and tend to break every other day.

    1. 5

      I can’t say much about web analytics, I don’t use any and I frankly don’t care. I don’t see why it should matter, unless you’re building a product to turn a profit.

      Even if you are selling a product, I have never seen the need to do more than analyze server logs.

      As for email, I’ve been very happy with ProtonMail.

      I host my own e-mail. I have been running my own mail server since around 1998. It used to be easy, but it is getting harder and harder, purely because of spam. Frankly, it is now such a pain that I would love to out-source it, but it is prohibitively expensive to do so. I have multiple mailboxes - at least one for each member of my family, and a few friends too - spread across multiple domains, which would be ridiculously expensive to out-source at $5/month per mailbox. All of them are currently hosted by a single VPS at $5/month for the lot.

      1. 6

        I’ve given hosting my own mail server a shot as well, and honestly, it isn’t worth the effort, for similar reasons to yours: the spam, and too much sysadmin work for me to bother with. But yeah, outsourcing at your scale is probably not going to be cheap. What’s your setup like, are you running something like Mail-in-a-Box? Or Dovecot with Postfix?

        I have been running my own mail server since around 1998

        Your server is a year older than I am. :)

        1. 3

          What’s your setup like …?

          I use OpenSMTPd and Dovecot. Both are excellent. Neither causes any sysadmin work beyond the initial setup.

          I have used all sorts of things for spam filtering. Greylisting with OpenBSD’s spamd was very effective - i.e. blocked a lot of spam - but resulted in too many false positives and unacceptable delays. Whitelisting using SPF records helps a bit but only after the fact (and constantly updating it is a pain).
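
          For reference, the SPF data being checked here is just a TXT record published on the sender’s domain; a hypothetical record (the domain and address are invented) looks like:

          ```dns
          ; Allow the domain's MX hosts plus one extra address to send; reject all others.
          example.com.  IN  TXT  "v=spf1 mx ip4:203.0.113.25 -all"
          ```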

          Some spam still gets through so I need SpamAssassin as well anyway. SA is pretty good but its spamd (not to be confused with the other spamd) fell over a few times for me. This led to mail outages.

          I’ve looked at dspam and rspamd but neither appeared to offer anything more than SpamAssassin for my use case. Since I have a small number of users, Bayesian filtering has never been very effective. I found the distributed checksum services - razor, pyzor and DCC - much more useful and SA supports them all.
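
          If anyone wants to try those checksum services, wiring them into SpamAssassin is mostly a matter of installing the client tools and enabling the plugins. A sketch of the relevant config (note the loadplugin lines normally live in the shipped *.pre files rather than local.cf):

          ```
          loadplugin Mail::SpamAssassin::Plugin::Razor2
          loadplugin Mail::SpamAssassin::Plugin::Pyzor
          loadplugin Mail::SpamAssassin::Plugin::DCC

          # in local.cf:
          use_razor2 1
          use_pyzor  1
          use_dcc    1
          ```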

        2. 3

          which is ridiculously expensive at $5/month each

          I haven’t started paying (yet) but I believe Zoho are $1/user/month, which seems quite reasonable.

          1. 1

            That’s a good price. I wonder how good they are at spam filtering?

            1. 2

              Not quite as good as gmail - mostly they’re a bit over-eager. It has gotten better with time as I’ve marked things as ‘not spam’. It’s maybe one or two false positives a month now (after 6 months or so of use).

              I forgot to mention earlier that they have a free plan, which works perfectly well so long as you’re happy to use their apps (it doesn’t support POP/IMAP). They do let you use your own domain but the functionality is somewhat limited (no catch-all, only one domain etc). I could happily get along on the free plan, but am going to start paying anyway.

              1. 2

                IMAP is a hard requirement for me. But their pricing is very good anyway so that’s not a problem. Their spam filtering sounds OK but not great. I might give them a try sometime - thanks for mentioning them.

        3. 4

          I must say I’ve had the opposite experience on Android. I’ve been using Lineage with no GAPPS for 18 months now and it’s been fine.

          The only micro-g element I use is unifiednlp for location, so I can get a good GPS signal. Not had any trouble with that since installing it.

          For me, apps that don’t work aren’t worth my time. 99% of what I use is from F-Droid, the rest is Whatsapp (which works fine on its own) and the occasional game from the Humble Bundle app.

          1. 4

            I’ve been without Google Play on my phone for about the same time.

            Started with CopperheadOS until that blew up, then switched to the Rattlesnake stack to build AOSP. It uses terraform to deploy a build environment to AWS for you. I don’t much like relying on AWS either, but I used to build Copperhead myself and really struggled to find enough space for it. AOSP is a huge mess. I think it took a week to do the initial checkout and would take me 3 days to build it. AWS can build it in a few hours and costs me less than $1.50 a month.

            I haven’t needed Google Play or even microG. F-Droid has a replacement for every app I had on stock Android.

            1. 1

              Now I feel like a noob flashing lineage zips lol

          2. 3

            I used to use startpage, and found it to be incredibly slow to respond to my search requests. I stuck with it, because I liked it. My main impetus for switching to it was that Google returned shorter summaries in Firefox than in Chrome – probably something to do with my choice in font sizes – and startpage didn’t have that problem. Plus, at the time, I felt it gave better search results than DDG.

            I’ve since switched to DDG – DDG results for me in the last 6 months have been better than what I typically get on startpage. On my phone I feel DDG gives better results than Google, too (on Google I get lots and lots of the same results on every page, and Google can’t seem to decide whether to give me 10 results per page or infinite scroll – it’s different every time). But IMO the main advantage of DDG over startpage is that it’s already available as an optional search engine on most devices, without having to install or configure anything extra.

            I still occasionally fall back to startpage with !s, but I admit I’m guilty of relying too much on !g when I don’t find what I want.

            1. 1

              Same here pretty much. I just switched back to DDG because startpage was ‘too busy’ or ‘unavailable’ one too many times.

            2. 1

              What about mobile?

              Is iOS off the table?

            1. 12

              It doesn’t really matter whether we have community processes or not… baroque standards and byzantine specs create evolutionary pressures that cannot be ignored.

              If a slow-but-basically-compliant web browser can’t be hacked together by an intern in a summer, and if the scope of compliance is ever-increasing, on a long-enough (but shorter than we think) timescale only large companies will ever be able to build browsers.

              So, um, maybe start actively fighting to cut down web standards to manageable sizes.

              1. 2

                So, um, maybe start actively fighting to cut down web standards to manageable sizes.

                Do you think that’s possible? I find it hard to believe that we can turn back the clock here. I’d love to but it just doesn’t seem plausible. I guess that means we have to start again somehow, which also seems implausible.

                1. 2

                  I think it might be possible, but I also think that starting again–at least for the subset of users that care about such things–is probably easier.

                  A good chunk of the web complexity is layers of cruft built to support one another and to explore a design space that we are all pretty familiar with now. I imagine that a Pareto browser wouldn’t be as gnarly as we think.

                  1. 0

                    I’m enjoying this thought experiment :)

                    Where do we start again from? An older/simpler HTML over HTTP 1.1? Something that mothra can render? Or do we build from gopher?

                  2. 2

                    Regarding “a slow-but-basically-compliant web browser”, I’d love to see how much browser functionality can be “polyfilled” by libraries that are generic enough to run across various browsers, cutting down how much needs to be implemented by a browser itself.

                    For example, if a browser implements JS DOM manipulation, would it need to bother implementing CSS, or could it just inject a generic JS script for handling CSS? What if we only implement WebAssembly, and use that to run a JS interpreter? I imagine implementing canvas would allow lots of rendering-related things to be offloaded (images, fonts, layout, etc.) although personally I wouldn’t want to lose the text-first nature of the Web (e.g. selection, copy/paste, search, etc.).

                    1. 2

                      This is a cool idea! I can imagine a cut-down web browser that is basically just a sandboxed interpreter with a HTTP client and a drawing area (OK, and local storage, and audio, and…). A browser engine controlled entirely by scripts, like luakit, but… more.

                1. 2

                  Personally, I really like Netlify for publish-and-forget types of sites. You point it to a Git repo, and configure it to build your static site from it - be it with a static site generator or by copying some files. Regarding small dynamic parts: they offer ways to include dynamic elements, like a submit form, and take care of handling the data for you. I haven’t tried it myself but it looks quite simple. So, deploying a static site like my blog to Netlify simply consists of committing some changes and pushing to master.

                  For the “middle ground”: I use Gunicorn (a WSGI server which runs a few threads continuously and runs your Python code when a new request arrives) behind Nginx. Systemd takes care of keeping them running and restarting if needed.
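
                  A minimal unit for that kind of Gunicorn service might look like this (the app name, user, and paths are all invented):

                  ```ini
                  [Unit]
                  Description=Gunicorn for myapp
                  After=network.target

                  [Service]
                  User=www-data
                  WorkingDirectory=/srv/myapp
                  ExecStart=/srv/myapp/venv/bin/gunicorn --workers 3 --bind unix:/run/myapp.sock myapp.wsgi:application
                  Restart=on-failure

                  [Install]
                  WantedBy=multi-user.target
                  ```

                  Nginx then proxies to the unix socket, and restarting the unit with systemctl is the whole deploy-side restart.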

                  Deploying to this setup is easy, as with most low-traffic sites: you pull the latest changes to a repo on the server, and restart the systemd unit responsible for Gunicorn. All of this can be automated using Fabric, so there’s no need to SSH to the server manually.

                  Also handy, but behind a bit of a learning curve: using Zappa or the Serverless Framework to run your dynamic code in something like AWS Lambda - it’s very cost effective, and you don’t need to do any of the sysadmin work described above.

                  1. 2

                    Thank you. Your “middle ground” solution is something that I’ve heard a lot recently: gunicorn/uwsgi/puma behind nginx with systemd to restart as needed. This seems to be a popular solution! I think I’ll go for runit rather than systemd but I guess it makes sense to use systemd if it’s there on your OS anyway. Thank you for sharing.

                  1. 3

                    I have a git post-receive hook on my webserver that checks out the latest master to /var/www. It’s a little naive at the moment because it only handles content changes (rather than code changes, which would require the hook to re-start a node process).

                    It’s only for my personal site, but I’ve been honestly surprised at how well it has worked for the past few years. Just git push web master and the site is deployed.
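
                    For anyone curious, such a hook can be tiny; here is a sketch (the paths are hypothetical) where the single git command does the deploy:

                    ```shell
                    #!/bin/sh
                    # Sketch of hooks/post-receive in a bare repo: after a push,
                    # force-check-out master into the web root. Paths are invented.
                    WORK_TREE=${WORK_TREE:-/var/www}
                    GIT_DIR=${GIT_DIR:-/srv/site.git}

                    # Build the command as a string first so it can be logged or inspected;
                    # the real hook would execute it with: eval "$(checkout_cmd)"
                    checkout_cmd() {
                        echo "git --work-tree=$WORK_TREE --git-dir=$GIT_DIR checkout -f master"
                    }

                    checkout_cmd
                    ```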

                    1. 1

                      This is pretty much exactly what I have at the moment for my static sites. It’s the management of the dynamic parts - I guess that’s restarting the node process in your case - that I’m not sure about. Perhaps this can all be solved by daemontools or runit or similar.

                      1. 2

                        Any process manager should work. I use pm2 myself, which allows defining your config in a json file. I use that file to bounce the service, like ‘pm2 restart app.json’, where app.json is my pm2 config. And then I can reuse that script unchanged for any project as long as I name the config file the same.

                        And FWIW, I’m on Hetzner’s cloud servers, which are 1/4 the price of DO (under $3/mo for a 2gb server) with the trade-off of being located in Europe.

                        1. 1

                          Thanks. I am also located in Europe, so maybe I should give Hetzner a try. I use Vultr at the moment.

                    1. 5
                      • Lobsters has an ansible playbook.
                      • Barnacles is Heroku’s git-based deploy. Wanted to try nix/nixops but bounced off pretty hard.
                      • My blog is in a git repo; 95% of the time I’m using Wordpress’s update mechanism and committing results, rest of the time I ssh in and vim because it doesn’t matter if I blow it up for a few minutes.
                      • Well Sorted Version deploys via a ruby script that’s mostly just rsync -aPv --del. This is pretty typical for most of my little projects - I put it in a script because I know I’m going to forget steps after a week or two. Builds the site, minimizes assets, any random odds and ends, rsync. Could be a bash script, but there’s always that one thing I want a conditional or a regexp for and Ruby’s more comfortable for me.
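
                      That rsync-wrapped-in-a-script pattern can be sketched in shell like this (every name is invented; build and sync_cmd only print what would be done, so the actual build and transfer stay an explicit choice):

                      ```shell
                      #!/bin/sh
                      # Deploy sketch: build the site, then rsync it to the server.
                      set -e
                      SRC=${SRC:-_site/}
                      DEST=${DEST:-user@example.com:/var/www/site/}

                      build() {
                          # Stand-in for the real steps: generate pages, minimize assets, odds and ends.
                          echo "built site into $SRC"
                      }

                      sync_cmd() {
                          # -a archive, -P partial/progress, -v verbose, --del remove stale files.
                          echo "rsync -aPv --del $SRC $DEST"
                      }

                      build
                      sync_cmd
                      ```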

                      The dynamic stuff is always PHP. I never love it, but it’s reliable, doesn’t want to be a framework that takes over the site, and there’s always a snippet or answer one google search away.

                      1. 2

                        The dynamic stuff is always PHP.

                        Thanks - a few people have said that PHP would suit this situation (i.e. mostly static with just a couple of dynamic pages) very well.

                      1. 1

                        What’s your purpose? What’s the velocity of changes to the content of the website? Who needs to make changes or submit content? Can the editor(s) use Git? Can you use a static website that consumes dynamic APIs, e.g. comments from Disqus or similar or something self-hosted?

                        Unless you’re learning or adamant about running your own server, don’t. It’ll detract from your main goal, which likely is to produce content. There are enough free hosting options out there now for static content that running your own static content engine should be an exercise served for learning.

                        1. 3

                          I already have a few static websites. I have a VPS running OpenBSD and I just use the httpd included in the base OS. I like it because there’s absolutely no sysadmin burden: it has run for years with absolutely no problems. Apart from upgrading to new OS releases, which is fast and painless and takes place twice a year at most, I don’t have to do anything at all. This is exactly as it should be for websites that I host for free.

                          However, some of the sites now have a need for dynamic components, such as a searchable image gallery. I am really unwilling to do anything that will increase my maintenance burden. I have, in the dim and distant past, written CGI programs in C and Perl… but I thought there might be another option these days.

                        1. 3

                          I dump my files into /var/www/html/ for Apache and that’s the end of it. I write all of the HTML, SVG, and whatnot by hand and when I want to add a new article I copy the HEAD and all of that from a previous article. Then, I add the article to index.html and that’s the end of it.

                          There is one question I have, however. I’ve recently wanted to add the ability to comment on my website and the current mechanism is sending me an email, as explained here, but this seems a high enough barrier that I’ll receive very few, as I’ve received none so far. I figured it would be sufficient to add a form to that page, but I don’t know how to attach an arbitrary program to a form. As I’ll be doing this all manually, it would even be enough to simply have Apache log the POST requests sent to a certain URL, but every option I’ve come across so far requires me to either install something or perform some heavy configuration, both of which I’m reluctant to do.

                          I suppose I could tell it to vomit the POST to a port I have listening, but surely there’s a better way, right?

                          1. 3

                            This is actually a great use case for PHP. Since you already have Apache set up, you would use it with mod_php and direct the POST request to a PHP controller. From there you can do whatever you want in-process; probably the simplest thing would be to have the server itself email you, and then optionally log the data or action somewhere. So there’s no messing around with sockets or other processes on the host side, or spinning up and managing arbitrary interpreters, as the module handles those details.

                            1. 1

                              You can also write CGI scripts in Python or your favorite language, though PHP is basically made for this. So you have choices that can also lead to easy-to-write scripts for single endpoints.

                            2. 2

                              This is close to the sort of thing I’m talking about. Static assets are easy - but what do you do when you just want to add a small dynamic part, like a contact form? A full “web app” with the accompanying sysadmin headaches just seems like massive overkill.

                              1. 2

                                Couldn’t you just use some plain JavaScript via XMLHttpRequest?

                                POST Example

                                var xhr = new XMLHttpRequest();
                                xhr.open("POST", "/server", true);
                                // Send the proper header information along with the request
                                xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
                                xhr.onreadystatechange = function() { // Call a function when the state changes.
                                    if (this.readyState === XMLHttpRequest.DONE && this.status === 200) {
                                        // Request finished. Do processing here.
                                    }
                                };
                                xhr.send("name=value&comment=hello");
                                1. 2

                                  I don’t know JavaScript and I don’t know PHP. I want to avoid adding something to Apache and if I were to use any JavaScript, then it wouldn’t work in Links, Lynx, w3m, Dillo, or Netsurf; it would be the only JavaScript on my entire website. I’ve taken special precaution to have my website be usable in every WWW browser I test with and so that’s not an option.

                                  Really, I’m actually surprised this is so complicated, all to add a single form that wouldn’t even create a dynamic page.

                                  1. 2

                                    It sounds like you want cgi scripts then. https://httpd.apache.org/docs/2.4/howto/cgi.html which still take a touch of config.

                                    Or you want to log the post data and just manually grep it later, this page gives you three options, two of which are modules and the other one is use an application layer: https://www.simplified.guide/apache/log-post#log-post-request-data-in-apache-in-application.

                                    On the bright side you definitely don’t need js to send a simple post so that’s good.
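
                                    To make the CGI option concrete: a comment handler really can be a few lines of shell. A sketch (the log filename is made up; Apache supplies CONTENT_LENGTH and pipes the POST body to stdin):

                                    ```shell
                                    #!/bin/sh
                                    # Minimal CGI sketch: append the raw POST body to a log and
                                    # answer with a plain-text page. The log name is invented;
                                    # a real setup would use an absolute, writable path.
                                    handle_comment() {
                                        log=${1:-comments.log}
                                        body=$(dd bs=1 count="${CONTENT_LENGTH:-0}" 2>/dev/null)
                                        printf '%s\n' "$body" >> "$log"
                                        printf 'Content-Type: text/plain\r\n\r\n'
                                        printf 'Thanks, your comment was recorded.\n'
                                    }
                                    # A real script would end with something like:
                                    # handle_comment /var/www/comments.log
                                    ```

                                    Dropping the script in a ScriptAlias’d cgi-bin directory (per the howto linked above) and pointing the form’s action at it is the only Apache-side configuration needed.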

                              1. 11

                                My deploy is a target in a Makefile (make deploy) that builds system-installable packages (for archlinux, because that’s what I deploy on, but it could reasonably be adapted for anything else if the tooling exists), copies the packages over to my machine via scp, and uses ssh to both install the packages and restart the necessary services on the target.


                                  Pros:

                                  • very minimal and easy to understand (it’s a few lines of bash)
                                  • easily retargetable (all I need to do is shift my credentials and the machine specification and it can deploy to a different target)

                                  Cons:

                                  • it is not through CI/CD (though it could easily be if I actually set a runner up, as all the CD step needs to do is run make deploy)
                                  • if you don’t like using system package-managers, that’s a con (personally, I like them)

                                This has worked really well for me for quite some time. Though, to be fair, my website isn’t hugely complex.
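
                                  A stripped-down sketch of that kind of target (the package name, host, and service name are all invented, and the real Makefile would also have a rule for the packaging step):

                                  ```make
                                  PKG  = mysite-1.0-1-any.pkg.tar.zst
                                  HOST = deploy@example.com

                                  deploy: $(PKG)
                                  	scp $(PKG) $(HOST):/tmp/
                                  	ssh $(HOST) "sudo pacman -U --noconfirm /tmp/$(PKG) && sudo systemctl restart mysite"
                                  ```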

                                All the best,


                                1. 2

                                  I do all of this except I don’t do system packages anymore because I want a repo of my deployed built artifacts. Also I use a CI/CD system.

                                  1. 1

                                    This sounds interesting, thank you. When you say that you “restart the necessary services on the target”, are those services part of the package that you’ve just installed?

                                    So you have some services continuously running, serving HTTP or FastCGI or something behind nginx? And you use the OS’s service management to restart them when you deploy? Is that systemd on ArchLinux these days?

                                    1. 3

                                      A little more detail should clear things up. My website is written in plain C using a library called lwan. I authored some very small .service files (for systemd, because that’s what Arch uses) which control the site(s). One of the packages my deploy target builds includes all the files for the site (assets, .service files, and the C programs themselves). The services that are being restarted are the website itself.

                                      So, in answer to your specific questions: yes, the service files are provided by one of the packages. The services are not FastCGI, but they are indeed running an HTTP server. And yes, I use systemd, because of Arch.

                                      This is a topic for another thread and another time, but I may be on the cusp of switching away from archlinux, and the place I move will likely not have systemd (though that’s not the reason I am switching), so part of this infrastructure may need to be rewritten. Though, admittedly, not much: really just the service files themselves and this deploy procedure to operate with the new, analogous tooling.

                                      All the best,


                                      1. 2

                                        Is that the Lwan web server that you use as a library? I forgot the web site said it can be used that way. Looking at minimalist and securable servers, it was one of the most exciting things I found. I keep mentioning it in different threads so more people will try or review it.

                                        If that’s it, what’s its setup difficulty and performance like compared to Nginx? And I mean Nginx if you install a package followed by one of the setup guides out there that try to make it easy.

                                        1. 4

                                          @nickpsecurity, indeed it is. I do not know if the official website documents this functionality. Actually, one of the only things that I do not love about lwan is its lack of documentation (for the most part, it is more useful to read the code).

                                          I think it is excellent for performance, and it did quite well in the techempower benchmarks; though, I should say explicitly that I have not done exhaustive performance tests or comparisons with nginx. If using the binary form, it is quite simple to use (and configurable via lua, which is a usability plus in many ways). When used as a library, it is a bit more work, but the result is that you only have installed exactly what you need (which is something that speaks to me more and more).

                                          If you are interested in how I set my site up, you can find the site itself at https://halosgho.st and the source-code (including the Makefile with deploy target) on my github. I also idle on freenode much of the time (freenode account halosghost); please feel free to ping me if you wish to have a more thorough discussion.

                                          1. 2

                                            Appreciate it. Yeah, I like the install just what you need philosophy. Better docs is actually an easy area for folks to contribute. I’ll try to remember your offer if I do anything with it. :)

                                        2. 2

                                          Thank you for going into detail - much appreciated!

                                    1. 3

                                      All of my current stuff is in Ruby. I set it up so that I can git push from my local dev environment to the server. I then have a shell script on the server that I ssh in and run that updates the bundle, runs any migrations, builds the new assets, and restarts the Puma server process. It’s been pretty simple and painless so far.

                                      1. 2

                                        Thank you - so the Puma server sits behind nginx or something similar? Is there anything monitoring Puma? If your Ruby app crashes - there’s an uncaught exception or something - does it get restarted?

                                        1. 1

                                          Yup, it’s behind Nginx. The Puma server and a DelayedJob worker are running as systemd-managed services, so systemd handles keeping them running through system restarts and that sort of thing. I’m pretty sure it would restart them if the whole process crashed, but that’s never happened.

                                      1. 4

                                        CGI (and fastcgi) still exists, but even very simple frameworks often use http now, with nginx serving as a proxy.

                                        1. 3

                                          So if I use one of these frameworks to create an HTTP server, how do I manage it? daemontools? runit? Something like that?

                                          A nice thing about CGI is that you don’t have to worry about service management or monitoring. If I have something continuously running and serving HTTP, do I need to worry about restarting it if it crashes, cleanly restarting when I deploy a new version, etc.? Or is this not really very difficult?
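
                                          With runit specifically, the ongoing “service management” is roughly one run script per service; a hypothetical definition for a small HTTP app (the binary, user, and port are invented), which runit supervises and restarts automatically if it exits or crashes:

                                          ```shell
                                          #!/bin/sh
                                          # /etc/sv/myapp/run: runit re-runs this whenever the process dies.
                                          exec 2>&1
                                          exec chpst -u www /srv/myapp/server --port 8080
                                          ```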

                                          1. 5

                                            For my website at that kind of scale: also small HTTP servers behind nginx. Those were initially running from tmux; later I set up some user systemd services (because systemd was what’s available on that machine), although I remember some trouble getting logging of user services to work well.

                                            1. 2

                                              Thank you - this is the sort of thing I just don’t know about. A small HTTP server sounds great for local development - I just worry about the sysadmin effort required to keep it going reliably on a real server. For small projects, especially unpaid ones, I really don’t want unnecessary sysadmin work!

                                        1. 5

                                          I have a VCS repo for each site’s source, and a VCS repo for all of my websites’ deployed HTML/binaries/etc, served from a single host (which is basically all of my personal stuff).

                                          So my CI/CD will take commits from each individual src repo, build the code and then commit it to a subdir of a different VCS repo. Then my host runs a VCS checkout every 5 minutes from the combined deployable repo, and runs a script in that repo that will restart anything if needed.

                                          so an example:

                                          • blah.com sources in repo A
                                          • example.com sources in repo B
                                          • deployable code for example.com and blah.com in repo C

                                          When I push a commit for blah.com in repo A, my CI/CD will pull it down, and pull down repo C, then build and run any tests, and then push the built code into repo C under a blah.com/ dir.

                                          Then every 5 mins my VPS host pulls down repo C and runs make, which will restart anything if needed (under a special www-cron user).

                                          This way repo C has all of my deployable websites and history of what has been deployed. Also, this way my CI/CD doesn’t get direct access to my VPS host. Since I don’t run the CI/CD (it’s a SAAS ) I don’t have SSH keys out there waiting to be gobbled up by the latest hack.
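
                                          The polling side of that can be a single crontab line for the www-cron user; a sketch assuming git as the VCS and an invented path (the comment above only says “VCS checkout” plus a make run):

                                          ```
                                          # Every 5 minutes: update the deployable repo and run its restart logic.
                                          */5 * * * * cd /srv/repo-c && git pull -q && make -s
                                          ```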

                                          1. 1

                                            Using VCS to track the history of deployments is an interesting idea. I usually try to keep build artefacts out of VCS… but this is an interesting use case, thank you.

                                            1. 2

                                              I agree, you don’t want built code/html/etc out in your VCS normally, and it’s only in the deployable www VCS that I put it, not in the source repo(s).

                                              I do it this way for the reasons above, but also so that I can very easily go back and see what my website was on X day, it’s my own personal version of archive.org, done the lazy, lazy way.

                                              Another way to do this would be to build system packages of your deployed code and archive the built packages… but using a VCS is actually a win here, since only the diffs are stored (usually, depending on the VCS).

                                          1. 5

                                            slowcgi(8) on httpd(8) on OpenBSD works for me :~)
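
                                            For anyone wanting to replicate this, the httpd.conf side is small; a sketch (the domain is invented; slowcgi(8) listens on /var/www/run/slowcgi.sock by default, which is /run/slowcgi.sock inside httpd’s chroot):

                                            ```
                                            server "example.org" {
                                                    listen on * port 80
                                                    location "/cgi-bin/*" {
                                                            fastcgi socket "/run/slowcgi.sock"
                                                    }
                                            }
                                            ```

                                            Both daemons can then be enabled and started with rcctl.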

                                            1. 3

                                              My static sites are currently served by httpd(8) on OpenBSD. I have been thinking about using slowcgi(8)! That’s pretty much what prompted this question, actually :)

                                            1. 52

                                              Go has community contributions but it is not a community project. It is Google’s project. This is an unarguable thing, whether you consider it to be good or bad, and it has effects that we need to accept. For example, if you want some significant thing to be accepted into Go, working to build consensus in the community is far less important than persuading the Go core team.

                                              This is, essentially, not that different from how most projects work. Even projects which have some sort of “community governance” seldom have voting rounds where everyone can vote. Only contributors/core members can vote.

                                              Accepting all PRs is clearly not a good idea, so you need to do some gatekeeping. The biggest source of disagreement seems to be exactly how much gatekeeping is needed. The Go authors have a pretty clear vision of what should and should not be in the language, and gatekeep a bit more than some other languages. Putting stuff in the language can also be problematic (see: Python’s := PEP drama).

                                              On the specific point of generics (sigh, again), I think the premise of that tweet is wrong. It suggests that the overwhelming majority of the community is just screaming for generics, and the Go Overlords stubbornly keep saying “no”. That’s not really how it is. In the most recent Go survey, 7% gave “lack of generics” as a response to “what is the biggest challenge you face today?”, which is hardly overwhelming (although it’s not a clear “would you prefer to see generics in Go”, so not a complete answer).

                                              Anecdotally, I know many Go programmers who are skeptical or even outright hostile to the idea of adding generics to the language, although there are obviously also many Go programmers who would like to see generics. Anecdotally, I’ve also noticed that preference for generics seems negatively correlated to the amount of experience people have with Go. The more experience: the less preference for generics. I’ve seen people with a C# or Java background join our company and strongly opine that “Go needs generics, how could it not have them?!”, and then nuance or even outright change their opinion over the months/years as they become more familiar with the language and why the decisions were made.

                                              The author of that tweet claimed in the Reddit thread:

                                              I am suggesting that implementation of generics will be easy . All am suggesting is we (community) should implement prototype or so proof of concept and present it to committers .

                                              Which seems to suggest that this person is not very informed on the topic. The Go authors have been writing and considering generics for at least 10 years, and thus far haven’t found an approach everyone likes. You can reasonably agree or disagree with that, but coming in with “oh it’s easy, you can just do it” is rather misinformed.

                                              The Elm guy had a good presentation a while ago (“The Hard Parts of Open Source”) where he shared some of his experiences dealing with the Elm community, and one of the patterns is people jumping in on discussions with “why don’t you just do […]? It’s easy!” Most top-of-the-head suggestions to complex problems you can type up in 5 minutes have quite likely been considered by the project’s authors. They are not blubbering idiots, and chances are you are not genius-level smart either.

                                              This is also the problem with a lot of the “conversation” surrounding generics in Go. People like this guy jump in, don’t seem to have informed themselves about anything, and shout “why don’t you just …?!”

                                              Sidenote: I stopped commenting on anything Go-related on /r/programming, as there are a few super-active toxic assholes who will grasp at anything to bitch about Go (even when the thread isn’t about Go: “at least it’s not as bad as Go, which [. rant about Go ..]”). It’s … tiresome.

                                              1. 26

                                                I think the premise of that tweet is wrong. It suggests that the overwhelming majority of the community is just screaming for generics, and the Go Overlords stubbornly keep saying “no”. That’s not really how it is.

                                                Be wary of selection bias here: if someone really thought generics were important, they wouldn’t be in your community to be asked the question. If the goal of the language is to serve the people already using it, that’s a fine thing, but if it’s to grow, then that’s harder to poll for.

                                                1. 10

                                                  Every community is biased in that sense. People who dislike significant whitespace aren’t in the Python community, people who dislike complex syntax aren’t in the Perl community, etc.

                                                  I don’t think the Go team should poll what random programmers who are not part of the Go community think. I don’t even know how that would work, and I don’t think it’s desirable as the chances of encountering informed opinions will be lower.

                                                  1. 6

                                                    Anecdotally, I’ve also noticed that preference for generics seems negatively correlated to the amount of experience people have with Go. The more experience: the less preference for generics.

                                                    This part was also concerning to me. If Go is “blub”, then of course people who are more used to not having generics wouldn’t necessarily prefer them.

                                                    1. 8

                                                      I don’t think this fits the “blub” model. People who have only used “blub” don’t understand what they are missing. But here we are talking about people who have got experience with generics: the more experience with Go they gain, the more they understand why Go does not have them.

                                                    2. 2

                                                      There is the old adage of being unable to please everybody.

                                                      It’s better to cater to the crowd you have than the whims of random people.

                                                    3. 13

                                                      In the most recent Go survey 7% gave “lack of generics” as a response to “what is the biggest challenge you face today?” which is hardly overwhelming (although it’s not a clear “would you prefer to see generics in Go”, so not a complete answer).

                                                      I think it’s also worth mentioning that “lack of generics” is the third biggest challenge in that survey (after “package management” and “differences from familiar language”).

                                                      1. 7

                                                        “I am suggesting that implementation of generics will be easy”

                                                        Do you have a link to this comment? The way it’s phrased makes me think that it’s a typo and they meant to say “I am not suggesting that implementation of generics will be easy”.

                                                        1. 8

                                                          For outrageously inflammatory post titles like this one, I skip straight to the Lobsters top comment. Cunningham’s Law hasn’t failed me yet.

                                                          1. 5

                                                            Reminder that Go has generics for a small set of built-in data types, just not user-defined generics. Let’s be explicit: the language already has generic types in its syntax, e.g.:


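                                                            The original example seems to be missing here; the sketch below is my own minimal illustration of the kind of built-in parameterized syntax meant, using the usual map/slice/channel types:

                                                            ```go
                                                            package main

                                                            import "fmt"

                                                            func main() {
                                                            	// Each built-in type constructor takes a type parameter;
                                                            	// the syntax is already generic, just not user-extensible.
                                                            	counts := map[string]int{"go": 1} // map[K]V: generic over key and value types
                                                            	nums := []float64{1.5, 2.5}       // []T: generic over the element type
                                                            	ch := make(chan string, 1)        // chan T: generic over the element type
                                                            	ch <- "hello"
                                                            	fmt.Println(counts["go"], nums[0], <-ch)
                                                            }
                                                            ```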
                                                            It’s not a great stretch from this to something like tree[int]. Given this, the fact that the language designers have put it off for so long, and that so much of the community is antagonistic towards it (where does that antagonism come from, and where did people pick it up?), it’s not a big stretch to infer that they simply don’t want Go to have user-defined generics.

                                                            1. 5

                                                              where does that antagonism come from–where did people pick up on it?

                                                              I picked it up in the C++ community. From build times to breakage to complexity, I have repeatedly implemented external generics (code generation) solutions that were simpler to manage and gave far better results for my projects than using templates.
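
                                                              In Go terms, that external-generics approach is code generation: an external tool emits a specialized copy of a data structure per concrete type. A minimal hand-written sketch of what such a tool might emit (the IntStack name is made up for illustration):

                                                              ```go
                                                              package main

                                                              import "fmt"

                                                              // IntStack is what a generic stack "template" might look like
                                                              // once a code generator has specialized it for int.
                                                              type IntStack struct{ items []int }

                                                              func (s *IntStack) Push(v int) { s.items = append(s.items, v) }

                                                              // Pop returns the top element; ok is false if the stack was empty.
                                                              func (s *IntStack) Pop() (v int, ok bool) {
                                                              	if len(s.items) == 0 {
                                                              		return 0, false
                                                              	}
                                                              	v = s.items[len(s.items)-1]
                                                              	s.items = s.items[:len(s.items)-1]
                                                              	return v, true
                                                              }

                                                              func main() {
                                                              	var s IntStack
                                                              	s.Push(1)
                                                              	s.Push(2)
                                                              	v, ok := s.Pop()
                                                              	fmt.Println(v, ok) // 2 true
                                                              }
                                                              ```

                                                              The generated code is plain Go: no templates, nothing hidden at the call site, at the cost of running the generator per type.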

                                                              1. 1

                                                                Those are not really generic types but variable-length types. Not exactly the same.

                                                                it’s not a big stretch to infer that they simply don’t want Go to have user-defined generics.

                                                                The catch with your inferration (is that a word? It is now, I guess) is that the Go authors have explicitly stated otherwise many times, and have written a number of proposals over the years (the Go 2 Contracts proposal lists them). There have also been many posts on the Go issue tracker, Reddit, HN, etc. by Go authors stating “we’re not against generics, we’re just not sure how to add them”.

                                                                1. 3

                                                                  Those are not really generic types

                                                                  If you take a look at your linked design document, e.g. maps are repeatedly used as generic types in the proposed Go polymorphic programming design.

                                                                  Go authors have explicitly stated otherwise many times over the years, and have written a number of proposals over the years

                                                                  Good point, before the generics design document was published I suppose my inference would be more credible. Now I guess they are serious about generics–which seems unfortunate for the people you mentioned who vehemently hate them :-)

                                                            1. 6

                                                              Using OpenBSD as a development platform for Java, JavaScript, and Android app development is problematic; a Linux OS is significantly better for that.

                                                              1. There is no Android Studio on OpenBSD (and emulators will not work).

                                                              2. There is no ready-to-go modern JavaScript GUI dev environment (Facebook’s flow-bin does not work on this OS, and there is no VS Code in packages).

                                                              3. IntelliJ IDEA works pretty well on OpenBSD, and is kept up to date.
                                                                But if you try to use a GUI remotely (e.g. if OpenBSD is in a VM or on another server), you are in for a big disappointment. The only VNC/RDP server on OpenBSD is x11vnc, which is a polling system that reads the framebuffer: it is quite slow and uses up significant CPU time. So GUI-based IDEs are, essentially, not usable when installed on a remote (or VM) instance of OpenBSD.

                                                              4. There are no VirtualBox guest additions for OpenBSD

                                                              5. The referenced posts compared some of OpenBSD’s ‘built-in’ features, e.g. httpd, to Apache/nginx.

                                                              But feature-wise they are not really comparable. OpenBSD’s httpd (last I checked) does not even support HTTP/2, and my understanding is that its lead developer did not consider the protocol relevant. This somewhat hampers OpenBSD as a hosting platform: hosts such as OpenBSD Amsterdam only offer hosting configured around OpenBSD’s httpd, so there is no HTTP/2 support if you go with the built-in daemon. I am sure there are many other features in nginx or Apache that are not in OpenBSD’s httpd.

                                                              6. There is no Docker or other comparable tech.

                                                              Overall, I personally appreciate the OpenBSD philosophy of excellence and almost ‘stoicism’ in their choices. If the OpenBSD team chooses to concentrate on something, it will be excellent, not just good. If they decide something is a bit of ‘fluff’ or not a main area for them (e.g. a GUI developer workstation), support will be choppy, spread across packages and blog posts.

                                                              Seems like OpenBSD chose to concentrate on:

                                                              1. Documentation
                                                              2. Cohesive, exceptionally well-thought-out configuration management for security features
                                                              3. Attention and good support for non-Intel platforms
                                                              4. Excellent default setup for console and console development tools (everything is thought out, including fonts) :-)
                                                              5. An excellent selection of packages, also curated from a security angle, for such a small team. JDK 11 and Node 10 are already there, just weeks after their releases (in the case of the JDK).
                                                              6. tmux is built in! This tells me that the OpenBSD team is well aware that OpenBSD setups are often used as login hosts to manage remote environments (which is also why there is excellent support for anything to do with the console).

                                                              So yes, the OS is a good choice as a full internet appliance for login gateways and backend servers (but perhaps not multi-node clusters).

                                                              And again, I like the approach and the ability to reject what’s not in scope; this is a good practice to learn, even as a management skill!

                                                              For me, making our backend at least testable on OpenBSD is a goal; perhaps one day using OpenBSD in prod to host the Jetty and Postgres parts of our backend would be very much desired.

                                                              1. 11

                                                                tmux is built in! This tells me that the OpenBSD team is well aware that OpenBSD setups are often used as login hosts to manage remote environments (which is also why there is excellent support for anything to do with the console).

                                                                tmux(1) author (Nicholas Marriott) is an OpenBSD developer.

                                                                1. 2

                                                                  I did not know that! Thank you.

                                                                  1. 3

                                                                    No worries! :^)

                                                                    https://www.openbsd.org/innovations.html is an interesting read in general.

                                                                    E.g., while we’re on the subject of developers: probably not many people know that sudo is now maintained by Todd Miller (he isn’t the original author, though).

                                                                    lobste.rs was started by another, and @tedu honks ;^)

                                                                2. 8

                                                                  I am sure there are many other features that are in nginx or apache that are not in OpenBSD.

                                                                  Both nginx and apache run on OpenBSD.

                                                                  1. 3

                                                                    Right, of course, this is correct.

                                                                    I was just commenting on the Twitter thread referenced in the submission. In it, there was a mention that the alternative to Linux’s nginx/Apache is OpenBSD’s httpd.

                                                                    But feature-wise, those do not match.

                                                                    New OpenBSD hosting services like OpenBSD Amsterdam offer opinionated, preconfigured VMs; specifically, OpenBSD instances configured with OpenBSD’s built-in packages:
                                                                    https://openbsd.amsterdam/setup.html So that means HTTP/2 is not supported in their default config.

                                                                    Just wanted to note that the features of OpenBSD’s built-in packages matter, even if there are 3rd-party packages available that overlap with the built-in features.

                                                                    1. 2

                                                                      openbsd built in software is not built to be feature packed but to be reliable and secure. it’s worth considering what the priorities of the developers are when you use the software that they so graciously shared with you.

                                                                      1. 1

                                                                        And because all the software is well written, it is easy to just extend and recompile it, and keep your new feature as a patch.

                                                                1. 2

                                                                  I am not ashamed to say that I program in C, and that I enjoy it. This puts me at odds with much of programming language discourse, among both researchers and influential practitioners, which holds that C is evil and must be destroyed.


                                                                  Look, I get it, you like to program in C. Please continue to do so.

                                                                  If someone says “it would be great if we had a safer C”, why are you taking it as a personal attack? They’re just saying they want a safer C, not that you need to stop using it. It’s not like C will go away.

                                                                  Sure, the C community may change and shrink. But if the language is the important thing to you, then keep using the language.

                                                                  If, on the other hand, you’re worried about losing the widespread support and community of a first-class language, well, then you’ll have to make a call (not yet, mind you, but maybe in 5-10 years). And the decision will be: use the language you love for most things, or use different languages for most things. The former is still very viable – look at the Common Lisp community, for example – and the latter still allows you to use your preferred language for your own stuff (I still code in SML, myself, sometimes).

                                                                  But I’ll admit I’m tired of people seeing “C has flaws, and we could do better” and interpreting it as an existential threat. Calm down, figure out your priorities, and use whatever language you want to use.

                                                                  1. 2

                                                                    “If someone says “it would be great if we had a safer C”, why are you taking it as a personal attack? They’re just saying they want a safer C, not that you need to stop using it. It’s not like C will go away.”

                                                                    That’s not what they were saying in the quote. It was people who said C is evil and must be destroyed, with the implication that everyone should avoid it. If it’s business-, safety-, or life-critical code, there is even some justification for that, if safer alternatives are available and usable.

                                                                    Regardless, I agree with your point that they shouldn’t worry what folks think. Use it if you want to. Don’t if you’re against it. That’s how most things work in life. :)

                                                                    1. 2

                                                                      Responding mostly to clarify, since I think we’re largely in agreement.

                                                                      It was people who said C is evil, must be destroyed,

                                                                      I guess that’s part of my problem with these reactions: I’ve seen people say things like that, but generally out of frustration, not as a policy position. Even Perl programmers say things like that about Perl, but that doesn’t imply an actual plan to migrate, kill Perl, etc.

                                                                      and with implication everyone should avoid it.

                                                                      I’ve definitely seen more of that attitude. That said, I think it falls into the “change happens, so change or don’t” bucket that I was describing above.

                                                                    2. 2

                                                                      If someone says “it would be great if we had a safer C”, why are you taking it as a personal attack?

                                                                      They aren’t. The article itself says that it would be great if we had a safer C and goes on to explain how that might be done.

                                                                    1. 2

                                                                      This is excellent! It is great to read an article that really articulates the value of C - the things that it does better than other languages, including many proposed replacements for C - and also provides concrete suggestions for improving the safety of C implementations.

                                                                      1. 3

                                                                        A disturbing number of kernel buffer overflows are not classified as security fixes; unless those can all only be triggered by root, I doubt I agree with the classification scheme.

                                                                        1. 1

                                                                          I included overall stats as well as “security fix” only stats to avoid relying on OpenBSD’s own categories.

                                                                        1. 7

                                                                          I looked at 6.1. 008 wasn’t really a bug, just preventative maintenance for Stack Clash. 012 was a logic error related to the size of the freed memory, which triggered an assertion. 016 was a logic error that triggered an assertion in malloc.

                                                                          But cool write up.

                                                                          1. 2

                                                                            Thank you! I’ve corrected 6.1 012 and 016. I’ll leave 008 in the table since it was a published errata patch.

                                                                          1. 1

                                                                            Does anyone have a list of the benefits/drawbacks they are referring to? Is there any discussion I can refer to?

                                                                            1. 7

                                                                              today’s date is the primary reference

                                                                              1. 1

                                                                                oh, hahaha I feel dumb now

                                                                              2. 1

                                                                                Check the date…

                                                                              1. 7

                                                                                The first half of this article is the important bit. This really is a question of language design: philosophy, design goals, affordances, etc.

                                                                                Programming in Go feels like programming in a better C. Programming in Rust feels like programming in a better C++.

                                                                                The rest of the article seems like an attempt to rationalize this feeling, which is a sensible thing to do but - as others have pointed out - not all the rationalizations are really fair.

                                                                                The initial point remains though: we might want a memory-safe C but Rust is not it. Rust - in design and feel - is a memory-safe C++. Whether that bothers you or not depends on your view of C++.

                                                                                1. 13

                                                                                  The initial point remains though: we might want a memory-safe C but Rust is not it. Rust - in design and feel - is a memory-safe C++. Whether that bothers you or not depends on your view of C++.

                                                                                  I think Rust and C have more in common than Rust and C++.

                                                                                  • Fundamentally, Rust is a struct oriented language where you define functions that take the struct as an argument - just like C. C++ is an object oriented language with inheritance trees, function overriding (runtime dispatch tables), etc. Traits in Rust make working with structs feel superficially like OO, but in reality it’s more like defining interface implementations for structures so you can use them interchangeably, which is actually very different from OO.
                                                                                  • Neither Rust nor C have exceptions, where C++ does.
                                                                                  • Reasoning about when structures are deallocated in Rust is more like C than C++ (both C and Rust have trivial memory management rules for developers, where C++ has relatively complex rules and norms about how and where to define destructors, and it is easy to screw up deallocation in C++).

                                                                                  As someone who writes mostly C for a living the Rust model looks fairly straightforward, where the C++ model looks relatively complex. Rust is just structures and functions, where C++ is objects and templates and exceptions and abstract virtual base classes and other stuff that makes it non-obvious what’s actually happening when your code runs. To me, Rust feels like a memory and thread safe C (with interfaces), and less like a memory and thread safe C++.

                                                                                  1. 2

                                                                                    Rust is definitely preferable to C++. I haven’t programmed in Rust much but what I have done I have enjoyed. Sadly, I have programmed in C++ for a couple of decades, none of which I particularly enjoyed (although C++11 did make some things less painful).

                                                                                    C++ programmers rarely use the whole language, not least because it is impossible for any one person to remember the whole language at once. People tend to find a subset of the language that they think they understand and stick to that. These days, I see mostly functional-style C++. I don’t see much use of inheritance or exceptions. The C++ that I have written and code reviewed in the last five years or so looks quite a lot like Rust.

                                                                                    Rust does it better. It’s like a nice subset of C++ with better ergonomics. Its type system is more pleasant to use than C++ templates. Its support for functional programming is better. And, of course, it has RAII that really works because the compiler ensures that it is safe.

                                                                                    I don’t know what the right term is: aesthetics? style? feel? mindset? Whatever it is, Rust shares it with C++. I don’t think it is an insult to say that it is C++ done right.

                                                                                    I would contrast that with C. Some people may use C because they have to: they need something with manual memory management for example. But manual memory management is not a design goal for C. The core ethos of C, perhaps now diluted by standards bodies and compiler writers, is simplicity. It’s about having a small number of orthogonal building blocks from which bigger things can be made. I think that ethos has been passed on to Go.

                                                                                  2. 11

                                                                                    wat. How is golang in any way like a “better C”? I remember when it came out some touted it that way and I got excited… and then it just isn’t that at all, IME. The GC alone disqualifies it.

                                                                                    1. 2

                                                                                      I agree. I’d express it as: C and Go value simplicity. C++ and Rust do not sacrifice simplicity for nothing, but they are eager to trade it away for almost anything.