1. 5

    Planting some chillies. Looks like they’re going to take a few months to grow; that’s even longer than it takes to compile Firefox from source, which I didn’t know was possible :’(

    1. 1

      I wonder, is this due to Servo being written in Rust or was it always slow to begin with?

      1. 1

        Worst case is around 80 minutes on my Lenovo x series laptop. You can get it down to 10 with a regularly updated tree, a decent machine or an sccache cluster.

        Good luck with your chili project. Sounds like a better plan than compiling Firefox in a weekend for sure :-)

      1. 2

        Battling this flu that is currently still very much winning. Finding someone who is both skilled enough at PHP and JavaScript and willing to sit in my place and take over development and support for my WordPress plugins. The first part on its own is easy, but combining it with the latter makes it incredibly hard. Even though I’m willing to pay (really) well. It strengthens my resolve to get out of this golden cage, because if others are “picky”, why not me?

        1. 4

          Finalizing my tree-walking interpreter for the Monkey language (written in C, code here: https://github.com/dannyvankooten/monkey-c-monkey-do) from the wonderful “Writing an interpreter in Go” book by Thorsten Ball. Had so much fun building this, I can’t wait to get started on his second book in which the project continues to build a bytecode VM.

          1. 2

            Totally love the name! Now you need to figure out which companion tools you can call “see-no-evil” and “hear-no-evil” :)

          1. 13

            I found this to be a lovely 30-minute read on C’s motivation, history, design, and surprising success. I marked down over 50 highlights in Instapaper.

            If you haven’t written C in a while, you should give it a try once more! Some tips: use a modern build of clang for great compiler error messages; use vim (or emacs/vscode) to be reminded C “just works” in editors; use a simple Makefile for build/test ergonomics.

            In writing loops and laying out data in contiguous arrays and structures, remind yourself that C is “just” functions, data atoms, pointers, arrays, structs, and control flow (plus the preprocessor!).

            Marvel at the timeless utility of printf, a five-decade-old debugging Swiss army knife. Remember that to use it, you need to #include <stdio.h>. As Ritchie laments here, C regrettably did not include support for namespaces and modules beyond a global namespace for functions and a crude textual include system.
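
            To see how little that is, here’s a tiny sketch in that spirit (my own illustrative example; nothing beyond the core language and stdio.h):

                #include <stdio.h>

                /* A struct is just contiguous data: no methods, no magic. */
                struct point {
                    int x;
                    int y;
                };

                int main(void) {
                    /* An array of structs is one contiguous block of memory. */
                    struct point path[3] = {{0, 0}, {1, 2}, {3, 5}};

                    /* Walk it with a pointer and a plain loop; printf is the debugger. */
                    for (struct point *p = path; p < path + 3; p++)
                        printf("(%d, %d)\n", p->x, p->y);

                    return 0;
                }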

            Refresh your memory on the common header files that are part of the standard and needed for doing common ops with strings, I/O, or dynamic heap allocation. You can get a nice refresher on those in your browser here:

            https://man.cs50.io/

            Overall, you’ll probably regain some appreciation for the essence of programming, which C represents not due to an intricate programming language design with an extensive library, but instead due to a minimalist language design not too far from assembly, in which you simply must build your own library, or rely directly on OS system calls and facilities, to do anything useful. There is something to be said for languages so small they can truly fit in your head, especially when they are not just small, but also fast, powerful, stable, ubiquitous, and, perhaps, as timeless as anything in computing can be.

            1. 5

              I don’t think that a lack of namespaces is really something to lament. _ is prettier than ::. For all the handwringing you see about them, I’ve literally never seen a symbol clash in C ever.

              1. 6

                I love C and it continues to be my first language of choice for many tasks, but namespaces are the first thing I’d add to the language if I could. Once programs get beyond a certain size, you really need a setting between “visible to one file” and “visible EVERYWHERE”. (You can get some of the same effect by breaking your code up into libraries, but even then the visibility controls are either external to the language or non-portable compiler extensions!)

                And for the record, I’d prefer an overload of the dot operator for namespaces. Or maybe a single-colon operator – some languages had that before the double-colon became ubiquitous.
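
                To make the two existing visibility levels concrete, here’s a minimal sketch (the str_ prefix and names are purely illustrative):

                    /* str_util.c */

                    /* static = "internal": visible to this translation unit only. */
                    static int is_space(char c) {
                        return c == ' ' || c == '\t' || c == '\n';
                    }

                    /* No static = "external": visible to the whole program, so by
                       convention we fake a namespace with a prefix (str_trim, str_len, ...). */
                    char *str_trim(char *s) {
                        while (is_space(*s))
                            s++;
                        return s; /* skips leading whitespace only */
                    }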

                1. 2

                  I tend to agree that this isn’t a huge issue in practice, especially since so very many large and well-organized C programs have been written (e.g. CPython, redis, nginx, etc.), and the conventions different teams use aren’t too far apart from one another. As you noted, they generally just group related functions together into files and name them using a common function namespace prefix, like ns_. But, clashes are possible, and it has meant C is used much more as a starting point for bespoke and self-contained programs (again, CPython, redis, and nginx are great examples), rather than as a programming environment to wire together many different libraries, as is common in Python, or even Go.

                  As dmr describes it in the OP, this is just a “smaller infelicity”.

                  Many smaller infelicities exist in the language and its description besides those discussed above, of course. There are also general criticisms to be lodged that transcend detailed points. Chief among these is that the language and its generally-expected environment provide little help for writing very large systems. The naming structure provides only two main levels, ‘external’ (visible everywhere) and ‘internal’ (within a single procedure). An intermediate level of visibility (within a single file of data and procedures) is weakly tied to the language definition. Thus, there is little direct support for modularization, and project designers are forced to create their own conventions.

                  1. 3

                    I don’t really think that namespaces are the reason people don’t use C for gluing together lots of other C programs and libraries. I think people don’t do that in C because things like Python and Bash are a million times more suitable for it in a million different ways, only one of which is namespaces.

                    Large systems don’t need to all be linked together with one big ld call. Large systems should be made up of small systems interacting over standardised IPC mechanisms, each of which of course have their own independent namespaces.

                    There’s also the convention we see of lots of tiny files, which is probably not actually necessary today. It made more sense in the days of centralised version control with global file locking, where merging changes from multiple people was difficult or impossible and one person working on a file meant nobody else could touch it. But today, most modules should probably be one file. Why not?

                    OpenBSD drivers, for example, are usually a single .c file, and they recommend that people porting drivers from other BSDs merge all the files for that driver into one. I actually find this easier to understand: it’s easier for me to navigate one file than a load of files.

                2. 4

                  If you haven’t written C in a while, you should give it a try once more! Some tips: use a modern build of clang for great compiler error messages; use vim (or emacs/vscode) to be reminded C “just works” in editors; use a simple Makefile for build/test ergonomics.

                  I am going through the Writing An Interpreter In Go book but in C (which is totally new to me, coming from a JavaScript background) and it’s been the most fun I had in years. I’m actually starting to get quite fond of the language and the tooling around it (like gdb and valgrind).

                  1. 2

                    I recommend you take a look at http://craftinginterpreters.com as well, if you want something similar for C. The book is in two parts: the first part a very simple AST-walking interpreter written in Java, the second part a more complex interpreter that compiles the language to bytecode and has a VM, closures, GC, and other more complicated features, written in C. If you’ve already read Writing An Interpreter In Go you can probably skip the Java part and just go straight to the C part.

                    1. 3

                      Thanks, I will (after I’m done with this). I actually really liked that the book is in Go but my implementation is in C, as it made it a bit more exciting for me to think about how I would structure things in C and see what the tradeoffs are versus doing it in Go. Otherwise I’d be tempted to skip entire chapters and just re-use the author’s code, which obviously doesn’t help if my goal is to learn how it’s actually done.

                  2. 4

                    so small they can truly fit in your head

                    Very true. One thing I’ve noticed, going to C from Rust and especially C++, is how little time I spend now looking at language docs, fighting the language or compiler itself, or looking at code and wondering, “WTF does this syntax actually mean?”

                    There’s no perfect language though. I do pine sometimes for some of the fancier language features, particularly closures and things that allow you to express concepts directly in code, like for(auto i : container_type) {...} or .map(|x| { ...}).

                    1. 1

                      One thing I’ve noticed, going to C from Rust and especially C++, is how little time I spend now looking at language docs, fighting the language or compiler itself, or looking at code and wondering, “WTF does this syntax actually mean?”

                      It’s also really nice being able to type:

                      man $SOME_FUNCTION
                      

                      to get the documentation for any function in the standard library (and many others not in the standard library). I do a lot of my development on the train (without access to the internet) and man pages are my best friend.


                      On the topic of “wtf does this syntax actually mean” I do think C has some pain points. East const vs west const is still a point of confusion for many, and C’s function pointer syntax will melt your brain if you stare at it too long.
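
                      A couple of the usual suspects, as an illustrative snippet (not from the article):

                          #include <stddef.h>

                          /* East const vs. west const: these two mean exactly the same thing. */
                          const char *a = "hi";  /* west const: pointer to const char */
                          char const *b = "hi";  /* east const: also pointer to const char */

                          static char buf[] = "hi";
                          char *const c = buf;   /* different beast: const pointer to mutable char */

                          /* Function pointer syntax: cmp is a pointer to a function taking two
                             const void pointers and returning int (a qsort-style comparator). */
                          int (*cmp)(const void *, const void *);

                          /* A typedef usually saves your sanity: */
                          typedef int (*comparator)(const void *, const void *);
                          comparator cmp2 = NULL;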

                      At one point I wrote a C backend for a compiler I was working on and needed to really understand how declarations worked. I found that this article does a really good job explaining some of the syntax insanity.

                    2. 4

                      If anyone is looking to give modern C a try I would recommend reading How to C. It’s a short article that bridges the gap between C from K&R Second Edition and C in $CURRENT_YEAR. The article doesn’t cover the more nuanced details of “good” C programming, but I think that K&R + How to C is the best option for people who are learning the C language for the first time.

                      1. 2

                        Awesome recommendation! As someone who is picking up C again for some fun programming tasks after a decade-long hiatus (focused on other higher-level languages), this is super useful for me. I have been re-reading K&R 2nd Ed and looking for something akin to what you shared in “How to C”.

                        I also found these two StackOverflow answers helpful. One, on the various C standards:

                        https://stackoverflow.com/questions/17206568/what-is-the-difference-between-c-c99-ansi-c-and-gnu-c/17209532#17209532

                        The other, on a (modern) set of useful reference books:

                        https://stackoverflow.com/questions/562303/the-definitive-c-book-guide-and-list/562377#562377

                    1. 4

                      Helping my brother renovate his home and hacking away on a static site generator I’m building in C. In between the usual parental obligations that come with having 2 toddlers to care for.

                      Current working name for the site generator is Coconut, unless I can think of something better.

                      Enjoy your weekends everyone!

                      1. 0

                            It perhaps is easier for you as a developer, but I’d argue that a lot of web applications should be local software that can be used without an internet connection and GBs of RAM. Let’s not forget what the web is doing with regards to surveillance capitalism (i.e. messing up democracies because it makes some people money) and climate change.

                        1. 6

                          Isn’t that less a problem with the web platform and more of a problem with specific web sites? Additionally, surveillance conducted on the user of a web site is much easier to see, simply by dint of the developer tools built into all browsers. The effort you need to put into seeing what a desktop application is phoning home about is much greater.

                          1. 5

                                In some cases, that’s completely true. But for some other use cases, I refuse to install software when the service could equivalently be used as a website (this especially applies to mobile apps). Also, be aware that applications are at least as capable of surveillance as web apps, if not more so. The app that hosts all others for most people, Windows, is actually a spying machine.

                          1. 2

                            Probably Rust, although I wish the compiler could be (a lot) faster.

                            1. 6

                                    Learning C and a game of paintball with friends because one of them is becoming a dad, followed up by an Indian restaurant which will probably end with me spending my Sunday on the toilet. :-)

                              1. 4

                                      There’s a resurgence of interest in C apparently, looking through this thread. Kinda inspiring me to rehash it myself. After I’m done with Haskell, maybe I should go back to the kilo project I abandoned. How are you doing your learning? Do you have a structure in mind, like a book or something?

                                      PS - As for the Indian restaurant, apart from asking for it to be less spicy, I suggest getting a generous helping of “ghee” and finishing your meal with a “dahi” dish. The restaurant should know those words. That’s how my folks keep the burn in check :)

                                1. 3

                                  That’s how it feels to me too, my Mastodon feed is full of people picking up C (again). Personally I just want to broaden my understanding and having recently picked up Rust, better see what it attempts to improve on. I’m going by the book, doing the basic exercises and then hopefully a small toy project.

                                  Thank you for the tips! I actually really do like spicy (Vindaloo is my favorite) and normally the afterburn isn’t that bad for me, but this particular restaurant somehow had that effect last time. It’s a restaurant in The Netherlands so my guess is that they’ve toned down all their dishes a lot compared to “the real deal”. Definitely going to try the dahi finisher!

                                  1. 2

                                          You’re welcome :) and btw it’s been a long time since I read that book but I remember it being a really good one - so compact, simple and well written! It’s like the programming equivalent of The Elements of Style. Wish you well on your endeavor!!

                              1. 11

                                I’m very skeptical of the numbers. A fully charged iPhone has a battery of 10-12 Wh (not kWh), depending on the model. You can download more than one GB without fully depleting the battery (in fact, way more than that). The 2.9 kWh per GB is totally crazy… Sure, there are towers and other elements to deliver the data to the phone. Still.

                                        The referenced study doesn’t show those numbers, and even their estimation of 0.1 kWh/GB (page 6 of the study) is taking into account a lot of old infrastructure. On the same page they talk about numbers from 2010, but even then the consumption using broadband was estimated at 0.08 kWh/GB, and the 2.9 kWh/GB figure applied only to 3G access. Again, in 2010.

                                        Using that figure for 2020 consumption is totally unrealistic to me… It’s probably at least 30 times less… Of course, this number will keep going down as more efficient transfers are rolled out, which seems to be happening already, at an exponential rate.

                                        So I don’t think that shaving a few kilobytes here and there is going to make a significant change…

                                1. 7

                                  I don’t know whether the numbers are right or wrong, but I’m very happy with the alternative direction here, and another take at the bloat that the web has become today.

                                  It takes several seconds on my machine to load the website of my bank, a major national bank used by millions of folks in the US (Chase). I looked at the source code, and it’s some sort of encrypted (base64-style, not code minimisation style) JavaScript gibberish, which looks like it uses several seconds of my CPU time each time it runs, in addition to making the website and my whole browser unbearably slow, prompting the slow-site warning to come in and out, and often failing to work at all, requiring a reload of the whole page. (No, I haven’t restarted my browser in a while, and, yes, I do have a bunch of tabs open — but many other sites still work fine as-is, but not Chase.)

                                  I’m kind of amazed how all these global warming people think it’s OK to waste so many of my CPU cycles on their useless fonts and megabytes of JavaScript on their websites to present a KB worth of text and an image or two. We need folks to start taking this seriously.

                                          The biggest cost might not be the actual transmission, but rather the wasted cycles from having to rerender complex designs that don’t add anything to the user experience — far from it, they make it slow for lots of people who don’t have the latest and greatest gadgets and don’t devote their whole machine to running a single website in a freshly-reloaded browser. This also has the side effect of people needing to upgrade their equipment on a regular basis, even if the amount of information you need to access — just a list of a few dozen transactions from your bank — hasn’t changed that much over the years.

                                  Someone should do some math on how much a popular bank contributes to global warming with its megabyte-sized website that requires several seconds of CPU cycles to see a few dozen transactions or make a payment. I’m pretty sure the number would be rather significant. Add to that the amount of wasted man-hours of folks having to wait several seconds for the pages to load. But mah design and front-end skillz!

                                  1. 3

                                    Chase’s website was one of two reasons I closed my credit card with them after 12 years. I was traveling and needed to dispute a charge, and it took tens of minutes of waiting for various pages to load on my smartphone (Nexus 5x, was connected to a fast ISP via WiFi).

                                    1. 2

                                      The problem is that Chase, together with AmEx, effectively have a monopoly on premium credit cards and travel rewards. It’s very easy to avoid them as a bank otherwise, because credit unions often provide a much better product, and still have outdated-enough websites that simply do the job without whistling at you all the time, but if you’re into getting the best out of your travel, dealing with the subpar CPU-hungry websites of AmEx and Chase is often a requirement for getting certain things done.

                                      (However, I did stop using Chase Ink for many of my actual business transactions, because the decline rate was unbearable, and Chase customer service leaves a lot to be desired.)

                                      What’s upsetting is that with every single redesign, they make things worse, yet the majority of bloggers and reviewers only see the visual “improvements” in graphics, and completely ignore the functional and usability deficiencies and extra CPU requirements of each redesign.

                                  2. 9

                                    Sure, there are towers and other elements to deliver the data to the phone. Still.

                                    Still what? If you’re trying to count the total amount of power required to deliver a GB, then it seems like you should count all the computers involved, not just the endpoint.

                                    1. 4

                                            “Still, it’s too big of a difference.” Of course you’re right ;-)

                                      The study estimates the consumption as 0.1 kWh in 2020. The 2.9 kWh is an estimation in 2010.

                                      1. 2

                                              I see these arguments all the time about the “accuracy” of which study’s predictions are “correct”, but it must be noted that these studies are predictions of the average consumption for just transport, and very old equipment is still in service in many, many places in the world; you could very easily be hitting some of that equipment on some requests depending on where your data hops around! We all know an average includes many outliers, and perhaps the average is far less common than the other cases. In any case, wireless is not the answer! We can start trusting numbers once someone develops the energy-usage equivalent of dig.

                                      2. 3

                                        Yes. Let’s count a couple.

                                            I have a switch (an ordinary cheap switch) here that’ll receive and forward 8Gbps on 5W, so it can forward 720,000 gigabytes per kWh, or about 0.0000014 kWh/GB. That’s the power supply rating, so it’ll be higher than the peak power requirement, which in turn will be higher than the sustained, and big switches tend to be more efficient than this small one, so the real number may have another zero. Routers are like switches wrt power (even big fast routers tend to have low-power 40MHz CPUs and do most routing in a switch-like way, since that’s how you get a long MTBF), so if you assume that the sender needs a third of that 0.1kWh/GB, the receiver a third, and the networking a third, then… dumdelidum… the average number of routers and switches between the sender and receiver must be at least 10000. This doesn’t make sense.
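
                                            For anyone who wants to check the arithmetic, here’s the same back-of-the-envelope calculation as a sketch (my assumptions, matching the numbers above):

                                                #include <stdio.h>

                                                int main(void) {
                                                    double gbits_per_s = 8.0;  /* switch throughput */
                                                    double watts = 5.0;        /* power supply rating */
                                                    double gb_per_hour = gbits_per_s / 8.0 * 3600.0; /* 3600 GB/h */
                                                    double kwh_per_hour = watts / 1000.0;            /* 0.005 kWh/h */
                                                    printf("%.0f GB per kWh, %.7f kWh/GB\n",
                                                           gb_per_hour / kwh_per_hour,
                                                           kwh_per_hour / gb_per_hour);
                                                    /* -> 720000 GB per kWh, ~0.0000014 kWh/GB for this one switch */
                                                    return 0;
                                                }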

                                            The numbers don’t make sense for servers either. Netflix recently announced getting ~200Gbps out of its new hardware. At 0.03kWh/GB, that would require 22kW sustained, so probably a 50kW power supply. Have you ever seen such a thing? A single rack of servers would need 1MW of power.

                                        1. 1

                                              There was a study that laid out the numbers, but the link seems to have died recently. It stated that about 50% of the energy cost for data transfer was datacenter costs; the rest was spread out thinly over the network to get the data to its destination. Note that datacenter costs do not just involve the power supply for the server itself, but also all related power consumption like cooling, etc.

                                          1. 2

                                                ACEEE, 2012… I seem to remember reading that study… I think I read it when it was new, and when I multiplied its numbers by Google’s size and by a local ISP’s size, I found that both of them should have electricity bills far above 100% of their total revenue.

                                                Anyway, if you change the composition that way, then there must be at least 7000 routers/switches on the way, or else some of the switches must use vastly more energy than the ones I’ve dealt with.

                                                And on the server side, >95% of the power must go towards auxiliary services. AIUI cooling isn’t the major auxiliary service; preparing data to transfer costs more than cooling. Netflix needs to encode films, Google needs to run Googlebot, et cetera. Everyone who transfers a lot must prepare data to transfer.

                                      3. 4

                                        I ran a server at Coloclue for a few years, and the pricing is based on power usage.

                                        I stopped in 2013, but I checked my old invoices and monthly power usage fluctuated between 23.58kWh and 18.3kWh, with one outlier at 14kWh. That’s quite a difference! This is all on the same machine (little Supermicro Intel Atom 330) with the same system (FreeBSD).

                                            This is from 2009-2014, and I can’t go back and correlate this with what the machine was doing, but fluctuating activity seems the most logical explanation. Would be interesting if I had better numbers on this.

                                        1. 2

                                              With you on the skeptic train: would love to see where this estimate comes from:

                                          Let’s assume the average website receives about 10.000 unique visitors per month

                                              It seems way too high. We’re probably looking at a Pareto distribution, and I don’t know if my intuition is wrong, but I have the feeling that your average WordPress site sees far fewer visitors than that.

                                              Very curious about this now; totally worth some more digging.

                                        1. 5

                                          Is this motherfuckingwebsite.com clocking in at 5 kB in total really that bad in comparison? I don’t think so.

                                          You’ve got the number wrong:

                                              <!-- yes, I know...wanna fight about it? -->
                                              <script>
                                                (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
                                                (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
                                                m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
                                                })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
                                              
                                                ga('create', 'UA-45956659-1', 'motherfuckingwebsite.com');
                                                ga('send', 'pageview');
                                              </script>
                                          

                                              This is something I fail to understand. He doesn’t even use CSS to prevent the text from spreading over the entire width of the screen, but then happily references a JS blob that amounts to 44 KB to spy on the users. As the Czech saying goes: he preaches water, drinks wine.

                                          1. 3

                                            Haha, shoot. The script was triple blocked: first by uBlock, then by uMatrix, then by my Pi Hole. So it did not show up on my browser’s Network tab…

                                            And yes, I’m with you there. 44 kB… Nearly 10 times the size of the rest of that site. For what I think is vanity.

                                            1. 5

                                                  What I found particularly amusing about this was the comment. It’s like if someone saw him get into his private airplane a few minutes after he gave an emotional talk about why people should take extreme measures to lower their carbon footprint and he just stayed there with a guilty face: “I know, I know…”.

                                              It creates an illusion that there’s nothing we could do, practically speaking. We could theoretically build better websites, but not even the strongest advocates do actually bother.

                                              I don’t actually think using GA is that bad. I don’t like it on a personal/ideological level, but I’m not fanatical about it and can see myself using it too in some scenarios. Here, it’s all about the contrasts: no styling, the page that’s to be perceived as ugly and boring by many; everything is as minimalistic as it can be. And then bum, let’s load 44 KB worth of some JS blob.

                                                  Had the objective been to criticize the worst-of-worst bloated websites that take hundreds of milliseconds to load on a modern computer with a decent connection for seemingly no reason, and to demonstrate that things can be simpler on something people could actually imagine using, such as a news portal or magazine (which commonly contain the most bloatware), then it wouldn’t be such a big deal to add some extra 40 KB. But taking extreme measures only to throw any advantage away a few seconds later doesn’t make much sense.

                                              Oh, and an interesting article of yours (I forgot to mention).

                                          1. 6

                                            This has never crossed my mind before, thank you for getting me (and others) thinking about it :)

                                            Now I wonder what the power overhead of interpreting PHP is over a language that gets turned into native code AOT.

                                            1. 6

                                              You may enjoy this article (Lobsters discussion) on the energy usage of various languages. PHP isn’t the best but it isn’t the worst, either.

                                              1. 5

                                                    That’s awesome - I’m glad it was of value! Me neither. I stumbled upon a number that said a GB of data costs about 5 kWh to transfer (about half of it was datacenter, the rest spread out across the network) and it blew my mind. If my home network was that inefficient, it would mean an hour of streaming House of Cards in Ultra HD is just as bad as spending that same time in a moving gasoline car… Luckily, that number seems way too high nowadays and fixed broadband connections are a lot more efficient.

                                                    And yeah, Rasmus Lerdorf gave a talk a few years ago about the CO2 savings if the entire planet updated to PHP 7. Here’s a link to the relevant section. TLDR: at 100% PHP 7 adoption, 7.5B kg less CO2 would be emitted.

                                                1. 5

                                                  I stumbled upon a number that said a GB of data costs about 5 kWh to transfer

                                                  Anyone else think this sounds wildly implausible? The blog in question estimates 2.9 kWh based on 3G, and even that seems absurd imo.

                                                      Per here, 2017 IP traffic amounted to 1.5ZB, or about 171,100,000 GB per hour. At 5 kWh per GB, that would work out to ~7500 TWh, or about 1/3 of 2017’s energy consumption being spent on data transfer alone.
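
                                                      Spelling that arithmetic out as a quick sanity-check sketch (using the figures above):

                                                          #include <stdio.h>

                                                          int main(void) {
                                                              double traffic_gb = 1.5e12;  /* 1.5 ZB = 1.5e12 GB */
                                                              double hours_per_year = 365.0 * 24.0;
                                                              double kwh_per_gb = 5.0;     /* the claimed figure */
                                                              printf("%.3e GB/hour\n", traffic_gb / hours_per_year);
                                                              printf("%.0f TWh/year\n", traffic_gb * kwh_per_gb / 1e9);
                                                              /* -> ~1.712e+08 GB/hour and 7500 TWh/year, as above */
                                                              return 0;
                                                          }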

                                                  This would be more energy than we spend on all transportation of people and goods, worldwide combined (about 26% of energy consumption worldwide, per the EIA here).

                                                      It’s hard to find data breaking down energy usage by segment to the point where you could directly pin a number on “IP data transfer” by itself (which in and of itself raises questions about where this 5 kWh came from), but just looking at the breakdowns I can find, 5 kWh doesn’t seem to pass the smell test.

                                                  1. 6

                                                    That was my reaction too, but I think the main thing is that this study was old (2007 or so). It was this study, although the link seems to have died very recently.

                                                        I actually just found another study that seems more up to date and credible: Electricity Intensity of Internet Data Transmission. Key line (according to me):

                                                        This article derives criteria to identify accurate estimates over time and provides a new estimate of 0.06 kWh/GB for 2015. By retroactively applying our criteria to existing studies, we were able to determine that the electricity intensity of data transmission (core and fixed-line access networks) has decreased by half approximately every 2 years since 2000 (for developed countries), a rate of change comparable to that found in the efficiency of computing more generally.

                                                    1. 2

                                                      Oh nice, thanks for the link! That makes a lot more sense to me just squinting at power consumption of various parts of the chain.

                                              1. 27

                                                Was anyone else surprised Bram works on Google Calendar?

                                                1. 14

                                                  I’ve been using Vim for almost 20 years and had absolutely no idea (1) what Bram looked like or (2) that he worked for Google.

                                                  1. 3

                                                    Definitely.

                                                          Though I shouldn’t be; it seems like they hired a ton of the previous generation of OSS devs: I’m thinking of things like vim, afl (uncertain, though the repo is under Google’s name now), kismet, etc.

                                                    1. 2

                                                      It’s just not what I would’ve guessed would be the highest and best use of his talents.

                                                      I’m not saying I believed he was working on vim, I know better than that. I’m just surprised it was something so…ordinary and corporate.

                                                    2. 3

                                                      Yes! And that he sounds as if Google is still a start-up and not one of the biggest companies in the world. Had to check the date of the article. Of course it doesn’t feel like a startup, Bram…

                                                      1. 2

                                                        Maybe he means Google Zurich, which seems to have expanded by a lot lately?

                                                      2. 2

                                                        Me, honestly.

                                                      1. 4

                                                        Challenges 7-25 (not necessarily all this week) of adventofcode.com, in Rust.

                                                        1. 3

                                                          I’m a fan of https://usefathom.com. It’s very minimal, but I don’t have many requirements, so it suits my needs pretty well. I like that it’s a single Go binary, which I run on a $5/month Digital Ocean droplet.

                                                          1. 7

                                                            Please note that while this may currently work, it is no longer being maintained and is very different from the paid Fathom Analytics product. I built the initial open-source version of Fathom that you are running, but after I left, the project moved to a centralized and closed-source model.

                                                            1. 2

                                                              I wasn’t even aware of this. Thanks for the update.

                                                              1. 2

                                                                Thanks for making that, I use Fathom for my site and like it. Too bad they closed it up.

                                                                1. 1

                                                                  The latest communication is that “We are going to be releasing a new version this year”, FYI

                                                                  1. 2

                                                                    We’ll see.

                                                              1. 3

                                                                Over the years I have tried 3 times to install FreeBSD on different ThinkPads via CD or USB stick. It never even booted. On the same hardware, every Linux distro I ever tried worked pretty much out of the box.

                                                                1. 2

                                                                Some ThinkPads are known to be problematic due to a BIOS bug that causes problems booting from GPT-formatted disks. There is a fix for this, which I know NomadBSD includes out of the box (and it’s USB bootable).

                                                                  https://nomadbsd.org/ - “The GPT layout has been changed to MBR. This prevents problems with Lenovo systems that refuse to boot from GPT if “lenovofix” is not set, and systems that hang on boot if “lenovofix” is set.”

                                                                  1. 1

                                                                  Weird. I certainly don’t share that experience, especially not on ThinkPads. Every BSD (or Linux) distribution I throw at them just works, provided I formatted the USB drive correctly (using the instructions from their installation pages).

                                                                  1. 3

                                                                    Pi Hole (network level ad-blocking for all devices in our home network) and Spotify server controlling the speakers in our living room, using raspotify.

                                                                    1. 1

                                                                      How do you turn on/off the speakers? Any recommendations?

                                                                      1. 2

                                                                        Haha yes, that’s the part that could do with some improvement, I guess. We just don’t turn them off, since power draw is very low with the speakers on, or we do it manually as the speakers are in an accessible spot. Same with volume control btw; haven’t yet figured out how to control device volume from Spotify on other devices.

                                                                    1. 2

                                                                      But seriously, I love Mailinabox. Been using it since 2016, went through several upgrades (incl. a distro upgrade). Never had a single issue with it.

                                                                      1. 3

                                                                        Hey, nice work. I’ve been following the project a bit after reading your launch post and I like how you are progressing at a steady pace.

                                                                        I’m working on something similar but aimed at somewhat less tech savvy users and solely focussed on WordPress, as I felt that’s where I could get the most “bang for my buck” and make it super easy to hopefully get some people off of Google Analytics. It’s called Koko Analytics, GPL licensed, free, no need for monetization at this point.

                                                                          Also, I’m sorry for ignoring your PR on Fathom for so long, as I was also the person who developed the initial open-source version of that. I’m happy GoatCounter sprang into existence though. As most can tell looking at Fathom now, the open-source version of it is pretty much dead at this point. That’s why I wholeheartedly agree with your other comment in this thread:

                                                                          In short, focusing only on business use might make sense if you’re only interested in running a business, but if you want to make the internet a bit better, then the only real option is to offer a SaaS for free, at least for personal use. Actually, I don’t even see the point of this entire project without doing this, to be honest.

                                                                        Anyway, keep up the good work!

                                                                        1. 1

                                                                          Thanks!

                                                                          The focus isn’t really on “tech savvy users”; ideally, it should be usable for everyone. I admit there’s still some work to be done there though 😅 One step at a time…

                                                                          The PR isn’t a big issue; it was minor and I intentionally didn’t do more to “test the waters” to see how actively maintained the project is. I just wish the current maintainers would be more upfront about Fathom’s status as they’re kind of leading people on with it IMHO. Either way, I’m happy with how things worked out.

                                                                          1. 1

                                                                            I just wish the current maintainers would be more upfront about Fathom’s status as they’re kind of leading people on with it IMHO.

                                                                              Agreed. That’s also perhaps a failure on my part: I should have been clearer in my terms for handing over the project, as I was probably much too generous. I didn’t sell my stake or anything, as I was mostly concerned with ensuring the project lives on. But by project, I meant the open-source project mostly, not the direction they went in.

                                                                        1. 4

                                                                          Main Workstation

                                                                          • OS: 64bit Mac OS X 10.13.3c 17D47
                                                                          • Kernel: x86_64 Darwin 17.4.0
                                                                          • Shell: zsh 5.6.2
                                                                          • Resolution: 3440x1440 | 3440x1440 | 1920x1080
                                                                          • CPU: Intel Core i7-7700K @ 4.20GHz
                                                                          • GPU: MSI VGA Graphic Cards RX 580 ARMOR 8G OC
                                                                          • RAM: 64GB
                                                                          • Keyboard: Redragon Kumara with Cherry Reds
                                                                          • Mouse: Corsair Harpoon RGB
                                                                          1. 4

                                                                            Not sure where to look. I get itchy having more than 5 tabs open, let alone that many screens begging for my attention. Kudos to you for being able to handle all that.

                                                                            1. 2

                                                                              I’m with you; most of the time I’m fine hacking on my little MacBook Air, and I generally use things in full screen mode.

                                                                              • Work: MBP 15”
                                                                              • Personal Travel/Bumming Around: MBA (Retina 2019)
                                                                              • Personal Home: MBP 15” (2012)

                                                                              my home machine is the only one with two monitors, and they are:

                                                                              • a 25” that only displays code
                                                                              • the 15” that only displays Slack & Chrome
                                                                            2. 2

                                                                                I’m actually most impressed by the soundproofing setup you have, although I get the impression that it’s meant to kill echoes for videoconferencing, rather than mute outside noises, right?

                                                                              1. 1

                                                                                  Correct, the setup is meant to kill echoes. Soundproofing to mute outside noise is significantly more difficult and costly, but I eventually want to get there.