1. 2

    Is IMAP considered the “cleanest” implementation of email possible?

    1. 3

      No, but JMAP probably is.

      IMAP is like a filesystem with poor performance, and it doesn’t allow batching of commands.

      1. 1

        FWIW, IMAP also seems to have accumulated its own layers of cruft.

      1. 7

        Neat idea! One question though: how do you handle renewals? In my experience, PostgreSQL (9.x at least) can only re-read the certificate upon a server restart, not upon a mere reload. Therefore, all connections are interrupted when the certificate is changed. With Let’s Encrypt this will happen more frequently. Did you find a way around this?

        1. 5

          If you put nginx in front as a reverse TCP proxy, Postgres won’t need to know about TLS at all and nginx already has fancy reload capability.
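
          A minimal sketch of that setup with nginx’s stream module, with hostnames and paths as placeholders. One caveat: stock libpq negotiates TLS inside the Postgres protocol rather than at connect time, so terminating TLS at nginx like this assumes clients that can speak TLS directly on the socket:

          stream {
              server {
                  listen 5432 ssl;
                  ssl_certificate     /etc/letsencrypt/live/db.example.com/fullchain.pem;
                  ssl_certificate_key /etc/letsencrypt/live/db.example.com/privkey.pem;
                  # plaintext Postgres bound to loopback only
                  proxy_pass 127.0.0.1:5433;
              }
          }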

          1. 3

            I was thinking about that too - and it made me also wonder whether using OpenResty along with a judicious combination of stream-lua-nginx-module and lua-resty-letsencrypt might let you do the whole thing in nginx, including automatic AOT cert updates as well as fancy reloads, without postgres needing to know anything about it at all (even if some tweaking of resty-letsencrypt might be needed).

            1. 1

              That’s funny, I was just talking to someone who was having problems with “reload” not picking up certificates in nginx. Can you confirm nginx doesn’t require a restart?

              1. 1

                Hmm, I wonder if they’re not sending the SIGHUP to the right process. It does work when configured correctly.
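
                For reference, either of these reloads config and certificates without dropping the master process (assuming the standard pid file location):

                nginx -s reload                          # let the binary signal the master
                kill -HUP "$(cat /var/run/nginx.pid)"    # or send SIGHUP yourself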

            2. 2

              I’ve run into this issue as well with PostgreSQL deployments using an internal CA that issued short-lived certs.

              Does anyone know if the upstream PostgreSQL devs are aware of the issue?

              1. 19

                This is fixed in PG 10. “This allows SSL to be reconfigured without a server restart, by using pg_ctl reload, SELECT pg_reload_conf(), or sending a SIGHUP signal. However, reloading the SSL configuration does not work if the server’s SSL key requires a passphrase, as there is no way to re-prompt for the passphrase. The original configuration will apply for the life of the postmaster in that case.” from https://www.postgresql.org/docs/current/static/release-10.html
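
                Concretely, on PG 10 either of these picks up a new certificate without a restart (the data directory path is just an example):

                pg_ctl reload -D /var/lib/postgresql/10/main
                psql -c 'SELECT pg_reload_conf();'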

            1. 5

              Either someone else had my GitHub username, or I registered the account a long time ago and didn’t add an email address / lost my password. The account was dormant (no repos). I emailed GitHub and asked to take the username and they gave it to me with no questions asked. I’m quite grateful for this.

              As for the article: by this same logic it seems to me that you should also argue that domain names should be forever too…

              In FreeBSD we use GitHub heavily in the ports tree. We have SHA256 checksums on our distfiles, so if someone acquired a previously active account and tried to serve malicious code from the repo, it would fail. Several times I have caught projects changing their git tags when the ports tree threw checksum errors.
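
              For anyone unfamiliar with the ports tree: each port pins its distfiles in a distinfo file, so a swapped tarball fails its checksum before anything builds. An entry looks like this (values illustrative):

              TIMESTAMP = 1516000000
              SHA256 (pkgname-1.0.tar.gz) = 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08
              SIZE (pkgname-1.0.tar.gz) = 123456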

              1. 2

                I emailed GitHub and asked to take the username and they gave it to me with no questions asked. I’m quite grateful for this.

                Worth noting we have a set of criteria for doing this: the account must be inactive, there must be no repositories with content, etc.

              1. 2

                Isn’t the reasonable solution to enforce a restrictive outbound firewall policy?

                1. 2

                  Indeed. That’s the obvious reasonable solution, but implementing it means giving up expectations of things like unfettered access to the Internet via a NAT gateway. A lot of network operators simply won’t go with it, and a lot of systems administrators simply don’t want to maintain a proxy server for those times when you need/want to, e.g., download updates from the Internet.
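
                  For what it’s worth, the pf version of such a default-deny outbound policy is only a few lines; the interface name and proxy address here are hypothetical:

                  ext_if = "em0"
                  proxy  = "10.0.0.10"    # the only host allowed to talk out directly

                  block out on $ext_if all
                  pass  out on $ext_if from $proxy to any keep state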

                  The only institutions I know of presently that do this are banks, and even then, not all banks do it.

                  1. 1

                    I’ve heard of places that use “kiosk” terminals: shared computers with full access to the Internet, while all the real computers are air-gapped on an isolated network. Updates/downloads/information are sneakernetted to the real network via USB drive or write-once CD/DVD. And of course there is “Stallman Net”.

                1. 2

                  I use cryfs for this; it’s a transparent FUSE filesystem that maps one folder with plaintext (don’t store this in Dropbox) into another folder with a bunch of ciphertext blocks (store this in Dropbox).

                  Doesn’t (yet) work on Windows; not sure about mobile, but it’s pretty painless.
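
                  Usage really is painless; a minimal session looks like this (directory names are just examples):

                  # first run creates the encrypted store and prompts for a password
                  cryfs ~/Dropbox/encrypted ~/private
                  # work in ~/private; only ciphertext blocks land in ~/Dropbox/encrypted
                  fusermount -u ~/private    # unmount when done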

                  1. 1

                    Although I’d prefer having everything encrypted client-side, this would break all the Dropbox functionality on my phone – hence I went with something in between. Thanks for sharing your interesting setup!

                    1. 0

                      That looks incredibly painful. There’s no way it works on anything but a Linux desktop.

                      1. 3

                        Should work on macOS too… but I live my life on Linux desktops, so that’s good enough for me.

                        Keep in mind that the alternative we are comparing to is “manually click a bunch of buttons to encrypt a PDF for anything you want to keep secure”.

                    1. 1

                      Interesting. Wonder why this isn’t being upstreamed to HAProxy, but maybe there are too many major changes to the codebase for Willy to be comfortable with.

                      I’m a little skeptical of the claims of being faster than Nginx and Varnish without formal benchmarks and specific configurations being listed. It’s not uncommon for someone to do a basic benchmark of Varnish and not have it configured optimally. There are more settings than just the VCL.

                      1. 2

                        Hi, here is the benchmark https://github.com/jiangwenyuan/nuster/wiki/Performance-benchmark:-nuster-vs-nginx-vs-varnish

                        It includes hardware, software, system parameters, config files, and such. Any parameter-tuning suggestions are welcome.

                        1. 2

                          Can you ensure that you’re using the critbit hashing algorithm? That wasn’t listed, and it’s possible the package/distro you’re on still has this set to “classic” by default. Can you also test against Varnish 5, so all software is at its current major release branch?

                          Is the backend webserver sending any Cache-Control headers for this content? Varnish will obey those regardless of your beresp.ttl setting unless you forcibly remove the header. So can you verify that Varnish is getting 100% hits and not hit_for_pass? Otherwise Varnish is accepting many connections in a burst and then making a single request to the backend, which is far from optimal compared to serving everything from cache instantly.
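
                          For reference, the hash algorithm is a startup option (varnishd -h critbit), and backend Cache-Control/Set-Cookie headers can be overridden in VCL so your TTL wins. A sketch in VCL 4.0 syntax, with a placeholder backend and an example TTL:

                          vcl 4.0;

                          backend default { .host = "127.0.0.1"; .port = "8080"; }

                          sub vcl_backend_response {
                              # don't let backend headers shorten the TTL or trigger hit-for-pass
                              unset beresp.http.Set-Cookie;
                              unset beresp.http.Cache-Control;
                              set beresp.ttl = 1h;
                          }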

                          1. 2

                            If you look at the check-http-response-header-and-data-size section, you can see that there are no cache-related headers. I’m sure that requests 100% go to Varnish (there’s no log on the backend server except the initial request).

                            And it is critbit; maybe I should test against Varnish 5 :)

                            Do you have any other config suggestions, like thread pools?

                            1. 2

                              When tuning Varnish, think about the expected traffic. The most important thread setting is the number of cache-worker threads. You may configure thread_pool_min and thread_pool_max. These parameters are per thread pool.

                              Although Varnish’s threading model allows you to use multiple thread pools, we recommend that you do not modify this parameter. Based on our experience and tests, 2 thread pools are enough; the performance of Varnish does not increase when adding more than 2 pools.

                              Defaults in 4.0+ are a thread_pool_min of 100 (100 free worker threads) and a thread_pool_max of 5000 (5000 concurrent worker threads).

                              As this is a synthetic benchmark not replicating real-world scenarios, the question is: how many concurrent connections do you want? You could start the service with a thread_pool_min of… 5000 and a thread_pool_max of 10000 so you can instantly handle more responses, if you wanted (at the cost of more memory).

                              Is this a single-socket / single-CPU (Intel(R) Xeon(R) CPU X5650 @ 2.67GHz) server? If you have multiple physical CPUs, you should make sure the process affinity is set to pin it to a single CPU socket, or you will pay a performance penalty for accessing memory attached to the remote CPU socket.
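
                              Putting that together, a startup line might look like this (Linux NUMA pinning shown via numactl; the flags and numbers are from the discussion above):

                              numactl --cpunodebind=0 --membind=0 varnishd \
                                  -a :8080 -f /etc/varnish/default.vcl \
                                  -p thread_pools=2 \
                                  -p thread_pool_min=5000 \
                                  -p thread_pool_max=10000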

                              1. 1

                                Hi, I’ve tested against Varnish 4.1.8 and 5.2.1, with and without -p thread_pool_min=5000 -p thread_pool_max=100000; it does not make a difference.

                                1. 1

                                  Let me do the benchmark again.

                                  It’s a 12-core CPU; I tested with both 1 core and 12 cores.

                        1. 13

                          When did the definition of bit rot change? Bit rot is when your storage has bits flip and slowly corrupts, solved by filesystems like ZFS, which checksum the data and can heal/repair the damage automatically.

                          1. 8

                            No, that’s the original definition, from the pre-ESR Jargon File:

                            bit rot: n. Also {bit decay}. Hypothetical disease the existence of which has been deduced from the observation that unused programs or features will often stop working after sufficient time has passed, even if `nothing has changed’.

                            1. 2

                              I agree, bit rot is corrupt data on disk. I like to use the term software entropy for what this article is talking about.

                              1. 2

                                I agree, the phenomenon described in the linked article is more accurately denoted as “technical debt”.

                                1. 4

                                  I don’t think tech debt is the right description. Even a very well constructed program needs maintenance to keep up with the changing APIs and systems its dependencies run on. This is just software maintenance.

                                  1. 3

                                    I agree with you. Technical debt is better applied to decisions during the design and implementation phase coming back to haunt you (in my opinion).

                                    But “bit rot” is definitely incorrect in this context!

                              1. 2

                                  There’s an option in Strava to exclude your data from the heatmaps. Someone should inform the military.

                                1. 2

                                  I don’t know enough about Linux namespaces. Couldn’t this just be handled with chroot?

                                  1. 2

                                      I don’t know what you mean by “couldn’t this just be handled with chroot” (what solution are you suggesting exactly?), but mount namespaces are not the same as chroot! Here are a couple of articles that might be helpful: http://man7.org/linux/man-pages/man7/mount_namespaces.7.html, https://lwn.net/Articles/689856/
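
                                      A quick way to see the difference from a shell (assuming root and util-linux’s unshare): a mount made inside a new mount namespace is invisible to the host, which chroot alone can’t give you:

                                      # in a new mount namespace, this tmpfs exists only for this shell
                                      sudo unshare --mount sh -c 'mount -t tmpfs none /mnt && ls /mnt && sleep 30'
                                      # meanwhile, from another terminal on the host, /mnt is unchanged
                                      ls /mnt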

                                    1. 1

                                      You can’t access the cgroup namespaces from the host system? That’s crazy. This is such a crucial feature of jails and zones.

                                  1. 2

                                    This is actually a good thing, because it offers plausible deniability for future Fappening-style leaks.

                                    1. 2

                                      Wait until you see what happens next election when another liberal woman is running for the Democrats. It won’t be pretty.

                                      1. 3

                                        Wait until you see what happens next election when another liberal woman is running for the Democrats. It won’t be pretty.

                                          The whole point is that if fakes are indistinguishable from real footage, video no longer matters. If you’re bothered by people masturbating to falsified videos, you have bigger problems than those that can be handled by reasoning.

                                        1. 3

                                          I agree, this cuts both ways. Any person “caught” in an actual documented embarrassing position can plausibly claim the footage was generated by a malicious party.

                                          In the end, this will probably create a market for cryptographically secured cameras, like some still cameras used for forensics.

                                          But there will be a lot of turmoil before this all shakes out.

                                          1. 3

                                            this will probably create a market for cryptographically secured cameras, like some still cameras used for forensics

                                            I think this is going to have to happen for all video and camera devices and content over time. Otherwise the whole notion of video or photographic “evidence” is going to go out the window, along with all the law and precedent that’s been built up on it for decades, casting us completely adrift in a sea of post-truth.

                                    1. 6

                                      I also run my own DNS server, but I prefer to maintain just the master. I pay ~$15/yr to outsource the slaves to a third party company who specializes in such things, and I don’t have to worry as much if my VPS provider decides to go down for a few hours, etc. I get a more reliable DNS system, and I still get to maintain control, graph statistics, etc, to my heart’s content.
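
                                      The master-side config for that arrangement is small in BIND: allow zone transfers to the provider’s servers and notify them on changes. A sketch, with documentation addresses standing in for the provider’s slaves:

                                      zone "example.com" {
                                          type master;
                                          file "/etc/namedb/master/example.com.db";
                                          allow-transfer { 203.0.113.10; 203.0.113.11; };  # provider's slaves
                                          also-notify    { 203.0.113.10; 203.0.113.11; };
                                      };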

                                      Glad to see the discipline of self-hosting isn’t completely going the way of the dodo in this day and age!

                                      1. 2

                                        Any recommendations for a good third-party company for such outsourcing?

                                        I also run my own DNS. The main reason is that I run my own mail using https://mailinabox.email/, which has been a reasonably simple and pain-free experience. Paying someone to get better stability could be interesting.

                                        1. 3

                                          I have added nameservers from BuddyNS as my secondary DNS. For the moment I’m just using their free plan, since I’ve delegated to only one nameserver out of the 3 which are serving my zones, and the query count is low enough to keep me on the free plan.

                                          1. 1

                                            I loved BuddyNS, but I went over their query limit, and the only payment they accept is PayPal. I boycott PayPal after they stole $900 from me… I wish they would take other forms of payment.

                                          2. 3

                                            I asked for some recommendations online. My biggest requirements were a ‘slave only’ offering, DNSSEC/IPv6 support, and ‘not Dyn’ (I just can’t give Oracle money these days). With all that in mind, I ended up choosing dnsmadesimple.com (edit: looks like they’re $30/yr, not $15 as above. Mea culpa) It was seriously easy to get everything set up (less than 20 minutes!) and now I don’t have to worry about what happens when my master goes down.

                                            1. 1

                                              Do you mean dnsmadeeasy.com or do you mean dnsimple.com?

                                              dnsmadesimple.com doesn’t exist

                                              1. 2

                                                My deepest apologies, this is what I get for Internetting when I’m about four cups of coffee short.

                                                dnsmadeeasy.com is the correct one.

                                            2. 3

                                              Hello everyone! This is my first post. :)

                                              I’m Vitalie from LuaDNS. We don’t offer slaves right now (only AXFR transfers), but if you don’t mind fiddling with git, you can add your BIND zone files to a git repository and push them to us via GitHub/Bitbucket/YourRepo. You can keep using your DNS servers for redundancy as slaves.

                                              You get backups via git and free Anycast DNS for 3 zones. :)

                                              Shameless Plug

                                            3. 1

                                              Interesting - that’s not a bad idea.

                                              If I were a corp I wouldn’t want this method, but for the single user, the investment has been well worth the pay-off - even if I decide to go with a vendor in future, I’ll understand what I’m paying for.

                                            1. 1

                                              Not sure why they don’t have a paranoid security mode setting. With Matrix E2E you have to accept the keys of everyone and every device they’re chatting with and if someone new or a new device joins a group chat you can’t even send a message without a warning and accepting the keys of the new devices/users.

                                              I know they want to be user-friendly and this is above the average person’s head, but that’s the trade-off. At least let advanced users have more security.

                                              1. 5

                                                I am not going to buy intel s̵t̵u̵f̵f̵ shit again. I have been talking to my friends and relatives about it too. This is just too big a screw-up for any company to get away with.

                                                1. 2

                                                  That’s a poor attitude to have. Every CPU has bugs. No modern fast CPU is going to be fully immune to all of these bugs. We made mistakes in the design of modern CPUs because we wanted speed.

                                                  1. 5

                                                    Poor attitude? My comment, much like the post, refers to the company. This is a big corp with a highly hierarchical structure and a thorough process for getting its products to market. Intel was doing it on purpose, make no mistake. Lay the blame for poor attitude where it belongs.

                                                    “Some VW executives probably wish a problem with their brake controller software has been discovered at the same time,”

                                                  2. 1

                                                    Me too. Intel really needs a credible competitor.

                                                    I delayed all my hardware purchases in the hope that the next CPU designs address these and similar bugs, that the mitigations for Rowhammer work reliably in RAM chips and that the availability of GPUs improves. After that AMD will get my money.

                                                  1. 1

                                                    ugghhhhhhh this is painful.

                                                    I have a ton of VMs at work on Intel(R) Xeon(R) CPU E5-2699 v3 CPUs. None of the guests show PCID support.

                                                    VMware doesn’t have a good way to show CPU flags, so you have to ssh in and run this:

                                                    $ vim-cmd hostsvc/hosthardware
                                                    ...
                                                    cpuFeature = (vim.host.CpuIdInfo) [
                                                       (vim.host.CpuIdInfo) {
                                                          level = 0,
                                                          vendor = <unset>,
                                                          eax = "0000:0000:0000:0000:0000:0000:0000:1111",
                                                          ebx = "0111:0101:0110:1110:0110:0101:0100:0111",
                                                          ecx = "0110:1100:0110:0101:0111:0100:0110:1110",
                                                          edx = "0100:1001:0110:0101:0110:1110:0110:1001"
                                                       },
                                                    ...

                                                    This site (http://www.felixcloutier.com/x86/CPUID.html) says PCID is bit 17 of ECX (for CPUID leaf 1).

                                                    It’s a zero. So no support? Greaaaaaaaaaaaaat.
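
                                                    For comparison, on a Linux guest the same check is a one-liner, since exposed flags show up in cpuinfo:

                                                    grep -wo pcid /proc/cpuinfo | sort -u    # prints "pcid" if the guest sees it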

                                                    1. 8

                                                      I appreciate the move, but “we’re paying wages based on a place and we found out it’s kind of arbitrary, so we now pay wages based on another, even more arbitrary place” is a weird argument.

                                                      1. 25

                                          Not more than “we want to pay you less because you currently live in a cheaper place”, as if any company has any business dictating what my appropriate standard of living should be.

                                          I somewhat disagree with San Francisco being another arbitrary place. It is probably the most expensive city with a significant number of well-paid developers, which seems to be why they picked it.

                                                        1. 7

                                                          They spend quite a bit of the blog post arguing that picking a place for a distributed company is a little arbitrary. Then they pick another place. They could have just placed themselves on the wage scale at the price where they want to be.

                                            That’s independent of why San Francisco wages are high. It’s just as much a place as Chicago is.

                                                          To make it clear: the argument amuses me, nothing more, nothing less.

                                            I’m fully on board with the whole “wage depends on where you live, not the value you bring” stuff being completely off; I think the freedom to choose a different place of living, including for financial reasons, is important. Everyone talking about how “my employees should think business” and then pulling stuff like this is not practicing what they preach.

                                                          1. 6

                                                            We were sold the line that in the Real World, one’s salary is reflective of the value they bring to the company.

                                                            Then remote work enters the picture, along with the opportunity for employees to take part in arbitrage, and the line suddenly changes to talk about standard of living and other nonsense. It struck me as odd how quickly the Real World changed once employees had the potential for an upside.

                                                          2. 9

                                                            I don’t think they’ve now chosen an arbitrary place. Remote work is steadily gaining in popularity. Bay area companies pay the most, and make their salaries increasingly available (or within 10%) to remote devs. Basecamp is not picking a city out of a hat, they’re putting themselves at the top of the American market they’re competing in. It used to be that the market rate for remote work included a location adjustment, but the market is moving. (Moving slowly and incompletely, of course, as wages are illiquid and sticky.)

                                                            1. 1

                                                              I would expect to see compensation regress towards the mean in a national or international labor market. If the supply of labor changes without a change in the demand, wages should decrease.

                                                              1. 2

                                                                There’s a bunch of factors and I tried not to nerd-snipe myself. I’d predict that on the balance that there’s enough increasing demand to pull up salaries outside of the bay area, but I didn’t run the numbers.

                                                                1. 1

                                                                  Great list of factors in the third tweet.

                                                                2. 1

                                                                  Sure - but this isn’t “the market”, it’s a founder-controlled company.

                                                                  The decisions are informed by the market, but not controlled by it.

                                                                  1. 1

                                                                    I would expect to see a decrease in compensation not because the market controls market actors but because free-ish markets tend towards economic equilibrium. I wasn’t referring directly to the actions taken by Basecamp but instead to “…the market is moving” in the parent.

                                                              2. 4

                                                                I’m with you. It’s nonsense trading place for place. I’ll add they have the better method built right into this article. Let me put it simply:

                                                                Goal: Pay workers really well.

                                                                Problem: Industry pays based on location. Capitalists also try to push wages down.

                                                                Solution: Look at pay ranges in IT, identify a pay rate for various job positions that meets their goal for baseline, and set starting pay for those positions to those points.

                                                                Done! Didn’t even need anything about location to pick a nice pay rate for programmers. Just need the national (or global) statistics on pay. They already did this by picking a number in the high end. They can finish by dropping the location part. Result is still the same.

                                                                Personally, though, I find the approaches that combine cost of living with a base pay for a position to be interesting. Example here. They may be fairer, depending on how one looks at it, in terms of what people keep after basic bills in exchange for their work. I’m far from decided on that topic. For most businesses’ goals, getting talent in areas with a lower cost of living lets them invest the savings back into the business. That can be a competitive advantage: more people getting stuff done, or better tools for those they have. If they don’t need more programmers, quite a bit of QA, deployment, and marketing can be bought with the savings from a few programmers in cheaper areas versus expensive ones.

                                                                1. 1

                                                                  Goal: Pay workers really well.

                                                                  I don’t think this is the real goal. The goal is more likely to boost reputation and attract the best workers.

                                                                  Goal: Happy (productive) and skilled workers.

                                                                  Actually, even then I don’t think it is right; if a company could operate effectively without staff, it would.

                                                                  1. 2

                                                                    Their workers were already happy and skilled. Certainly a top priority for them. Although, the author writes as if having core principles about business on top of that. Putting their beliefs in practice to set an example is also a goal.

                                                                    I’m just using pay because it’s an objective value that can be measured. They wanted that value to go up. I proposed a different method to make it go up.

                                                                2. 3

                                                                    If they don’t use SF as their template, they miss out on anyone living there as a potential employee, as they’ve priced themselves out.

                                                                  1. 3

                                                                    Honestly, Basecamp doesn’t feel like the company to me that would actually care that much about that. They’ve managed to be highly successful without.

                                                                    1. 1

                                                                      Really? Basecamp is all about making the best product possible. It’s not about SF per se; SF just happens to be the top of the market for developer pay. They explain in the article:

                                                                      But in what other part of the business do we look at what we can merely get away with? Are we trying to make the bare minimum of a product we can get away selling to customers? Are we looking to do the bare minimum of a job marketing our business? No.

                                                                      Do better than what you can get away with. Do more than the bare minimum. Don’t wait for the pressure to build. Don’t wait for the requests to mount. The best time to take a step forward is right now.

                                                                      1. 2

                                                                        I read the article. But if your point is “top of the market”, just say “top of the market” and be done with it.

                                                                        IMHO, Basecamp is pretty good at giving their employees a fair share of their successes, and that’s fine. SF or not.

                                                                  2. 2

                                                                    I believe the logic here was “the place distinction is arbitrary, so we’ll take the most expensive place so that people can go anywhere with ease”

                                                                  1. 0

                                                                    You don’t want to manually write makefiles. Use Autotools instead: https://autotools.io/index.html

                                                                    1. 14

                                                                      Why not? It’s completely fine to write “simple” makefiles for “simple” projects. I think the musl makefile is a good example of a not-so-simple but still simple makefile.

                                                                      To me, autotools-generated makefiles are hell to debug, just slightly less hellish than debugging cmake.

                                                                      1. 5

                                                                        The musl makefile is one of the cleanest production makefiles I’ve ever seen. But I note even its opening comment says, “This is how simple every makefile should be… No, I take that back - actually most should be less than half this size.”

                                                                        I count, at least, 3 languages used:

                                                                        1. GNU dialect of make
                                                                        2. shell
                                                                        3. sed

                                                                        And hacks like this:

                                                                        obj/musl-gcc: config.mak
                                                                        	printf '#!/bin/sh\nexec "$${REALGCC:-$(WRAPCC_GCC)}" "$$@" -specs "%s/musl-gcc.specs"\n' "$(libdir)" > $@
                                                                        	chmod +x $@
                                                                        
                                                                        obj/%-clang: $(srcdir)/tools/%-clang.in config.mak
                                                                        	sed -e 's!@CC@!$(WRAPCC_CLANG)!g' -e 's!@PREFIX@!$(prefix)!g' -e 's!@INCDIR@!$(includedir)!g' -e 's!@LIBDIR@!$(libdir)!g' -e 's!@LDSO@!$(LDSO_PATHNAME)!g' $< > $@
                                                                        	chmod +x $@
                                                                        

                                                                        Local legend @andyc of Oil Shell fame pushes the idea that Shell, Awk, and Make Should Be Combined. IMHO, the musl example is persuasive empirical evidence for his position.

                                                                        (I’m hoping @stefantalpalaru is being sarcastic about Autotools and we’re all falling to Poe’s Law.)

                                                                        1. 2

                                                                          (I’m hoping @stefantalpalaru is being sarcastic about Autotools and we’re all falling to Poe’s Law.)

                                                                          No, I’m not. I’ve worked with hand written Makefiles, Autotools and CMake on complex projects and I honestly think that Autotools is the lesser of all evils.

                                                                          Local legend @andyc of Oil Shell fame

                                                                          Now I hope you’re the one being sarcastic. Who exactly uses the Oil Shell?

                                                                          1. 3

                                                                            Now I hope you’re the one being sarcastic.

                                                                            I was not being sarcastic. @andyc’s Oil Shell posts consistently do well in voting here.

                                                                            Who exactly uses the Oil Shell?

                                                                            Who uses a less than a year old shell that explicitly isn’t for public use yet? I’m hoping very few people.

                                                                            What does the number of Oil Shell users have to do with his argument?

                                                                      2. 7

                                                                        Err. Last time I touched autotools it was a crawling horror.

                                                                        Makefiles are fine. Just don’t write ones that call other makefiles (recursive make considered harmful and all that).

                                                                        1. 3

                                                                          Just don’t write ones that call other makefiles (recursive make considered harmful and all that).

                                                                          Clearly someone needs to write “Make: The Good Parts” 😂

                                                                          1. 2

                                                                            Isn’t non-recursive make also considered harmful?

                                                                            1. 2

                                                                              I think the ideal is a makefile that includes parts from all over your source tree, so that there’s one virtual Makefile (no recursive make invocation O(n^2) issues) but changes can be made locally. Best of both worlds!
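
                                                                              A minimal sketch of that pattern, with illustrative file names: each subdirectory contributes a fragment that appends to a shared variable, and only the top-level Makefile is ever invoked:

                                                                              # Makefile -- the only file make is ever invoked on
                                                                              SRCS :=
                                                                              include src/module.mk
                                                                              include lib/module.mk

                                                                              OBJS := $(SRCS:.c=.o)

                                                                              app: $(OBJS)
                                                                              	$(CC) -o $@ $(OBJS)

                                                                              # src/module.mk -- appends sources with paths relative to the top
                                                                              SRCS += src/main.c src/util.c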

                                                                          2. 5

                                                                            I’m no expert on Make or Autotools, but my response to building automation on top of an automation tool is: “Please, god, no!”

                                                                            If your automation tool lacks the expressiveness to automate the problem you’re solving, the solution should never be to bolt another automation tool on top of it. It’s time to start over.

                                                                            1. 2

                                                                              I’m going to take a guess that you’re not a DevOps engineer.

                                                                              1. 1

                                                                                “Lets just use another layer of automation!” is the DevOps version of solving every problem with another layer of indirection clearly! :)

                                                                              2. 1

                                                                                But why not? One tool (cmake, meson) checks dependencies and works with high-level concepts such as “binary” and “library”. The other tool (make, ninja) is low-level and builds individual files. It’s a sane separation of concerns.

                                                                                Make, however, tries to be high-level (at least GNU make; it even has built-in Scheme), but it is not high-level enough, and it could be better at the low level (that’s why ninja was invented).

                                                                                Monolithic build tools like Bazel might handle invalidation better, but this class of tools is not explored enough (the existing tools are designed for Google’s and Facebook’s specific use cases).

                                                                              3. [Comment removed by author]

                                                                                1. 3

                                                                                  Autotools is terrible.

                                                                                  Yes, but all the alternatives are worse.

                                                                                2. 1

                                                                                  Disagree

                                                                                  https://varnish-cache.org/docs/2.1/phk/autocrap.html

                                                                                  And look how “complicated” this configure script is, which doesn’t take 60 seconds to run:

                                                                                  https://github.com/bsdphk/Ntimed/blob/master/configure

                                                                                1. 3

                                                                                  I’m very much confused about why this has become such a big deal. Underclocking a device with a faulty battery to extend the device’s useful life seems like a no-brainer.

                                                                                  1. 3

                                                                                    They shouldn’t hide this information from the user. Give me a warning or alert. Let me know I can restore peak performance by purchasing a new battery. The way Apple hides battery diagnostic info is crazy.

                                                                                  1. 3

                                                                                    This is just bloody ridiculous! Why would any org not have a purchasing requirement that prohibits buying anything whose license forbids performance testing? Especially in government settings, where things are supposed to be subject to public disclosure through the Freedom of Information Act and the like.

                                                                                    Can you imagine going to a restaurant where, as a condition of being served, you agreed not to write Yelp reviews?! How could Oracle not only survive but thrive with such poor numbers, and an explicit acknowledgement from legal that they know they probably suck against the competition?! Unbelievable!

                                                                                    P.S. Which cloud provider has this clause?! Asking for a friend.

                                                                                    1. 2

                                                                                      Oracle probably had a better response time for the C-level exec to come to your office and grovel for forgiveness when the software breaks.

                                                                                    1. 11

                                                                                      This is also why when Wisconsin Circuit Courts moved from “unnamed commercial database” to Postgres, Kevin Grittner couldn’t say what the previous poorly-performing software was. Although as it was gov’t I assume it would have been easy to get the data on who was being paid for licensing.

                                                                                      https://www.postgresql.org/message-id/46D70B5C.EE98.0025.0%40wicourts.gov

                                                                                      1. 2

                                                                                        A presentation that Grittner gave a few years later suggests that it was Sybase: http://www.pgcon.org/2009/schedule/events/129.en.html

                                                                                      1. 3

                                                                                        @feld@bikeshed.party - my own instance running on Mastodon-compatible Pleroma. Any BSD folk wanting to bikeshed with me in local are welcome to register. :-)

                                                                                        1. 1

                                                                                          No remote-follow? It looks like I need to create an account to try to follow you at https://bikeshed.party/@feld.

                                                                                          1. 1

                                                                                            Of course you can remote follow. You don’t need to click a Mastodon “remote follow” button to follow people in the Fediverse. Just search for me in your client: @feld@bikeshed.party

                                                                                            1. 1

                                                                                              Yeah I eventually figured that out :) o/