1. 1

    often if i read about something invented in linuxland, i have to think about plan9, and how well some of its concepts would fit today's distributed computing.

    1. 3

      How does that apply here? How did Plan9 do it differently?

      (FWIW, X is not a Linux invention.)

      1. 1

        well, you can compose your environment: take the filesystem from one system, the processing power of another, and something else as the terminal handling input/output, all over a rather simple network protocol (9P).

    1. 6

      Electricity usage is a huge concern even within the cryptocurrency community, and there is a lot of work going towards more energy-efficient solutions. However, proof-of-work is still the de facto method. At Merit we still use PoW, but I chose a memory-bound algorithm called Cuckoo Cycle, which is more energy efficient since it’s memory-bandwidth bound. I hope to move away from proof-of-work completely in the future, but it’s not easy to get the same properties. In some ways Merit is already half PoW and half PoS (Proof-of-Stake) via our Proof-of-Growth (PoG) algorithm, so we are halfway there.

      Proof-of-Work is fascinating because it’s philosophically the opposite of fiat money. Fiat money is one of the few things in the world where you can expend less effort and produce more of it. Cryptocurrencies with PoW are the opposite: you can only produce them in proportion to the effort expended.
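
      (For anyone unfamiliar with how proof-of-work looks in code, here is a minimal, purely illustrative sketch of the generic hash-below-target loop; Cuckoo Cycle itself works differently, searching for cycles in a large graph so that memory bandwidth rather than raw hashing speed is the bottleneck.)

      package main

      import (
          "crypto/sha256"
          "encoding/binary"
          "fmt"
          "math/big"
      )

      // mine searches for a nonce whose SHA-256 hash, read as a big integer,
      // falls below the difficulty target.
      func mine(header []byte, target *big.Int) uint64 {
          buf := make([]byte, len(header)+8)
          copy(buf, header)
          for nonce := uint64(0); ; nonce++ {
              binary.LittleEndian.PutUint64(buf[len(header):], nonce)
              h := sha256.Sum256(buf)
              if new(big.Int).SetBytes(h[:]).Cmp(target) < 0 {
                  return nonce
              }
          }
      }

      func main() {
          // Easy demo target: any hash with at least 16 leading zero bits.
          target := new(big.Int).Lsh(big.NewInt(1), 256-16)
          fmt.Println("found nonce:", mine([]byte("block header"), target))
      }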

      1. 2

        How much more memory efficient is Merit (on the scale of the top 100 countries electricity consumption)?

        The article points out that ASIC miners have found ways of solving algorithms that have previously been thought to be resistant to a bespoke hardware solution.

        Consuming the same amount of electricity as a large’ish country is certainly fascinating.

        1. 4

          Warning! this will be a bummer reply; nothing I say here will be uplifting…

          Notice, of course, that the difference between the #1 country and the #2 country is large. It likely follows Zipf’s law. The issue with ASICs is that they are not easy to acquire, so insiders get access to them first and have a huge advantage. That’s anathema to the goal of having anyone download the software and mine.

          In the scheme of things, the amount of electricity used to mine cryptocurrencies pales in comparison to the amount of electricity wasted on countless other things. We should just acknowledge that there is something fundamentally wrong with the global economic system that allows for gross externalities that aren’t accounted for. And that there is such a gross disparity of wealth where some countries have such excess capacity for electricity while others struggle with brownouts and blackouts every day.

          Global warming itself is an incredibly complex problem. Using a slow scripting language for your software? How much hardware are you wasting running it at scale? Buying a Tesla? Too bad your electricity is likely dirty, and the production caused 5 years’ worth of the CO2 a normal car puts out. Switching to solar and wind? Too bad the cleaner air will let more sunlight reach the earth, heating it up faster; even if we stopped now, we have decades of warming built in, and a cleaner atmosphere accelerates it.

          Global warming is such an insanely difficult, complex, and urgent problem that we are missing the forest for the trees.

          Cryptocurrencies are not tackling the problem of global warming, but neither are most technologies we create every day. I would love to hear how many people on Lobsters are tackling global warming head-on. I suspect almost zero. And isn’t that just the most depressing thing? It is for me; I think about this every day when I look at my children.

          EDIT: Holy poop, I was right, totally Zipf’s law: https://en.wikipedia.org/wiki/List_of_countries_by_electricity_consumption
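
          (A toy illustration of that shape: under Zipf’s law with exponent 1, the country at rank k consumes roughly 1/k of what the rank-1 country does. The rank-1 figure below is a made-up placeholder, not real data.)

          package main

          import "fmt"

          func main() {
              top := 6000.0 // hypothetical rank-1 consumption in TWh/year (placeholder, not real data)
              for rank := 1; rank <= 5; rank++ {
                  fmt.Printf("rank %d: ~%.0f TWh/year\n", rank, top/float64(rank))
              }
          }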

          1. 9

            NB: this may be ranty ;)

            In the scheme of things, the amount of electricity used to mine cryptocurrencies pales in comparison to the amount of electricity wasted on countless other things.

            how about not doing things which currently have no value for society, apart from being an item for financial speculation, and which burn resources. that would be a start. i have yet to see a valid application of cryptocurrencies which really works. hard cash is still a good thing which works. it’s like voting machines: they may kinda work, but crosses made with a pen on paper are still the best solution.

            the electricity wasted on other things is due to shitty standby mechanisms and laziness. those things can be fixed. the “currency” part of “cryptocurrency” depends on wasting resources, which can’t be fixed.

            Global warming itself is an incredibly complex problem.

            so-so.

            Using a slow scripting language for your software? How much hardware are you wasting running at scale?

            see the fixing part above. fortunately most technology tends to get more efficient the longer it exists.

            Buying a Tesla? Too bad your electricity is likely dirty, and the production caused 5 years worth of CO2 a normal car puts out.

            yeah, well, don’t buy cars from someone who shoots cars into orbit.

            Switching to solar and wind? Too bad the air will be cleaner causing more sunlight to hit the earth heating it up faster because even stopping now, we have decades of warming built in, and that a cleaner atmosphere accelerates that warming.

            the dimming and the warming are two separate effects, though both are caused by burning things. cooling is caused by particles, while warming is caused by gases (CO2, CH4, …). there are some special cases like soot on the (ant)arctic ice, speeding up the melting. (cf. https://en.wikipedia.org/wiki/Global_cooling#Physical_mechanisms , https://en.wikipedia.org/wiki/Global_warming#Initial_causes_of_temperature_changes_(external_forcings) )

            Cryptocurrencies are not tackling the problem of Global Warming, but so aren’t most technologies we are creating every day. I would love to hear how many people on Lobsters are tackling global warming head on? I suspect almost zero. And isn’t that just the most depressing thing? It is for me, I think about this every day when I look at my children.

            as global warming doesn’t have a single cause, there isn’t much one can do head-on. as with everything, there’s a spectrum here. some ideas which would help:

            • don’t fly (less CO2).
            • buy local food when possible, not fruit from around the globe in midwinter. don’t eat much meat (less CO2, CH4, N2O).
            • use electricity from renewable sources (less CO2).

            those things would really help if done on a larger scale, and aren’t too hard.

            1. 2

              how about not doing things which have currently no value for society, except from being an item for financial speculation, and burning resources. that would be a start. i still have to see a valid application of cryptocurrencies which really works.

              Buying illegal goods through the internet without the risk of being caught via the financial transaction (Monero, and probably Bitcoin with coin tumblers).

              1. 4

                mind that i wrote society: one valid use case is drugs, which shouldn’t be illegal but should be sold by reliable, quality-controlled suppliers. i think other illegal things are illegal for a reason. additionally, i’d argue it’s risky to mail-order illegal things to your doorstep.

                1. 2

                  cryptocurrencies solve a much harder problem than hard cash, which is that they have lowered the cost of producing non-state money. Non-state money has existed for thousands of years, but this is the first time in history you can trade globally with it. While the US dollar may be accepted almost everywhere, that is not true for other forms of cash.

                  1. 4

                    but what is the real use case?

                    • if globalized trade continues to exist, so will the classic ways of payment. cryptocurrencies are only useful in this case if you want to do illegal things. there may be a use case in oppressed countries, but people there tend to have other problems than buying things somewhere in the world.

                    • if it ceases to exist, one doesn’t need a cryptocurrency to trade anywhere in the world, as there is no trade.

                    i’m not a huge fan of the current state of the banking system, but it is a rather deep local optimum. it bugs me that i have to pay transaction fees, but that’s the case with cryptocurrencies, too. i just think that while theoretically elegant, cryptocurrencies do more harm than good.

                    anecdote: years ago, i paid for a shell account by putting money in an envelope and sending it via mail ;)

                    1. 2

                      Cryptocurrencies are a transvestment from centralized tech to decentralized. It’s not what they do, but how they do it that’s different. It’s a technology that allows the private sector to invest in decentralized tech, where in the past it had no incentive to do so. Since the governments of the world have failed so miserably to invest in decentralized technology over the last 20 years, this is the first time I can remember where the private sector can contribute to building decentralized technology. Note that cryptocurrencies are behind investments in decentralized storage, processing, and other solutions which, before the blockchain, would have been charity cases.

                      The question you can ask is, why not just stick with centralized solutions? I think the argument is a moral one and about power to the people, vs to some unaccountable 3rd party.

                      1. 1

                        It’s a technology that allows the private sector to invest in decentralized tech, where in the past they had no incentive to do so.

                        i still don’t see exactly why cryptocurrencies are required for investment in decentralized technology. we have many classic systems which are decentralized: the internet (the phone network before that), the electricity grid, the water supply, roads, etc. why are cryptocurrencies required for “modern” decentralized systems? it just takes multiple parties who decide that it is a good solution to run a distributed service (like e-mail). how it is paid for is a different problem. one interesting aspect is that the functionality can be tightly coupled with payments in blockchainy systems; i’m not convinced that is reason enough to use it. furthermore, some things can’t be done well due to the CAP theorem, so centralization is the only solution in those cases.

                        Note cryptocurrencies are behind investments of decentralized storage, processing, and other solutions, where before the blockchain, they would have been charity cases.

                        I’d say that the internet needs more of the “i run it because i can, not because i can make money with it” spirit again.

                        1. 1

                          i still don’t see exactly where the cryptocurrencies are required for investment in decentralized technology.

                          You are absolutely right! It isn’t a requirement. I love this subject by the way, so let me explain why you are right.

                          we have many classic systems which are decentralized: internet (phone before that), electricity grid, water supply, roads, etc. why are cryptocurrencies required for “modern” decentralized systems

                          You are absolutely right here. In the past, our decentralized systems were developed and paid for by the public sector. The private sector, until now, failed to create decentralized systems. The reason we need cryptocurrencies for modern decentralized systems is that we don’t have the political capital to create and fund them in the public sector anymore.

                          If we had a functioning global democracy, we could probably create many systems in the spirit of “i run it because i can, not because i can make money with it”.

                          That spirit died during the great privatization of computing in the mid 80s, and the privatization of the internet in the mid 90s.

              2. 2

                I love rants :-) Let’s go!

                “currency” part of “cryptocurrency” is to waste ressources, which can’t be fixed.

                Some people value non-state globally tradeable currencies. Google alone claims to have generated $238 billion in economic activity from their ads and search. https://economicimpact.google.com/ . The question is, how much CO2 did that economic activity create? Likely far greater than all cryptocurrencies combined. But that’s just my guess. It’s not an excuse, I’m just pointing out we are missing the forest for the trees. People follow the money, just as google engineers work for google because the money is there from ads, many people are working on cryptocurrencies because the money is there.

                see the fixing part above. fortunately most technology tends to get more efficient the longer it exists.

                While true, since our profession loves pop-culture, most technologies are replaced with more fashionable and inefficient ones the longer they exist. Remember when C people were claiming C++ was slow? I do.

                the dimming and warming are two separate effects, though both are caused by burning things.

                They are separate effects that have a complex relationship with our models of the earth warming. Unfortunately, even most well-meaning climate advocates don’t acknowledge dimming and that it’s not as simple as changing to renewable resources since renewables do not cause dimming, and god knows we need the dimming.

                those things would really help if done on a larger scale and aren’t too hard.

                Here is my honest opinion: we should have done this 30 years ago, when it wasn’t too late. I was a child 30 years ago. The previous generation handed me this predicament on a silver platter. I do my part: I don’t eat meat because of global warming, I rarely use cars, I use public transport as much as possible, I work from home as much as possible, etc.

                But I do these things knowing it’s too late. Even if we stopped dumping CO2 into the atmosphere today, we have decades of warming built in that will likely irreparably change our habitat. Even the IPCC assumes we will geoengineer our way out with some magical unicorn technology that hasn’t been created yet.

                I do my part not because I think they will help, but because I want to be able to look at my children and at least say I tried.

                I think one of my next software projects will be helping migrants travel safely, because one of the biggest tragedies and sources of human suffering resulting from climate change has been the refugee crisis, and it is only going to grow.

                1. 2

                  Some people value non-state globally tradeable currencies. Google alone claims to have generated $238 billion in economic activity from their ads and search. https://economicimpact.google.com/ . The question is, how much CO2 did that economic activity create? Likely far greater than all cryptocurrencies combined. But that’s just my guess. It’s not an excuse, I’m just pointing out we are missing the forest for the trees. People follow the money, just as google engineers work for google because the money is there from ads, many people are working on cryptocurrencies because the money is there.

                  i won’t refute that ads are a waste of resources, i just don’t see why more resources need to be wasted on things which have no use except for speculation. i hope we can do better.

                  While true, since our profession loves pop-culture, most technologies are replaced with more fashionable and inefficient ones the longer they exist. Remember when C people were claiming C++ was slow? I do.

                  Javascript has gotten more efficient by orders of magnitude. Hardware is still getting more efficient. There is always room for improvement. As you’ve written, people go where the money is (or can be saved).

                  They are separate effects that have a complex relationship with our models of the earth warming. Unfortunately, even most well-meaning climate advocates don’t acknowledge dimming and that it’s not as simple as changing to renewable resources since renewables do not cause dimming, and god knows we need the dimming.

                  But I do these things knowing it’s too late. Even if we stopped dumping CO2 in the atmosphere today, we have decades of warming built in that will likely irreparably change our habitat.

                  Dimming has an effect. As a reason not to switch to renewable energy it isn’t a good argument. Stopping pumping more greenhouse gases into the atmosphere would be a good start; they tend to be consumed by plants.

                  […] we will geoengineer our way with some magical unicorn technology that hasn’t been created yet.

                  let’s not do this, humans have a tendency to make things worse that way ;)

                  1. 1

                    i hope we can do better.

                    I don’t think our economic system is set up for that.

                    Javascript has gotten more efficient in the order of magnitudes. Hardware is still getting more efficient. There is always room for improvement. As you’ve written, people go where the money is (or can be saved).

                    I think because Moore’s law is now dead, things are starting to swing back towards efficiency. I hope this trend continues.

                    Dimming has an effect. As reason not to switch to renewable energy it isn’t a good argument. Stopping to pump more greenhouse gasses would be a good start, they tend to be consumed by plants.

                    I didn’t provide dimming as a reason not to switch to renewables; I brought it up because JUST switching to renewables will doom us. As I’ve said, there are decades of warming baked in; there is a lag with the CO2 we have already put in. Yes, we need to stop putting more in, but it’s not enough to just stop. And in fact, stopping and not doing anything else will doom us faster.

                    lets not do this, humans have a tendency to make things worse that way ;)

                    I totally agree. I don’t want countries to start launching nuclear weapons, for example. The only realistic thing that could possibly work is to do massive planting of trees, like I mean billions of trees need to be planted. And time is running out, because photosynthesis stops working at a certain temperature, so many places are already impossible to fix (iraq for example, which used to be covered in thick forests thousands of years ago).

                    1. 1

                      I don’t think our economic system is setup for that.

                      aren’t we the system? changes can begin small; it’s just that many attempts fail early, i suppose.

                      And in fact, stopping and not doing anything else will doom us faster.

                      do you have any sources for that?

                      The only realistic thing that could possibly work is to do massive planting of trees, like I mean billions of trees need to be planted. And time is running out, because photosynthesis stops working at a certain temperature, so many places are already impossible to fix (iraq for example, which used to be covered in thick forests thousands of years ago).

                      well, if the trend continues, greenland will have some ice-free space for trees ;) just stopping deforestation would be a good start though.

                      1. 1

                        aren’t we the system?

                        We did not create the system, we were born into it. Most people see it as reality rather than as a system that was designed.

                        do you have any sources for that?

                        https://www.sciencedaily.com/releases/2017/07/170731114534.htm

                        well, if the trends continues, greenland will have some ice-free space for trees ;) just stopping deforestation would be a good start though.

                        Sorry if I’m wrong, but do I sense a bit of skepticism about the dangers we face ahead?

              3. 5

                That was such a non-answer full of red herrings. He wanted to know what your cryptocurrency’s electrical consumption is. It’s positioned, like Bitcoin, as an alternative to centralized methods. The centralized methods running on strongly-consistent DB’s currently do an insane volume of transactions on cheap machines that can be clustered globally if necessary. My approach is a centralized setup with multiple parties involved checking each other. Kind of similar to how multinational finance already works, but with more specific, open protocols to improve on it. That just adds a few more computers for each party… individual, company, or country… that is involved in the process. I saw a diesel generator at Costco for $999 that could cover the energy requirements of a multi-national setup of my system that outperforms all crypto-currency setups.

                So, what’s the energy usage of your system, can I participate without exploding my electric bill at home (or generator), and, if not, what’s the justification of using that cryptosystem instead of improving on the centralized-with-checking methods multinationals are using right now that work despite malicious parties?

                1. 3

                  How much more memory efficient is Merit (on the scale of the top 100 countries electricity consumption)?

                  Sorry, that’s his question. I can answer it easily: it’s not on that scale. My interpretation of the question was that he was making a joke, which is why I didn’t answer it. If derek-jones was serious about that question, I apologize.

                  As I mentioned, the algorithm is memory bandwidth bound, I’m seeing half the energy cost on my rig, but I need to do more stringent measurements.

                  1. 1

                    More of a pointed remark than a joke. But your reply was full of red herrings to quote nickpsecurity.

                    If I am sufficiently well financed that I can consume 10 MW of power, then I will always consume 10 MW. If somebody produces more efficient hashing hardware/software, I will use it to generate more profit, not to reduce electricity consumption. Any system that contains a PoW component pushes people to consume as much electricity as they can afford.

                    1. 1

                      If somebody produces more efficient hashing hardware/software, I will use it to generate more profit, not reduce electricity consumption.

                      This is true for any resource and any technology in our global economic system.

                      I wasn’t trying to reply with red herrings, but to expand the conversation. It’s really interesting that people attack cryptocurrencies for wasting electricity when there is a bigger elephant in the room nobody seems to want to talk about. Everyone knows who butters their bread. Keep in mind I’m not defending wasting electricity, but focusing only on electricity is like, to use a computer analogy, focusing only on memory and creating garbage collection to deal with it, while ignoring other resources like sockets, pipes, etc. That’s why I like C++, because it solves the problem for ALL resources, not just one. We need a C++ for the real world ;-)

              4. 2

                I answered your question more directly, see response to nickpsecurity.

            1. 6

              i’ve had a recent experience: ubuntu 18.04 feels sluggish in some parts vs. a slackware install with more or less the same functionality:

              • xscreensaver unlocking: for some reason it takes half a second after i hit the return key until i see the desktop again on the ubuntu install.

              • there are weird lags everywhere (and no chance of debugging them with all the magic moving parts, a.k.a. “*kit”).

              • booting is slower, the initrd takes seconds until i can type in the passphrase to unlock the full disk encryption.

              maybe all the shiny new tools aren’t so great after all. if one looks at the default install size of slackware, it isn’t even lightweight at around 8G. it just doesn’t have all the stuff handed down to us by redhat running in the background, doing things.

              1. 3

                I would kill for Michael Larabel @ Phoronix to figure out some reliable user responsiveness tests for Linux distros.

                1. 1

                  Because it is very hard and takes a lot of time I’d wager. Few have the time, money or drive to do such a thing.

                  Update: the responsiveness issues I was having with Ubuntu 18.04 were resolved by not using a distro install based on Gnome. I was able to replicate responsiveness improvements w/ a Debian + LXDE install and a Lubuntu (Ubuntu + LXDE) install.

                  I use XMonad as my primary WM anyway, so I really just want a leaner bootstrap environment that “just works”.

              1. 13

                I think I understand where the author’s coming from, but I think some of his concerns are probably a bit misplaced. For example, unless you’ve stripped all the Google off your Android phone (which some people can do), Google can muck with whatever is on your phone regardless of how you install Signal. In all other cases, I completely get why Moxie would rather insist you install Signal via a mechanism that ensures updates are efficiently and quickly delivered. While he’s got a point on centralized trust (though a note on that in a second), swapping out Google Play for F-Droid doesn’t help there; you’ve simply switched who you trust. And in all cases of installation, you’re trusting Signal at some point. (Or whatever other encryption software you opt to use, for that matter—even if it’s something built pretty directly on top of libsodium at the end of the day.)

                That all gets back to centralized trust. Unless the author is reading through all the code they’re compiling, they’re trusting some centralized sources—likely whoever built their Android variant and the people who run the F-Droid repositories, at a bare minimum. In that context, I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users. Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers. Chances are honestly higher that you’ll be hacked by some random other app you put on your phone than that Google will opt to go after Signal on their end. Moxie’s point is that you’re better off trusting Signal and Google than some random APK you find on the Internet. And for the overwhelming majority of users, I think he’s entirely correct.

                When I think about something like Signal, I usually focus on, who am I attempting to protect myself from? Maybe a skilled user with GPG is more secure than Signal (although that’s arguable; we’ve had quite a few CVEs this year, such as this one), but normal users struggle to get such a setup meaningfully secure. And if you’re just trying to defend against casual snooping and overexcited law enforcement, you’re honestly really well protected out-of-the-box by what Signal does today—and, as Mickens has noted, you’re not going to successfully protect yourself from a motivated nation-state otherwise.

                1. 20

                  and cause irreparable harm to trust in Google from both users and developers

                  You have good points, except for this common refrain we should all stop repeating. These big companies were caught pulling all kinds of stuff on their users. They usually keep their market share and riches. Google was no different. If this was detected, they’d issue an apologetic press release saying either that it was a mistake in their complex distribution system or that the feature was for police with a warrant and was used accordingly or mistakenly. The situation shifts from “everyone ditch evil Google” to a more complicated one most users won’t take decisive action on. Many wouldn’t even want to think too hard about it, or would otherwise assume mass spying at the government or Google level is going on. It’s something they tolerate.

                  1. 11

                    I think that trusting Google not to want to muck with Signal is probably honestly a safe bet for most users.

                    The problem is that moxie could put things in the app if enough rubberhose (or money, or whatever) is applied. I don’t know why this point is frequently overlooked. These things are so complex that nobody could verify that the app in the store isn’t doing anything fishy. There are enough side-channels. Please stop trusting moxie, not because he has done something wrong, but because it is the right thing to do in this case.

                    Another problem: Signal’s servers could be compromised, leaking the communication metadata of everyone. That could be fixed with federation, but many people seem to be against federation here, for spurious reasons. That federation & encryption can work together is shown by matrix, for example. I grant that it is rough around the edges, but at least they try, and for now it looks promising.

                    Finally (imho): good crypto is hard, as the math behind it has hard constraints. Sure, the user interfaces could be better in most cases, but some things can’t be changed without weakening the crypto.

                    1. 2

                      many people seem to be against federation here, for spurious reasons

                      Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there.

                      Also, remember how great federation worked out for xmpp/jabber when google embraced and then extinguished it? I sure do.

                      1. 2

                        Federation seems like a fast path to ossification.

                        I have been thinking about this. There are certainly many protocols that are unchangeable at this point but I don’t think it has to be this way.

                        Web standards like HTML/CSS/JS and HTTP are still constantly improving despite having thousands of implementations and different programs using them.

                        From what I can see, the key to stopping ossification of a protocol is to have a single authority and source of truth for the protocol. They have to be dedicated to making changes to the protocol and they have to change often.

                        1. 2

                          I think your HTTP example is a good one. I would also add SSL/TLS to that, as another potential useful example to analyze. Both (at some point) had concepts of versioning built into them, which has allowed the implementation to change over time, and cut off the “long tail” non-adopters. You may be on to something with your “single authority” concept too, as both also had (for the most part) relatively centralized committees responsible for their specification.

                          I think html/css/js are /perhaps/ a bit of a different case, because they are more documentation formats, and less “living” communication protocols. The fat clients for these have tended to grow in complexity over time, accreting support for nearly all versions. There are also lots of “frozen” documents that people still may want to view, but which are not going to be updated (archival pages, etc). These have also had a bit more of a “de facto” specification, as companies with dominant browser positions have added their own features (iframe, XMLHttpRequest, etc) which were later taken up by others.

                        2. 1

                          Federation seems like a fast path to ossification. It is much harder to change things without disrupting people if there are tons of random servers and clients out there. Also, remember how great federation worked out for xmpp/jabber when google embraced and then extinguished it? I sure do.

                          It may seem so, but that doesn’t mean it will happen. It has happened with xmpp, but xmpp had other problems, too:

                          • Not good for mobile use (some years back when messenger apps went big, but mobile connections were bad)
                          • A “kind-of-XML”, which was hard to parse (I may be wrong here)
                          • Reinventing of the wheel, I’m not sure how many crypto standards there are for xmpp

                          Matrix does some things better:

                          • Reference server and clients for multiple platforms (electron/web, but at least there is a client for many platforms)
                          • Reference crypto library in C (so bindings are easier and no one tries to re-implement it)
                          • Relatively simple client protocol (less prone to implementation errors than the streams of xmpp, imho)

                          The google problem you described isn’t inherent to federation. It’s more of a people problem: too many people being too lazy to set up their own instances, just using google’s, essentially forming a centralized network again.

                      2. 10

                        Maybe a skilled user with GPG is more secure than Signal

                        Only if that skilled user communicates solely with other skilled users. It’s common for people to reply in plaintext, quoting the whole encrypted message…

                        1. 3

                          And in all cases of installation, you’re trusting Signal at some point.

                          Read: F-Droid is for open-source software. No trust necessary. Though to be fair, even then the point on centralization still stands.

                          Yes, Google could replace your copy of Signal with a nefarious version for their own purposes, but that’d be amazingly dumb: it’d be quickly detected and cause irreparable harm to trust in Google from both users and developers.

                          What makes you certain it would be detected so quickly?

                          1. 5

                            “Read: F-Droid is for open-source software. No trust necessary”

                            That’s nonsense. FOSS can conceal backdoors if nobody is reviewing it, which is often the case. Bug hunters also find piles of vulnerabilities in FOSS, just as in proprietary software. People who vet the stuff they use have limits on skill, tools, and time that might make them miss vulnerabilities. Therefore, you absolutely have to trust the people and/or their software even if it’s FOSS.

                            The field of high-assurance security was created partly to address being able to certify (trust) systems written by your worst enemy. They achieved many pieces of that goal, but new problems still show up. Almost no FOSS is built that way. So it sure as hell can’t be trusted if you don’t trust those making it. Same with proprietary.

                            1. 3

                              It’s not nonsense, it’s just not an assurance. Nothing is. Open source, decentralization, and federation are the best we can get. However, I sense you think we can do better, and I’m curious as to what ideas you might have.

                              1. 4

                                There’s definitely a better method. I wrote it up with roryokane being nice enough to make a better-formatted copy here. Spoiler: none of that shit matters unless the stuff is thoroughly reviewed and proof sent to you by skilled people you can trust. Even if you do that stuff, the core of its security and trustworthiness will still fall on who reviewed it, how, how much, and if they can prove it to you. It comes down to trusting a review process by people you have to trust.

                                In a separate document, I described some specifics that were in high-assurance security certifications. They’d be in a future review process since all of them caught or prevented errors, often different ones. Far as assurance techniques, I summarized decades worth of them here. They were empirically proven to work addressing all kinds of problems.

                            2. 2

                              even then the point on centralization still stands.

                              fdroid actually lets you add custom repo sources.

                              1. 1

                                The argument in favour of F-Droid was twofold, and covered the point about “centralisation.” The author suggested Signal run an F-Droid repo themselves.

                            1. 1

                              Is there any well-known PGP alternative other than this? Based on history, I cannot blindly trust code that was written by one human being and is not battle-tested.

                              In any case, props to them for trying to start something. PGP does need to die.

                              1. 7

                                a while ago i found http://minilock.io/ which sounds interesting as a pgp alternative. i haven’t used it myself though.

                                1. 2

                                  Its primitives and an executable model were also formally verified by Galois using their SAW tool. Quite interesting.

                                2. 6

                                  This is mostly a remix, in that the primitives are copied from other software packages. It’s also designed to be run under very boring conditions: running locally on your laptop, encrypting files that you control, in a manual fashion (an attacker can’t submit 2^## plaintexts and observe the results), etc.

                                  Not saying you shouldn’t be ever skeptical about new crypto code, but there is a big difference between this and hobbyist TLS server implementations.

                                  1. 5

                                    I’m Enchive’s author. You’ve very accurately captured the situation. I didn’t write any of the crypto primitives. Those parts are mature, popular implementations taken from elsewhere. Enchive is mostly about gluing those libraries together with a user interface.

                                    I was (and, to some extent, still am) nervous about Enchive’s message construction. Unlike the primitives, it doesn’t come from an external source, and it was the first time I’ve ever designed something like that. It’s easy to screw up. Having learned a lot since then, if I was designing it today, I’d do it differently.

                                    As you pointed out, Enchive only runs in the most boring circumstances. This allows for a large margin of error. I’ve intentionally oriented Enchive around this boring, offline archive encryption.

                                    I’d love it if someone smarter and more knowledgeable than me had written a similar tool — e.g. a cleanly implemented, asymmetric archive encryption tool with passphrase-generated keys. I’d just use that instead. But, since that doesn’t exist (as far as I know), I had to do it myself. Plus I’ve become very dissatisfied with the direction GnuPG has taken, and my confidence in it has dropped.

                                    1. 2

                                      I didn’t write any of the crypto primitives

                                      that’s not 100% true, I think you invented the KDF.

                                      1. 1

                                        I did invent the KDF, but it’s nothing more than SHA256 applied over and over on random positions of a large buffer, not really a new primitive.
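
                                        (Roughly the shape of the idea described above, as a sketch; this is NOT the actual Enchive code, and the buffer size and iteration count are placeholders.)

                                        package main

                                        import (
                                            "crypto/sha256"
                                            "encoding/binary"
                                            "fmt"
                                        )

                                        // sketchKDF fills a large buffer from the passphrase, then repeatedly
                                        // hashes data at pseudo-random positions so the whole buffer has to
                                        // stay in memory (the memory-hard part).
                                        func sketchKDF(passphrase []byte, bufSize, iterations int) [32]byte {
                                            buf := make([]byte, bufSize)

                                            // Seed the buffer deterministically from the passphrase.
                                            digest := sha256.Sum256(passphrase)
                                            for i := 0; i < bufSize; i += 32 {
                                                digest = sha256.Sum256(digest[:])
                                                copy(buf[i:], digest[:])
                                            }

                                            // Walk the buffer at positions derived from the running digest.
                                            for i := 0; i < iterations; i++ {
                                                pos := binary.LittleEndian.Uint64(digest[:8]) % uint64(bufSize-32)
                                                h := sha256.New()
                                                h.Write(digest[:])
                                                h.Write(buf[pos : pos+32])
                                                copy(digest[:], h.Sum(nil))
                                                copy(buf[pos:], digest[:])
                                            }
                                            return digest
                                        }

                                        func main() {
                                            key := sketchKDF([]byte("correct horse battery staple"), 1<<20, 100000)
                                            fmt.Printf("derived key: %x\n", key[:])
                                        }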

                                  2. 6

                                    Keybase? Kinda?…

                                    1. 4

                                      It always bothers me when I see the update say it needs over 80 megabytes for something doing crypto. Maybe no problems will show up that leak keys or cause a compromise. That’s a lot of binary, though. I wasn’t giving it my main keypair either. So, I still use GPG to encrypt/decrypt text or zip files I send over untrusted mediums. I use Keybase mostly for extra verification of other people and/or its chat feature.

                                    2. 2

                                      Something based on nacl/libsodium, in a similar vein to signify, would be pretty nice. asignify does apparently use asymmetric encryption via cryptobox, but I believe it is also written/maintained by one person currently.

                                      1. 1

                                        https://github.com/stealth/opmsg is a possible alternative.

                                        Then there was Tedu’s reop experiment: https://www.tedunangst.com/flak/post/reop

                                      1. 11

                                        why do people have the need to use a framework for everything, like the BDD testing frameworks in this article? i really don’t see the value of it. it’s just another dependency to carry around, and i can’t just read and understand what is happening.

                                        what is gained by writing:

                                        Expect(resp.StatusCode).To(Equal(http.StatusOK))
                                        

                                        instead of

                                        if resp.StatusCode != http.StatusOK { 
                                            t.Fail() 
                                        }
                                        
                                        1. 11

                                          I don’t use that particular testing framework, but the thing I’d expect to gain by using it is better test failure messages. I use testify at work for almost precisely that reason. require.Equal(t, valueA, valueB) provides a lot of value, for example. I tried not to use any additional test helpers in the beginning, probably because we have similar sensibilities. But writing good tests that also have good messages when they fail got pretty old pretty fast.

                                          1. 3

                                            ok, i can see that good messages may help, though i’d still rather use t.Fatal/Fatalf/Error/Errorf, maybe paired with a custom type implementing error (admitting that it’s a bit more to type) if a custom DSL is the alternative :)

                                            testify looks interesting though!
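
                                            side-by-side, the two styles might look something like this (just a sketch; doRequest and the /health endpoint are made up for the example):

                                            package example_test

                                            import (
                                                "net/http"
                                                "testing"

                                                "github.com/stretchr/testify/require"
                                            )

                                            // doRequest is a made-up helper standing in for whatever produces the response.
                                            func doRequest(t *testing.T) *http.Response {
                                                t.Helper()
                                                resp, err := http.Get("http://localhost:8080/health")
                                                if err != nil {
                                                    t.Fatalf("request failed: %v", err)
                                                }
                                                return resp
                                            }

                                            // stdlib only: the failure message has to be written by hand.
                                            func TestStatusStdlib(t *testing.T) {
                                                resp := doRequest(t)
                                                if resp.StatusCode != http.StatusOK {
                                                    t.Fatalf("unexpected status: got %d, want %d", resp.StatusCode, http.StatusOK)
                                                }
                                            }

                                            // testify: same check, the got/want message is generated for you.
                                            func TestStatusTestify(t *testing.T) {
                                                resp := doRequest(t)
                                                require.Equal(t, http.StatusOK, resp.StatusCode)
                                            }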

                                            1. 4

                                              testify is nice because it isn’t really a framework, unless maybe you start using its “suite” functionality, which is admittedly pretty lightweight. But the rest of the library drops right into the normal Go unit testing harness, which I like.

                                              I did try your methods for a while, but it was just untenable. I eventually just stopped writing good failure messages, which I just regretted later when trying to debug test failures. :-)

                                              testify is a nice middle ground that doesn’t force you to play by their rules, but adds a lot of nice conveniences.

                                          2. 6

                                            The former probably gives a much better failure message (e.g. something like “expected value ‘200’ but got value ‘500’”, rather than “assertion failed”).

                                            That’s obviously not inherent to the complicated testing DSL, though. In general, I’m a fan of more expressive assert statements that can give better indications of what went wrong; I’m not a big fan of heavyweight testing frameworks or assertion DSLs because, like you, I generally find they badly obfuscate what’s actually going on in the test code.

                                            1. 4

                                              yeah, with the caveats listed by others, I sort of think this is a particularly egregious example of strange library usage/design. in theory, anyone (read: not just engineers) is supposed to be able to write a BDD spec. However, for that to be possible, it should be written in natural language. Behat specs are a good example of this: http://behat.org/en/latest/. But this one is just a DSL, which misses the point I think…

                                              1. 3

                                                However, for that to be possible, it should be written in natural language. Behat specs are a good example of this: http://behat.org/en/latest/. But this one is just a DSL, which misses the point I think…

                                                I’d say that the thing behat does is a real DSL (like, with a parser and stuff). The library from the article just has fancy-named functions which are a bit of a black box to me.

                                                Just a thought: One could maybe write a compiler for a behat-like language which generates stdlib Go-tests, using type information found in the tested package, instead of using interface{} and reflect. That’d be a bit of work though ;)

                                            1. 5

                                              this full-throttle tinfoily panic mode of some people right now: “move to hosted gitlab!!1 that will show ’em!!11”. i’m not anti-gitlab, but hosted gitlab has the same set of problems as github, like, for example, being bought by $EVILCOMPANY.

                                              if microsoft now decides there will be no more free repos, it’s ok! they can do with their property however they please (just like github could have before the acquisition). don’t bitch about the free lunch not tasting right. that is the deal if you use the resources of others for free.

                                              1. 3

                                                I think for most people, if gitlab took a similar turn, a self-hosted (or pay someone else to host it) OSS version of GitLab would be fine.

                                                People use gitlab.com because it’s hands-off, not because it’s the commercial version for free.

                                                1. 3

                                                  It’s not “that will show em” at all. No idea where that is being quoted from.
                                                  I can say my statement was: IF the MS acquisition bothered you (and there is enough historical precedent that it may reasonably do so for reasonable people), then note that Gitlab currently has 1-click repository migration from GitHub. In addition, it is also possible that Github may unilaterally sever that capability IF the migration becomes a flood. Ergo, if you are going to do it, do so now and don’t wait.

                                                  1. 1

                                                    it was a purposely overstated made-up-quote (easily spotted by the liberal use of “!!111”).

                                                    microsoft is an actor on the market and as a result does things to maximize profits. one only has to take that into account when choosing to use their services. i’m not overly happy with it either, but gitlab is an actor too and plays by the same rules, including the possibility of being acquired. just self-host, it’s not even hard; scaleway has prepared images for that, for example.

                                                    regarding the importing functionality: if they break the mechanisms to do that, i guess many other things won’t work as well, like bots acting on issues, etc. i don’t think they will break the whole ecosystem, as effectively that’s what they’ve paid for. maybe they’ll do that in the extended future, like twitter breaking their api for clients.

                                                  2. 2

                                                    Imagine what would happen when MSFT, after buying GH, also gets TravisCI, which i believe they will do :)

                                                    1. 2

                                                      It should also be quite a bit cheaper, afaik they never took VC money.

                                                  1. 12

                                                    Output should be simple to parse and compose

                                                    No JSON, please.

                                                    Yes, every tool should have a custom format that needs a badly cobbled-together parser (in awk or whatever) that will break once the format is changed slightly or the output accidentally contains a space. No, jq doesn’t exist, can’t be fitted into Unix pipelines, and we will be stuck with sed and awk until the end of times, occasionally trying to solve the worst failures with find -print0 and xargs -0.
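
                                                    (For the sceptics: a sketch of a typed pipeline stage in Go that reads newline-delimited JSON on stdin and prints one field per line, roughly what jq -r .name does; the “name” field is just an assumption for the example.)

                                                    package main

                                                    import (
                                                        "bufio"
                                                        "encoding/json"
                                                        "fmt"
                                                        "os"
                                                    )

                                                    func main() {
                                                        type record struct {
                                                            Name string `json:"name"`
                                                        }
                                                        sc := bufio.NewScanner(os.Stdin)
                                                        for sc.Scan() {
                                                            var r record
                                                            if err := json.Unmarshal(sc.Bytes(), &r); err != nil {
                                                                fmt.Fprintln(os.Stderr, "skipping malformed line:", err)
                                                                continue
                                                            }
                                                            fmt.Println(r.Name)
                                                        }
                                                        if err := sc.Err(); err != nil {
                                                            fmt.Fprintln(os.Stderr, err)
                                                            os.Exit(1)
                                                        }
                                                    }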

                                                    1. 11

                                                      JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).

                                                      In a JSON shell tool world you will have to spend time parsing and re-arranging JSON data between tools; as well as constructing it manually as inputs. I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).


                                                      Sidestory: several months back I had a co-worker who wanted me to make some code that parsed his data stream and did something with it (I think it was plotting related IIRC).

                                                      Me: “Could I have these numbers in one-record-per-row plaintext format please?”

                                                      Co: “Can I send them to you in JSON instead?”

                                                      Me: “Sure. What will be the format inside the JSON?”

                                                      Co: “…. it’ll just be JSON.”

                                                      Me: “But it what form? Will there be a list? Name of the elements inside it?”

                                                      Co: “…”

                                                      Me: “Can you write me an example JSON message and send it to me, that might be easier.”

                                                      Co: “Why do you need that, it’ll be in JSON?”

                                                      Grrr :P


                                                      Anyway, JSON is a format, but you still need a format inside this format. Element names, overall structures. Using JSON does not make every tool use the same format, that’s strictly impossible. One tool’s stage1.input-file is different to another tool’s output-file.[5].filename; especially if those tools are for different tasks.
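
                                                      (A hypothetical illustration of that last point: both shapes below are valid JSON, but a consumer written for one silently finds nothing in the other; the tool names and field layouts are invented for the example.)

                                                      package main

                                                      import (
                                                          "encoding/json"
                                                          "fmt"
                                                      )

                                                      // Tool A emits: {"files":[{"name":"a.txt","size":12}]}
                                                      type toolA struct {
                                                          Files []struct {
                                                              Name string `json:"name"`
                                                              Size int    `json:"size"`
                                                          } `json:"files"`
                                                      }

                                                      // Tool B expects: {"output":{"filenames":["a.txt"],"sizes":[12]}}
                                                      type toolB struct {
                                                          Output struct {
                                                              Filenames []string `json:"filenames"`
                                                              Sizes     []int    `json:"sizes"`
                                                          } `json:"output"`
                                                      }

                                                      func main() {
                                                          msg := []byte(`{"files":[{"name":"a.txt","size":12}]}`)

                                                          var a toolA
                                                          fmt.Println(json.Unmarshal(msg, &a), len(a.Files)) // <nil> 1: tool A's reader finds the file

                                                          var b toolB
                                                          fmt.Println(json.Unmarshal(msg, &b), len(b.Output.Filenames)) // <nil> 0: tool B's reader finds nothing, and no error either
                                                      }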

                                                      1. 3

                                                        I think that would end up being just as hacky as the horrid stuff we do today (let’s not mention IFS and quoting abuse :D).

                                                        Except that standardized, popular formats like JSON come with ecosystems of tools that solve most of the problems they can bring. Autogenerators, transformers, and so on come along whenever it’s a widely used data format. We usually don’t get this when random people create formats for their own use; we have to fully custom-build the part handling the format rather than adapt an existing one.

                                                        1. 2

                                                          Still, even XML, which had the best tooling I have used so far for a general-purpose format (XSLT and XSD first and foremost), was unable to handle partial results.

                                                          The issue is probably due to their history, as a representation of a complete document / data structure.

                                                          Even s-expressions (the simplest format of the family) have the same issue.

                                                          Now we should also note that pipelines can be created on the fly, even from binary data manipulations. So a single dictated format would probably impose too many restrictions, if you want the system to actually enforce and validate it.

                                                          1. 2

                                                            “Still, even XML”

                                                            XML and its ecosystem were extremely complex. I used s-expressions with partial results in the past. You just have to structure the data to make it easy to get a piece at a time. I can’t recall the details right now. Another format I used, trying to balance efficiency, flexibility, and complexity, was XDR. Too bad it didn’t get more attention.

                                                            “So a single dictated format would probably pose too restrictions, if you want the system to actually enforce and validate it.”

                                                            The L4 family usually handles that by standardizing on an interface description language, with all of it auto-generated. That works well enough for them. Camkes is an example.

                                                            1. 3

                                                              XML and its ecosystem were extremely complex.

                                                              It is coherent, powerful and flexible.

                                                              One might argue that it’s too flexible or too powerful, so that you can solve any of the problems it solves with simpler custom languages. And I would agree to a large extent.

                                                              But, for example, XHTML was a perfect use case. Indeed, to do what I did back then with XSLT, people now use Javascript, which is less coherent and way more powerful, and in no way simpler.

                                                              The L4 family usually handles that by standardizing on an interface, description language with all of it auto-generated.

                                                              Yes but they generate OS modules that are composed at build time.

                                                              Pipelines are integrated on the fly.

                                                              I really like strongly typed and standard formats but the tradeoff here is about composability.

                                                              UNIX turned every communication into byte streams.

                                                              Bytes byte at times, but they are standard, after all! Their interpretation is not, but that’s what provides the flexibility.

                                                              1. 4

                                                                Indeed, to do what I did back then with XSLT, people now use Javascript, which is less coherent and way more powerful, and in no way simpler.

                                                                While I am definitely not a proponent of JavaScript, computations in XSLT are incredibly verbose and convoluted, mainly because XSLT for some reason needs to be XML and XML is just a poor syntax for actual programming.

                                                                That, and the fact that my transformations worked fine with xsltproc but did just nothing in browsers, with no decent way to debug the problem, made me put XSLT away as an esolang — lots of fun for an afternoon, not what I would use to actually get things done.

                                                                That said, I’d take XML output from Unix tools and some kind of jq-like processor any day over manually parsing text out of byte streams.

                                                                1. 2

                                                                  I loved it when I was doing HTML and wanted something more flexible that machines could handle. XHTML was my use case as well. Once I was a better programmer, I realized it was probably an overkill standard that could’ve been something simpler, with a series of tools each doing their own little job. Maybe even different formats for different kinds of things. W3C ended up creating a bunch of those anyway.

                                                                  “Pipelines are integrated on the fly.”

                                                                  Maybe put it in the OS like a JIT. As far as byte streams go, that’s mostly what XDR did: they were just minimally structured byte streams. Just tie the data types, layouts, and so on to whatever language the OS or platform uses the most.

                                                          2. 3

                                                            JSON replaces these problems with different ones. Different tools will use different constructs inside JSON (named lists, unnamed ones, different layouts and nesting strategies).

                                                            This is true, but it does not mean that having some kind of common interchange format does not improve things. So yes, it does not tell you what the data will contain (but “custom text format, possibly tab separated” is, again, not better). I know the problem, since I often work with JSON that contains or misses things. But the answer is not to avoid JSON but rather to have specifications. JSON has a number of possible schema formats, which puts it at a big advantage over most custom formats.
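
                                                            As a small illustration of what I mean (a Go sketch, with field names made up for the example and not taken from any real tool): even decoding into a declared struct and rejecting unknown fields catches a lot of drift, without a full schema language.

                                                                package main

                                                                import (
                                                                    "encoding/json"
                                                                    "fmt"
                                                                    "strings"
                                                                )

                                                                // Hypothetical record shape; the struct acts as a minimal "specification".
                                                                type Entry struct {
                                                                    File  string `json:"file"`
                                                                    Lines int    `json:"lines"`
                                                                }

                                                                func main() {
                                                                    input := `{"file":"dmesg.log","lines":238,"extra":true}`
                                                                    dec := json.NewDecoder(strings.NewReader(input))
                                                                    dec.DisallowUnknownFields() // reject fields the declared shape doesn't know about

                                                                    var e Entry
                                                                    if err := dec.Decode(&e); err != nil {
                                                                        fmt.Println("input does not match the expected shape:", err)
                                                                        return
                                                                    }
                                                                    fmt.Printf("%+v\n", e)
                                                                }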

                                                            The other alternative is of course something like ProtoBuf, because it forces the use of proto files, which is at least some kind of specification. That throws away the human readability, which I didn’t want to suggest to a Unix crowd.

                                                            Thinking about it, an established binary interchange format with schemas and a transport is in some ways reminiscent of COM & CORBA in the nineties.

                                                          3. 7

                                                            will break once the format is changed slightly

                                                            Doesn’t this happen with JSON too?
                                                            A slight change in the key names, or turning a string into a list of strings, and the recipient won’t be able to handle the input anyway.

                                                            the output accidentally contains a space.

                                                            Or the output accidentally contains a comma: depending on the parser, the behaviour will change.

                                                            No, jq doesn’t exis…

                                                            Jq is great, but I would not say JSON should be the default output when you want composable programs.

                                                            For example, the root of a JSON document is always one complete value, and this won’t work for streams that are produced slowly.

                                                            1. 5

                                                              will break once the format is changed slightly

                                                              Doesn’t this happens with json too?

                                                              Using a whitespace-separated table such as the one suggested in the article is somewhat vulnerable to continuing to appear to work after the format has changed while actually misinterpreting the data (e.g. if you inserted a new column at the beginning, your pipeline could happily continue, since all it needs is at least two columns with numbers in them). JSON is more likely to either continue working correctly and ignore the new column, or fail with an error. Arguably it is the key-value aspect that’s helpful here, not specifically JSON. As you point out, there are other issues with using JSON in a pipeline.
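
                                                              A contrived Go sketch of that failure mode (the column layout and key names are invented for the example): the positional parse silently picks up the new first column, while the key-based parse keeps reading the field it actually wants.

                                                                  package main

                                                                  import (
                                                                      "encoding/json"
                                                                      "fmt"
                                                                      "strconv"
                                                                      "strings"
                                                                  )

                                                                  func main() {
                                                                      // Positional parsing: the pipeline was written when the layout was
                                                                      // "<bytes> <name>"; a new first column ("<inode> <bytes> <name>")
                                                                      // shifts everything, but the code below keeps "working".
                                                                      line := "4242 9183 dmesg.log"
                                                                      fields := strings.Fields(line)
                                                                      size, _ := strconv.Atoi(fields[0]) // silently reads the inode, not the size
                                                                      fmt.Println("size?", size)

                                                                      // Key-based parsing: the extra key is ignored, and a renamed or
                                                                      // missing key shows up as a zero value you can check for.
                                                                      var rec struct {
                                                                          Name  string `json:"name"`
                                                                          Bytes int    `json:"bytes"`
                                                                      }
                                                                      _ = json.Unmarshal([]byte(`{"inode":4242,"name":"dmesg.log","bytes":9183}`), &rec)
                                                                      fmt.Println("size:", rec.Bytes)
                                                                  }

                                                              The positional version happily reports the inode as if it were the size; the JSON version still finds the right field.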

                                                            2. 3

                                                              On the other hand, most Unix tools use tabular format or key value format. I do agree though that the lack of guidelines makes it annoying to compose.

                                                              1. 2

                                                                Hands up everybody who has had to write parsers for zpool status and its load-bearing whitespace to do ZFS health monitoring.
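
                                                                For anyone who hasn’t had the pleasure, a toy Go sketch of what such a parser ends up doing, run against an approximate reconstruction of the config table (from memory, not verbatim output):

                                                                    package main

                                                                    import (
                                                                        "fmt"
                                                                        "strings"
                                                                    )

                                                                    func main() {
                                                                        // Roughly what the config section of `zpool status` looks like:
                                                                        // vdev nesting is expressed purely through indentation.
                                                                        lines := []string{
                                                                            "NAME        STATE     READ WRITE CKSUM",
                                                                            "tank        ONLINE       0     0     0",
                                                                            "  mirror-0  ONLINE       0     0     0",
                                                                            "    ada0p3  ONLINE       0     0     0",
                                                                            "    ada1p3  ONLINE       0     0     0",
                                                                        }
                                                                        for _, line := range lines {
                                                                            // The only thing telling us ada0p3 belongs to mirror-0 is the
                                                                            // number of leading spaces -- the "load-bearing whitespace".
                                                                            indent := len(line) - len(strings.TrimLeft(line, " "))
                                                                            fields := strings.Fields(line)
                                                                            if len(fields) < 2 || fields[0] == "NAME" {
                                                                                continue
                                                                            }
                                                                            fmt.Printf("depth=%d name=%s state=%s\n", indent/2, fields[0], fields[1])
                                                                        }
                                                                    }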

                                                                1. 2

                                                                  In my day-to-day work, there are times when I wish some tools would produce JSON and other times when I wish a JSON output was just textual (as recommended in the article). Ideally, tools should be able to produce different kinds of outputs, and I find libxo (mentioned by @apy) very interesting.

                                                                  1. 2

                                                                    I spent very little time thinking about this after reading your comment and wonder what, for example, the coreutils would look like if they accepted/returned JSON as well as plain text.

                                                                    A priori we have this awful problem of making everyone understand everyone else’s input and output schemas, but that might not be necessary. For any tool that expects a file as input, we make it accept any JSON object that contains the key-value pair "file": "something". For tools that expect multiple files, have them take an array of such objects. Tools that return files, like ls for example, can then return whatever they want in their JSON objects, as long as those objects contain "file": "something". Then we should get to keep chaining pipes of stuff together without having to write ungodly amounts of jq between them.

                                                                    I have no idea how much people have tried doing this or anything similar. Is there prior art?
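
                                                                    To make the idea concrete, a rough Go sketch of such a filter (assuming, in addition, one JSON object per line so it can stream; none of this is an existing convention, just the hypothetical one described above):

                                                                        package main

                                                                        import (
                                                                            "bufio"
                                                                            "encoding/json"
                                                                            "fmt"
                                                                            "os"
                                                                        )

                                                                        // Hypothetical filter: read one JSON object per line, require only
                                                                        // that it carries a "file" key, add our own output keys, and pass
                                                                        // everything else through untouched.
                                                                        func main() {
                                                                            sc := bufio.NewScanner(os.Stdin)
                                                                            out := json.NewEncoder(os.Stdout)
                                                                            for sc.Scan() {
                                                                                obj := map[string]interface{}{}
                                                                                if err := json.Unmarshal(sc.Bytes(), &obj); err != nil {
                                                                                    fmt.Fprintln(os.Stderr, "skipping non-JSON line:", err)
                                                                                    continue
                                                                                }
                                                                                name, ok := obj["file"].(string)
                                                                                if !ok {
                                                                                    fmt.Fprintln(os.Stderr, `skipping object without "file"`)
                                                                                    continue
                                                                                }
                                                                                info, err := os.Stat(name)
                                                                                if err != nil {
                                                                                    fmt.Fprintln(os.Stderr, err)
                                                                                    continue
                                                                                }
                                                                                obj["bytes"] = info.Size() // our contribution; unknown keys ride along
                                                                                out.Encode(obj)
                                                                            }
                                                                        }

                                                                    Anything upstream that emits objects with a "file" key could feed it, and keys it doesn’t know about just ride along.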

                                                                    1. 9

                                                                      In FreeBSD we have libxo, which a lot of the CLI programs are getting support for. It lets a program print its output once and have it translated to JSON, HTML, or other output forms automatically. So that would allow people to experiment with various formats (although it doesn’t handle reading the output back in).

                                                                      But as @Shamar points out, one problem with JSON is that you need to parse the whole thing before you can do much with it. One can hack around it but then they are kind of abusing JSON.
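
                                                                      For what it’s worth, this is roughly what the hack-around looks like in Go: encoding/json can hand you array elements one at a time through its token API, so you can start working before the closing bracket has arrived, but at that point you’re poking below the document model.

                                                                          package main

                                                                          import (
                                                                              "encoding/json"
                                                                              "fmt"
                                                                              "strings"
                                                                          )

                                                                          func main() {
                                                                              r := strings.NewReader(`[{"file":"a.log"},{"file":"b.log"},{"file":"c.log"}]`)
                                                                              dec := json.NewDecoder(r)

                                                                              if _, err := dec.Token(); err != nil { // consume the opening '['
                                                                                  panic(err)
                                                                              }
                                                                              for dec.More() {
                                                                                  var obj struct {
                                                                                      File string `json:"file"`
                                                                                  }
                                                                                  if err := dec.Decode(&obj); err != nil {
                                                                                      panic(err)
                                                                                  }
                                                                                  fmt.Println("got element:", obj.File) // could act on it immediately
                                                                              }
                                                                              if _, err := dec.Token(); err != nil { // consume the closing ']'
                                                                                  panic(err)
                                                                              }
                                                                          }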

                                                                      1. 2

                                                                        That looks like a fantastic tool, thanks for writing about it. Is there a concerted effort in FreeBSD (or other communities) to use libxo more?

                                                                        1. 1

                                                                          FreeBSD definitely has a concerted effort to use it, I’m not sure about elsewhere. For a simple example, you can check out wc:

                                                                          apy@bsdell ~> wc -l --libxo=dtrt dmesg.log
                                                                               238 dmesg.log
                                                                          apy@bsdell ~> wc -l --libxo=json dmesg.log
                                                                          {"wc": {"file": [{"lines":238,"filename":"dmesg.log"}]}
                                                                          }
                                                                          
                                                                    2. 1

                                                                      powershell uses objects for its pipelines, i think it even runs on linux nowadays.

                                                                      i like json, but for shell pipelining it’s not ideal:

                                                                      • the unstructured nature of the classic output is a core feature. you can easily mangle it in ways the program’s author never assumed, and that makes it powerful.

                                                                      • with line-based records you can parse incomplete (as in: the process is not finished) data more easily. you just have to split after a newline. with json, technically you can’t begin using the data until a (sub)object is completely parsed. using half-parsed objects seems unwise.

                                                                      • if you output json, you probably have to keep the structure of the object tree you are generating in memory, like “currently i’m in a list in an object in a list”. that’s not ideal sometimes (one doesn’t have to use real serialization all the time, but it’s nicer than just printing the correct tokens at the right places).

                                                                      • json is “javascript object notation”. not everything is ideally represented as an object. that’s why relational databases are still in use.

                                                                      edit: be nicer ;)

                                                                    1. 15

                                                                      world-open QA-less package ecosystems (NPM, go get)

                                                                      This is one I’m increasingly grumpy about. I wish more ecosystems would establish a gold set of packages that have complete test coverage, complete API documentation, and proper semantic versioning.

                                                                      1. 4

                                                                        world-open QA-less package ecosystems (NPM, go get)

                                                                        i’d argue that go get is not a package ecosystem. it’s just a (historic) convenience tool which was good enough for the initial use (inside an organization). furthermore, i like the approach better than the centralized language package systems. nobody checks all the packages in pypi or rubygems. using a known-good git repo isn’t worse, maybe it’s even better, as there is not another link in the chain which could break, since the original repository is used instead of a repackaged copy.

                                                                        I wish more ecosystems would establish a gold set of packages that have complete test coverage, complete API documentation, and proper semantic versioning.

                                                                        python has had the batteries included for ages, and go’s standard library isn’t bad either. both are well-tested and have good documentation. in my opinion the problem is that often another 3rd-party dependency gets pulled in quickly, instead of giving a second thought to whether it is really required or could be done oneself, which may spare one trouble in the future (e.g. left-pad).

                                                                        in some cases there is even a bit of quality control for non standard packages: some database drivers for go are externally tested: https://github.com/golang/go/wiki/SQLDrivers

                                                                        1. 2

                                                                          Then you get the curation (and censorship) of Google Play or Apple’s Store.

                                                                          Maybe you want more of the Linux package repo model, where you have the official repo (Debian, RedHat, Gentoo Portage), some optional non-oss or slightly less official repos (Fedora EPEL), and then always have the option to add 3rd-party vendor repos with their own signing keys (PPA, opensuse build service, Gentoo Portage overlays).

                                                                          I really wish Google Play had the option of adding other package trees. I feel like Apple and Google took a great concept and totally fucked it up. Ubuntu Snap is going in the same (wrong) direction.

                                                                          1. 2

                                                                            On Android it’s certainly possible to install F-Droid, and get access to an alternate package management ecosystem. I think I had to sideload the F-Droid APK to get it to work though, which not every user would know how to do easily (I just checked, it doesn’t seem to be available in the play store).

                                                                        1. 1

                                                                          There is a typo in the title (decontructing -> deconstructing).

                                                                          1. 2

                                                                            it’s in the article too, so i’d keep it that way here

                                                                          1. 13

                                                                            I have other things going on in the pixel mines, but a couple parts of this I don’t think illustrate the points the author wants to make.

                                                                            But this criticism largely misses the point. It might be nice to have very small and simple utilities, but once you’ve released them to the public, they’ve become system boundaries and now you can’t change them in backwards-incompatible ways.

                                                                            This is not an argument for making larger tools–is it better to have large and weird complicated system boundaries you can’t change, or small ones you can’t change?

                                                                            While Plan 9 can claim some kind of ideological purity because it used a /net file system to expose the network to applications, we’re perfectly capable of accomplishing some of the same things with netcat on any POSIX system today. It’s not as critical to making the shell useful.

                                                                            This is a gross oversimplification and glossing over of what Plan 9 enabled. It wasn’t mere “ideological purity”, but a comprehensive philosophy that enabled an environment with neat tricks.

                                                                            The author might as well have something similar about the “ideological purity of using virtual memory”, since some of the same things can be accomplished with cooperative multitasking!

                                                                            1. 4

                                                                              This is a gross oversimplification and glossing over of what Plan 9 enabled. It wasn’t mere “ideological purity”, but a comprehensive philosophy that enabled an environment with neat tricks.

                                                                              Not only tricks, but a whole concept of how resources can be used: use the file storage of one system, the input/output (screen, mouse, etc.) of another, and run the programs somewhere with a strong cpu, all by composing filesystems. Meanwhile, in 2018, we are stuck with ugly hacks and different protocols for everything, trying to fix problems by adding another layer on top of things (e.g. pulseaudio on top of alsa).

                                                                              And, from the article:

                                                                              And as a filesystem, you start having issues if you need to make atomic, transactional changes to multiple files at once. Good luck.

                                                                              That’s an issue of the design of the concrete filesystem, not of the filesystem abstraction. You could write settings to a bunch of files which live together in a directory and commit them with a write to a single control file.
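
                                                                              A rough sketch of what the client side of that could look like, assuming a hypothetical settings filesystem mounted at /mnt/settings whose ctl file applies the staged writes as one transaction (this is not a real interface, just an illustration of the idea):

                                                                                  package main

                                                                                  import (
                                                                                      "log"
                                                                                      "os"
                                                                                      "path/filepath"
                                                                                  )

                                                                                  // Hypothetical: individual writes are staged by the filesystem server,
                                                                                  // and a write of "commit" to the ctl file applies them all atomically.
                                                                                  func main() {
                                                                                      base := "/mnt/settings/network"

                                                                                      staged := map[string]string{
                                                                                          "address": "192.168.1.10\n",
                                                                                          "gateway": "192.168.1.1\n",
                                                                                          "dns":     "9.9.9.9\n",
                                                                                      }
                                                                                      for name, value := range staged {
                                                                                          if err := os.WriteFile(filepath.Join(base, name), []byte(value), 0644); err != nil {
                                                                                              log.Fatal(err)
                                                                                          }
                                                                                      }

                                                                                      // Nothing takes effect until this single write; the filesystem server
                                                                                      // can accept or reject the whole batch as one transaction.
                                                                                      if err := os.WriteFile(filepath.Join(base, "ctl"), []byte("commit\n"), 0644); err != nil {
                                                                                          log.Fatal(err)
                                                                                      }
                                                                                  }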

                                                                              Going beyond text streams

                                                                              PowerShell is a good solution, but the problem we have with pipelines on current unix-style systems isn’t that the data is text, but that the text is ill-formatted. Many things return some cute markup. That makes it more difficult to parse than necessary.

                                                                              1. 3

                                                                                Actually, Unix proposed the file as a universal interface before Plan 9 was a dream.
                                                                                The issue was that temporary convenience and the hope that “worse is better” put Unix in a local minimum where that interface was not universal at all (sockets, ioctl, fcntl, signals…).
                                                                                Pike tried to escape that minimum with Plan 9, where almost every kernel and user service is provided as a filesystem and you can stack filesystems like you compose pipes in Unix.

                                                                                1. 10

                                                                                  Put a quarter in the Plan9 file-vs-filesystem “well actually” jar ;)

                                                                                1. 2

                                                                                  go sometimes has it in its docs, for example: https://golang.org/pkg/sort/#Sort

                                                                                  1. 3

                                                                                    I hadn’t seen that. I know that Russ Cox is pretty algorithmically inclined - his analysis of the algorithms for the new dependency versioning mechanism is really thoughtful, but usually I see it as a comment in the implementation (which is usually more appropriate).

                                                                                    1. 2

                                                                                      that’s one thing i like about go: they value the research that has been done and try to implement the best solution (within sane limits).

                                                                                  1. 5

                                                                                    2 cents: this is the reason why federated protocols make more sense, instead of centralizing, but moxie is against federation.

                                                                                    the infrastructure should be owned by the users.

                                                                                    i never quite got why signal is so hyped, you essentially just choose to trust them and not whatsapp/telegram/whatever with your metadata.

                                                                                    1. 3

                                                                                      There’s always going to be a question of trust, and OWS is more independent than your examples. For something federated to be as secure and trustworthy, you’ve got to have easy-to-use clients and trust in the maintainership of the servers and the code base.

                                                                                      1. 3

                                                                                        While signal is open source, what keeps them from deploying not that version to their servers, but a slightly modified one? Even if that’s not a problem with the chats being e2e encrypted, why should i trust them with the metadata? With federation, I (or a party I know and trust) can run a server, and I am still able to talk to people on other servers (the other party has to be trusted with metadata too, but that’s inherent to the problem).

                                                                                        I just don’t like the OWS cult. The classic advice “use signal and everything’s gonna be fine” is misguided. OWS is a single point of failure. People have to learn how technology works. Not the gory crypto details, but at least the 10000ft view. They use cloud resources. I’d expect that there are some parties that are more than interested in access to those servers. I know that this sounds a bit tin-foil-hat, but with the risk profile of signal, the first thing I’d do would be to have my own infrastructure I can control. It’s just a compromise which doesn’t match the whole secure-communications idea.

                                                                                        Imagine: someone other than OWS gets access to the cloud servers and deploys a version of the signal server which exploits a flaw in the signal client, maybe a protocol parsing bug. I don’t know how well the client sanitizes the communication with the server, but I’ll guess the expectation is that the server is well-behaved. Bingo, possibly all clients are pwned. With federated services this seems to be much harder, as a) other parties should always expect malign behavior in such protocols and b) just the clients of this one instance are affected. Other servers are probably running a different OS, with a different setup, in different countries, which makes attacking every server much more complicated.

                                                                                        edit: fix b0rken english

                                                                                    1. 4

                                                                                      Is there anyone who can review a distro without reviewing some desktop manager?

                                                                                      Is there anyone who understands that desktop managers are independent of distros?

                                                                                      1. 5

                                                                                        distros are mostly the same under the hood, linux, systemd and deb/rpm packages.

                                                                                        the interesting parts are things like “will it destroy itself during distro upgrades” but those are rarely included in reviews

                                                                                      1. 21

                                                                                        I detest paying for software except when it occupies certain specialized cases or represents something more akin to work of art, such as the video game Portal.

                                                                                        I detest this attitude. He probably also uses an ad blocker and complains about how companies sell his personal information. You can’t be an open source advocate if you detest supporting the engineers that build open source software.

                                                                                        But only when it’s on sale.

                                                                                        I’m literally disgusted.

                                                                                        1. 8

                                                                                          It’s reasonable to disagree with the quote about paying for software. But how on earth does this defense of the advertising industry come in?

                                                                                          Certainly it’s possible to be an open source advocate and use an ad blocker and oppose the selling of personal information.

                                                                                          1. 2

                                                                                            Certainly. Actually, I would describe myself in that way. But you can’t believe that and also believe you’re entitled to free-as-in-beer software. Especially the high-quality “just works” software the author describes. It’s a contradiction.

                                                                                            Alternative revenue streams like advertising exist to power products people won’t pay for. I don’t know many software engineers who want to put advertising in their products; rather, they have to in order to avoid losing money. That’s why I happily pay for quality software like Dash and Working Copy, and donate to open source projects.

                                                                                            1. 1

                                                                                              But you can’t believe that, and also believe you’re entitled to free-as-in-beer software.

                                                                                              I don’t get that sort of vibe from this article. He doesn’t seem to be entitled at all.

                                                                                          2. 4

                                                                                            “free as in free beer”!

                                                                                            1. 1

                                                                                              I can’t afford to have a different attitude.

                                                                                            1. 5

                                                                                              They claim that the gopher is still there but I didn’t see it anywhere…

                                                                                              https://mobile.twitter.com/golang/status/989622490719838210

                                                                                              “Rest easy, our beloved Gopher Mascot remains at the center of our brand.”

                                                                                              and why on earth is this downvoted off topic?

                                                                                              1. 4

                                                                                                https://twitter.com/rob_pike/status/989930843433979904

                                                                                                Rob Pike seconding this.

                                                                                                Also, this is pretty relevant because when people think “golang logo” they typically think of the gopher. I’m not sure people even realized there was a hand drawn golang text logo before this announcement.

                                                                                                1. 3

                                                                                                  It had two speed lines. Go got faster, so they added a 3rd. Presumably, there’s room for more speed lines as Go’s speed improves.

                                                                                              1. 3
                                                                                                1. The relative difficulty of running your own as an absolute beginner

                                                                                                yes, it’s difficult. that’s because one has to know how things work to make them work. we have to get away from this “computers are easy!” thing. they aren’t, and everything that pretends to be easy has a trade-off (privacy seems to be the current one). analogy: cars (even bikes) are seldom built by their owners; the trade-off for them being easy and comfortable to use is high repair costs, as things are more complex. technology isn’t easy, even if everybody wants one to believe it is.

                                                                                                1. The eventual centralization on top of the most well-run versions (like Matrix)

                                                                                                there will always be bigger instances. reasons for running your own instance are either that you find it interesting or that you don’t trust any of the existing ones (or both). one doesn’t have to federate just because it’s possible. the important thing is that it is possible.

                                                                                                1. 1

                                                                                                  i’ve always wondered if one could use gpus to speed up prolog?

                                                                                                  1. 4

                                                                                                    The way prolog is written in practice tends to lean pretty heavily on the order in which things are evaluated – you make sure predicates fail fast, and to do that, you take advantage of the knowledge that the order in which predicates are defined is the order in which they are evaluated (and then you use cuts to prevent other paths from being evaluated at all). A lot of real code would fail if it got implicitly parallelized in any way. (This is one of the reasons I didn’t want to keep compatibility.)

                                                                                                    It’s pretty trivial to make a prolog-like language that implicitly parallelizes most back-tracking. (In fact, that’s most of what this is.) But, when used naturally, it would cease to have the same kind of operation ordering guarantees. (You could backtrack on the results after parallel execution, in order to figure out which path should have run first, but there aren’t enough prolog programmers to justify keeping compatibility IMO.)

                                                                                                    I’m not sure GPUs would be more beneficial than having extra conventional CPU threads, since what we’re doing is basically a search. However, maybe there’s some way to model a tree search as matrix multiplication at compile time or something. (I don’t really have enough of a math background to say. They managed to do it with neural nets, so I guess anything’s possible.)
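
                                                                                                    A toy Go sketch of the ordering point above (not Prolog, and nothing like a real implementation): exploring the alternatives of a choice point sequentially preserves clause order, while exploring them concurrently does not, so code that relies on “first clause wins” breaks.

                                                                                                        package main

                                                                                                        import (
                                                                                                            "fmt"
                                                                                                            "sync"
                                                                                                        )

                                                                                                        func main() {
                                                                                                            // Three alternatives for the same "choice point".
                                                                                                            alternatives := []string{"clause 1", "clause 2", "clause 3"}

                                                                                                            // Sequential backtracking: solutions appear in clause order,
                                                                                                            // which fail-fast ordering and cuts depend on.
                                                                                                            for _, alt := range alternatives {
                                                                                                                fmt.Println("sequential:", alt)
                                                                                                            }

                                                                                                            // Parallel exploration of the same alternatives: the order now
                                                                                                            // depends on the scheduler.
                                                                                                            results := make(chan string, len(alternatives))
                                                                                                            var wg sync.WaitGroup
                                                                                                            for _, alt := range alternatives {
                                                                                                                wg.Add(1)
                                                                                                                go func(a string) {
                                                                                                                    defer wg.Done()
                                                                                                                    results <- "parallel: " + a
                                                                                                                }(alt)
                                                                                                            }
                                                                                                            wg.Wait()
                                                                                                            close(results)
                                                                                                            for r := range results {
                                                                                                                fmt.Println(r)
                                                                                                            }
                                                                                                        }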

                                                                                                    1. 1

                                                                                                      thanks for the reply! i don’t really know much about prolog, but when i was last doing some cuda stuff i thought about this. i didn’t know that the evaluation order is used that much in practice.

                                                                                                      maybe tree searches could be somewhat parallelized when stored like a closure table for sql, but that’s a wild uneducated guess :)