1. 1

      Quite a few people here demanded evidence that PC or radical leftists were doing some kind of coordinated work to suppress others’ views on forums, in media, journals, and so on. Count the first link as a nice example. A handful of people, working in secret and on Facebook, misrepresenting a scientist’s work got everyone from journals to the NSF to cancel support out of concern over a damaged image, losing people in their professional networks, and/or getting people fired. The scientific, free-speech alternative is that the paper is published and such scientists then either corroborate or refute it with evidence. They have no interest in that: political domination is more important than facts or due process.

      1. 2

        That’s pretty poor evidence for your idea. If you bother to actually read the links, you will find that the board of the NYJM raised questions about the lack of proper review of the paper and about the quality of the mathematics:

        This statement is meant to set the record straight on the unfounded accusations of Ted Hill regarding his submission to the New York Journal of Mathematics (NYJM), where I was one of 24 editors serving under an editor-in-chief. Hill’s paper raised several red flags to me and other editors, giving concern not just about the quality of the paper, but also the question of whether it underwent the usual rigorous review process. Hill’s paper also looked totally inappropriate for this theoretical math journal: in addition to the paucity of math in the paper, its subject classification (given by the authors themselves) appeared in no other paper in NYJM’s 24 year history, and did not fall into any of the areas of expertise of the editors of NYJM, as listed on the NYJM website.

        and

        For whatever reason, some of the discussion online has focused on the role of Amie Wilkinson, a mathematician from the University of Chicago (and who, incidentally, was a recent speaker here at UCLA in our Distinguished Lecture Series), who wrote an email to the editor-in-chief of the Intelligencer raising some concerns about the content of the paper and suggesting that it be published alongside commentary from other experts in the field.

        In summary, a trashy paper first slipped through peer review and then was caught and dumped by the editors. The real question is who wanted to publish this garbage in the first place and why? This is an example of the success of peer review and academic standards, not an example of a shadowy cabal suppressing free speech.

        1. 1

          You’re saying the regular editorial process involves people in Women’s Chapters asking for stuff to be canceled for being inherently damaging, secret conversations behind the scenes, the submitters being misled about the who/what of the review process, smear/shaming campaigns against organizations on Facebook, a “round table” where one side gets 15 minutes but the others get prepared speeches, and replacing the article with another one entirely as the shaming continues?

          Is that all normal for a scientific review process of a paper on biology or mathematics? I’ve never heard of it. I have heard of bickering with reviewers before submission. They’d already passed that point, though. This all happened after that, through what looks to be entirely politics, not science. Of course, this assumes the events happened as described and that the author isn’t misleading us. That would be easy to counter, though, by simply following the scientific process of publishing and debating the paper, which already had some peer review. Then the politics would be minimal in the process itself.

          1. 3

            No. Actually, what is described is the regular process. The irregular process at the NYJM was the initial acceptance of a political article, not within the scope of the journal, outside the regular editorial process, without review. That was corrected when editorial board members expressed their concerns. It is very clear that the article never passed the regular review process at NYJM. And the regular process does not involve publishing every submitted paper and then arguing about it. The initial acceptance came out of an irregular, ideologically motivated process to place an unsuitable article in a mathematical journal outside of peer review. Correcting that is a win for academic standards. The Terry Tao, Benson Farb, and Amie Wilkinson statements are insights into the scientific process. Quillette is a house of nonsense.

            This statement addresses some unfounded allegations about my personal involvement with the publishing of Ted Hill’s preprint “An evolutionary theory for the variability hypothesis” (and the earlier version of this paper co-authored with Sergei Tabachnikov). As a number of erroneous statements have been made, I think it’s important to state formally what transpired and my beliefs overall about academic freedom and integrity.

            I first saw the publicly available paper of Hill and Tabachnikov on 9/6/17, listed to appear in The Mathematical Intelligencer. While the original link has been taken down, the version of the paper that was publicly available on the arXiv at that time is here.

            I sent an email, on 9/7/17, to the Editor-in-Chief of The Mathematical Intelligencer, about the paper of Hill and Tabachnikov. In it, I criticized the scientific merits of the paper and the decision to accept it for publication, but I never made the suggestion that the decision to publish it be reversed. Instead, I suggested that the journal publish a response rebuttal article by experts in the field to accompany the article. One day later, on 9/8/17, the editor wrote to me that she had decided not to publish the paper.

            I had no involvement in any editorial decisions concerning Hill’s revised version of this paper in The New York Journal of Mathematics. Any indications or commentary otherwise are completely unfounded.

            I would like to make clear my own views on academic freedom and the integrity of the editorial process. I believe that discussion of scientific merits of research should never be stifled. This is consistent with my original suggestion to bring in outside experts to rebut the Hill-Tabachnikov paper. Invoking purely mathematical arguments to explain scientific phenomena without serious engagement with science and data is an offense against both mathematics and science.

            Amie Wilkinson
            Professor of Mathematics, University of Chicago
            September 11, 2018

            1. 2

              Just want to add that Professor Wilkinson’s final point is an excellent one. In social science and econ, and in CS for that matter, there are far too many papers showing that, with sufficient assumptions, some conjecture about how something works can be captured in a mathematical model. Without at least some indication that the model then actually casts light on the science, this is usually just a pointless exercise or worse.

              1. 1

                Have you read the article? Mathematical models may be seen as political when they suggest conclusions that go against somebody’s beliefs about how reality should be.

                1. 2

                  I read the assessment of one of the NYJM editors. It seems totally dispositive.

                  This statement is meant to set the record straight on the unfounded accusations of Ted Hill regarding his submission to the New York Journal of Mathematics (NYJM), where I was one of 24 editors serving under an editor-in-chief. Hill’s paper raised several red flags to me and other editors, giving concern not just about the quality of the paper, but also the question of whether it underwent the usual rigorous review process. Hill’s paper also looked totally inappropriate for this theoretical math journal: in addition to the paucity of math in the paper, its subject classification (given by the authors themselves) appeared in no other paper in NYJM’s 24 year history, and did not fall into any of the areas of expertise of the editors of NYJM, as listed on the NYJM website.

                  I don’t know what groundbreaking conclusions you think can be drawn from the paper but if you read the NYJM table of contents, you can see it was a bizarre idea to even consider this paper for publication. It’s, at best, a social science modeling paper - see http://nyjm.albany.edu/currvol.htm - for a journal that is more at home with cohomology and Banach spaces.

                  1. 1

                    Forming an opinion based on the point of view of one side of a discussion does have the advantage of not having to think too hard about an issue.

                    1. 2

                      I can check some of it. I looked at the article; it is obviously not suited to that journal. If you want to consider it likely that the NYJM editorial board is a feminist conspiracy, be my guest, but Benson Farb is a well-regarded mathematician, and as far as I know nobody with inside knowledge has challenged his completely believable account. On the other hand, Quillette is a grossly unreliable source.

        1. 6

          This paper is flogging a dead horse. There are plenty of corner cases to be tweaked, but they don’t add up to much.

          There are bigger improvements to be had by thinking bigger.

          1. 2

            Your thinking bigger article is interesting. Morton encoding is cool.

            1. 2

              Interesting! Were there any programming languages which experimented with or used Morton encoding for their arrays?

              1. 1

                It’s an implementation technique. If an array is only ever indexed (i.e., no pointers into it) the compiler can use whatever layout it chooses.
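
                Roughly, the layout swap amounts to computing a different index function. Here is a minimal C sketch of the 2D Morton (Z-order) index, assuming 16-bit coordinates; an actual compiler would pick its own scheme:

                    #include <stdint.h>

                    /* Spread the low 16 bits of v apart so bit i lands at bit 2i. */
                    static uint32_t spread_bits(uint32_t v)
                    {
                        v &= 0x0000FFFFu;
                        v = (v | (v << 8)) & 0x00FF00FFu;
                        v = (v | (v << 4)) & 0x0F0F0F0Fu;
                        v = (v | (v << 2)) & 0x33333333u;
                        v = (v | (v << 1)) & 0x55555555u;
                        return v;
                    }

                    /* a[morton_index(x, y)] replaces a[y * width + x]; nearby
                       (x, y) pairs then sit near each other in memory, which is
                       the cache friendliness the article is after. */
                    static uint32_t morton_index(uint32_t x, uint32_t y)
                    {
                        return spread_bits(x) | (spread_bits(y) << 1);
                    }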

                1. 1

                  I don’t see how a compiler could deduce usage patterns that would benefit from Morton indexing.

              2. 1

                “There are really really big savings to be had by providing compilers with a means of controlling the processor’s caches, e.g., instructions to load and flush cache lines.”

                It’s true. It’s also field-proven: they’re called scratchpads. They use less circuitry and power since they’re simple, software-driven stores. However, they have to be used wisely by the compiler, and most of what the market makes isn’t used wisely. So, those pushing caches over scratchpads got more sales. Scratchpads are mainly in embedded products now, IIRC.
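
                For a concrete feel of the difference: commodity CPUs only expose hints, not true scratchpad control, but a C sketch of the closest equivalents (the GCC/Clang prefetch builtin plus the x86 cache-line flush intrinsic) shows what “load and flush cache lines” looks like from the compiler’s side:

                    #include <immintrin.h>  /* _mm_clflush; x86 with SSE2 only */

                    void scale(const double *src, double *dst, long n)
                    {
                        for (long i = 0; i < n; i++) {
                            /* Hint: begin loading a line we expect to need soon.
                               Prefetching past the end is harmless; this is
                               advice, not an access. */
                            __builtin_prefetch(&src[i + 16], 0, 0);
                            dst[i] = src[i] * 2.0;
                        }
                        /* Evict finished output, one flush per 64-byte line,
                           sparing the rest of the working set. */
                        for (long i = 0; i < n; i += 8)
                            _mm_clflush(&dst[i]);
                    }

                With a real scratchpad the compiler would instead schedule explicit loads and stores to a separate, software-managed memory; the above only nudges a transparent cache.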

              1. 6

                For those of you who list books on Goodreads, there is now a lobste.rs Goodreads group.

                If one of the lobste.rs admins wants to be an admin of the Goodreads group, please let me know.

                  1. 2

                    Derek! This post is super interesting and I hadn’t seen it until now. I’d worked on a similar idea about a year ago with a bitcoin-paying npm proxy server. The idea was basically the same: package developers could include payment information in the metadata, and folks that use the projects would automatically pay out to those projects.

                    Although I think OpenCollective’s BackYourStack has done a better job of creating a user-friendly system (centralized, over the traditional payment system).

                    I’m not sure this fulfills the OP’s criteria for a compelling use case, but it’s great to encounter someone working on similar ideas.

                    1. 2

                      At the time there was pushback from blockchain people complaining about blocks being filled up unnecessarily. These days Ethereum might be a better, if somewhat more complicated, solution.

                      Traditional payment systems are designed to be confidential, which for this use case is a disadvantage.

                    2. 2

                      The argument seems to be “PayPal doesn’t have an API for this.” So the issue isn’t the centralized system, it’s just a missing API, and if they had it PayPal would suffice?

                      1. 2

                        The point of the blockchain approach is that proof of purchase is publicly visible, removing the need for the software developer to spend any time confirming the sale.

                        1. 10

                          You don’t need a blockchain. You just need a log, crypto, and third-party checking. Schemes for “blockchain” functionality that just used logs with crypto have been around a long time.
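
                          The core of such a log is tiny. A sketch using OpenSSL’s one-shot SHA256 (the struct and names are mine, purely illustrative): each entry hashes over its predecessor’s hash, so no past entry can change without every later hash changing, and a third party holding the latest hash can recheck the whole chain:

                              #include <string.h>
                              #include <openssl/sha.h>   /* link with -lcrypto */

                              struct entry {
                                  unsigned char prev[SHA256_DIGEST_LENGTH];  /* hash of prior entry    */
                                  char          payload[128];                /* e.g. a purchase record */
                                  unsigned char hash[SHA256_DIGEST_LENGTH];  /* hash(prev || payload)  */
                              };

                              void append_entry(struct entry *e, const unsigned char *prev_hash,
                                                const char *payload)
                              {
                                  unsigned char buf[SHA256_DIGEST_LENGTH + sizeof e->payload];

                                  memcpy(e->prev, prev_hash, SHA256_DIGEST_LENGTH);
                                  memset(e->payload, 0, sizeof e->payload);
                                  strncpy(e->payload, payload, sizeof e->payload - 1);

                                  /* Chain: the hash covers the previous hash plus this payload. */
                                  memcpy(buf, e->prev, SHA256_DIGEST_LENGTH);
                                  memcpy(buf + SHA256_DIGEST_LENGTH, e->payload, sizeof e->payload);
                                  SHA256(buf, sizeof buf, e->hash);
                              }

                          The log operator signs what it publishes, and independent checkers replay the chain; no mining involved.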

                          1. 5

                            More specifically I guess, Certificate Transparency. Every time someone wants a “blockchain” to publicly prove something, they actually want CT.

                            1. 1

                              I think this is correct, although it’s very hard to trust Google in this specific instance.

                              1. 3

                                You don’t have to trust Google for anything. I mean, you can adapt the general scheme/protocol for any content (not just TLS certs) and trust whoever you want to host servers.

                              2. 1

                                That’s another good example of logging + crypto + checking.

                                1. 1

                                  CT is half the solution. A blockchain performs payment and public record-keeping in one transaction.

                                  1. 2

                                    It does, but it’s unnecessary. Fire off two transactions: one updates a key-value store that audit pages are generated from; one goes through the payment system. Both are so efficient that similar protocol operations are done on 16-bit MCUs in smartcards.

                                    It’s also not clear that you want the payment and log handled by the same systems with the same privileges and admins. Splitting them up can mitigate some risk.

                        1. 6

                          Electricity usage is a huge concern even within the cryptocurrency community. There is a lot of work going towards more energy-efficient solutions; however, proof-of-work is still the de facto method. At Merit we still use PoW, but I chose a memory-bound algorithm called Cuckoo Cycle, which is more energy efficient since it’s limited by memory bandwidth. I hope to move away from proof-of-work completely in the future, but it’s not easy to get the same properties. Since Merit is in some ways half PoW and half PoS (Proof-of-Stake) via our Proof-of-Growth (PoG) algorithm, we are already halfway there.

                          Proof-of-Work is fascinating because it’s philosophically the opposite of fiat money. Fiat money is one of the few things in the world where you can expend less effort and produce more of it. Cryptocurrencies with PoW are the opposite: you produce less of it in proportion to the effort expended.

                          1. 2

                            How much more memory efficient is Merit (on the scale of the top 100 countries electricity consumption)?

                            The article points out that ASIC miners have found ways of solving algorithms that had previously been thought to be resistant to bespoke hardware solutions.

                            Consuming the same amount of electricity as a largish country is certainly fascinating.

                            1. 4

                              Warning! This will be a bummer reply; nothing I say here will be uplifting…

                              Notice, of course, that the difference between the #1 country and the #2 country is large. It likely follows Zipf’s law. The issue with ASICs is that they are not easy to acquire, and therefore insiders get access to them first and have a huge advantage. It’s anathema to the goal of having anyone download the software and mine.

                              In the scheme of things, the amount of electricity used to mine cryptocurrencies pales in comparison to the amount of electricity wasted on countless other things. We should just acknowledge that there is something fundamentally wrong with the global economic system that allows for gross externalities that aren’t accounted for. And that there is such a gross disparity of wealth where some countries have such excess capacity for electricity while others struggle with brownouts and blackouts every day.

                              Global warming itself is an incredibly complex problem. Using a slow scripting language for your software? How much hardware are you wasting running at scale? Buying a Tesla? Too bad your electricity is likely dirty, and the production caused 5 years’ worth of the CO2 a normal car puts out. Switching to solar and wind? Too bad the air will be cleaner, causing more sunlight to hit the earth and heat it up faster, because even stopping now, we have decades of warming built in, and a cleaner atmosphere accelerates that warming.

                              Global warming is such an insanely difficult, complex, and urgent problem that we are missing the forest for the trees.

                              Cryptocurrencies are not tackling the problem of global warming, but neither are most technologies we are creating every day. I would love to hear how many people on Lobsters are tackling global warming head on? I suspect almost zero. And isn’t that just the most depressing thing? It is for me; I think about this every day when I look at my children.

                              EDIT: Holy poop, I was right, totally Zipf’s law: https://en.wikipedia.org/wiki/List_of_countries_by_electricity_consumption

                              1. 9

                                NB: this may be ranty ;)

                                In the scheme of things, the amount of electricity used to mine cryptocurrencies pales in comparison to the amount of electricity wasted on countless other things.

                                how about not doing things which currently have no value for society apart from being an item for financial speculation, and which burn resources? that would be a start. i have yet to see a valid application of cryptocurrencies which really works. hard cash is still a good thing which works. it’s like voting machines: they may kinda work, but crosses made with a pen on paper are still the best solution.

                                the electricity wasted on other things is due to shitty standby mechanisms and laziness. these things can be fixed. the “currency” part of “cryptocurrency” is to waste resources, which can’t be fixed.

                                Global warming itself is an incredibly complex problem.

                                so-so.

                                Using a slow scripting language for your software? How much hardware are you wasting running at scale?

                                see the fixing part above. fortunately most technology tends to get more efficient the longer it exists.

                                Buying a Tesla? Too bad your electricity is likely dirty, and the production caused 5 years’ worth of the CO2 a normal car puts out.

                                yeah, well, don’t buy cars from someone who shoots cars into orbit.

                                Switching to solar and wind? Too bad the air will be cleaner, causing more sunlight to hit the earth and heat it up faster, because even stopping now, we have decades of warming built in, and a cleaner atmosphere accelerates that warming.

                                the dimming and warming are two separate effects, though both are caused by burning things. cooling is caused by particles, while warming is caused by gases (CO2, CH4, …). there are some special cases like soot in the (ant)arctic ice, speeding up the melting. (cf. https://en.wikipedia.org/wiki/Global_cooling#Physical_mechanisms , https://en.wikipedia.org/wiki/Global_warming#Initial_causes_of_temperature_changes_(external_forcings) )

                                Cryptocurrencies are not tackling the problem of global warming, but neither are most technologies we are creating every day. I would love to hear how many people on Lobsters are tackling global warming head on? I suspect almost zero. And isn’t that just the most depressing thing? It is for me; I think about this every day when I look at my children.

                                as global warming doesn’t have a single cause, there isn’t much to do head on. with everything, there’s a spectrum here. some ideas which will help:

                                • don’t fly (less CO2).
                                • buy local food when possible, not fruit from around the globe in midwinter. don’t eat much meat (less CO2, CH4, N2O).
                                • use electricity from renewable sources (less CO2).

                                those things would really help if done on a larger scale, and aren’t too hard.

                                1. 2

                                  how about not doing things which currently have no value for society apart from being an item for financial speculation, and which burn resources? that would be a start. i have yet to see a valid application of cryptocurrencies which really works.

                                  Buying illegal goods through the internet without the risk of getting caught by the financial transaction (Monero and probably Bitcoin with coin tumblers).

                                  1. 4

                                    mind that i’ve written society: a valid example is drugs, which shouldn’t be illegal but should be sold by reliable, quality-controlled suppliers. i think other illegal things are illegal for a reason. additionally, i’d argue it’s risky to mail-order illegal things to your doorstep.

                                    1. 2

                                      Cryptocurrencies solve a much harder problem than hard cash: they have lowered the cost of producing non-state money. Non-state money has existed for thousands of years, but this is the first time in history you can trade globally with it. While the US dollar may be accepted almost everywhere, this is not true for other forms of cash.

                                      1. 4

                                        but what is the real use case?

                                        • if globalized trade continues to exist, so will the classic ways of payment. cryptocurrencies are only useful in this case if you want to do illegal things. there may be a use case in oppressed countries, but the people there tend to have other problems than buying things somewhere in the world.

                                        • if it ceases to exist, one doesn’t need a cryptocurrency to trade anywhere in the world, as there is no trade.

                                        i’m not a huge fan of the current state of the banking system, but it is a rather deep local optimum. it bugs me that i have to pay transaction fees, but that’s the case with cryptocurrencies, too. i just think that while theoretically elegant, cryptocurrencies do more harm than good.

                                        anecdote: years ago, i paid for a shell account by putting money in an envelope and sending it via mail ;)

                                        1. 2

                                          Cryptocurrencies are a transvestment from centralized tech to decentralized. It’s not what they do but how they do it that’s different. It’s a technology that allows the private sector to invest in decentralized tech, where in the past they had no incentive to do so. Since the governments of the world have failed so miserably to invest in decentralized technology in the last 20 years, this is the first time that I can remember where the private sector can contribute to building decentralized technology. Note that cryptocurrencies are behind investments in decentralized storage, processing, and other solutions which, before the blockchain, would have been charity cases.

                                          The question you can ask is: why not just stick with centralized solutions? I think the argument is a moral one, about power to the people versus some unaccountable third party.

                                          1. 1

                                            It’s a technology that allows the private sector to invest in decentralized tech, where in the past they had no incentive to do so.

                                            i still don’t see exactly where cryptocurrencies are required for investment in decentralized technology. we have many classic systems which are decentralized: internet (phone before that), electricity grid, water supply, roads, etc. why are cryptocurrencies required for “modern” decentralized systems? it just takes multiple parties who decide that it is a good solution to run a distributed service (like e-mail). how it is paid for is a different problem. one interesting aspect is that the functionality can be tightly coupled with payments in blockchainy systems. i’m not convinced that is reason enough to use it. furthermore some things can’t be done well due to the CAP theorem, so centralization is the only solution in those cases.

                                            Note cryptocurrencies are behind investments of decentralized storage, processing, and other solutions, where before the blockchain, they would have been charity cases.

                                            I’d say that the internet needs more of the “i run it because i can, not because i can make money with it” spirit again.

                                            1. 1

                                              i still don’t see exactly where cryptocurrencies are required for investment in decentralized technology.

                                              You are absolutely right! It isn’t a requirement. I love this subject by the way, so let me explain why you are right.

                                              we have many classic systems which are decentralized: internet (phone before that), electricity grid, water supply, roads, etc. why are cryptocurrencies required for “modern” decentralized systems

                                              You are absolutely right here. In the past, our decentralized systems were developed and paid for by the public sector. The private sector, until now, failed to create decentralized systems. The reason we need cryptocurrencies for modern decentralized systems is that we no longer have the political capital to create and fund them in the public sector.

                                              If we had a functioning global democracy, we could probably create many systems in the spirit of “i run it because i can, not because i can make money with it”.

                                              That spirit died during the great privatization of computing in the mid 80s, and the privatization of the internet in the mid 90s.

                                  2. 2

                                    I love rants :-) Let’s go!

                                    “currency” part of “cryptocurrency” is to waste resources, which can’t be fixed.

                                    Some people value non-state, globally tradeable currencies. Google alone claims to have generated $238 billion in economic activity from their ads and search: https://economicimpact.google.com/. The question is, how much CO2 did that economic activity create? Likely far more than all cryptocurrencies combined, but that’s just my guess. It’s not an excuse; I’m just pointing out we are missing the forest for the trees. People follow the money: just as Google engineers work for Google because the money is there from ads, many people are working on cryptocurrencies because the money is there.

                                    see the fixing part above. fortunately most technology tends to get more efficient the longer it exists.

                                    While true, since our profession loves pop-culture, most technologies are replaced with more fashionable and inefficient ones the longer they exist. Remember when C people were claiming C++ was slow? I do.

                                    the dimming and warming are two separate effects, though both are caused by burning things.

                                    They are separate effects that have a complex relationship with our models of the earth warming. Unfortunately, even most well-meaning climate advocates don’t acknowledge dimming and that it’s not as simple as changing to renewable resources since renewables do not cause dimming, and god knows we need the dimming.

                                    those things would really help if done on a larger scale and aren’t too hard.

                                    Here is my honest opinion: we should have done this 30 years ago, when it wasn’t too late. I was a child 30 years ago. The previous generation handed me this predicament on a silver platter. I do my part: I don’t eat meat because of global warming, I rarely use cars, I use public transport as much as possible, and I work from home as much as possible.

                                    But I do these things knowing it’s too late. Even if we stopped dumping CO2 in the atmosphere today, we have decades of warming built in that will likely irreparably change our habitat. Even the IPCC assumes we will geoengineer our way with some magical unicorn technology that hasn’t been created yet.

                                    I do my part not because I think they will help, but because I want to be able to look at my children and at least say I tried.

                                    I think one of my next software projects will be helping migrants travel safely, because one of the biggest tragedies and sources of human suffering resulting from climate change has been the refugee crisis, which is only going to grow.

                                    1. 2

                                      Some people value non-state, globally tradeable currencies. Google alone claims to have generated $238 billion in economic activity from their ads and search: https://economicimpact.google.com/. The question is, how much CO2 did that economic activity create? Likely far more than all cryptocurrencies combined, but that’s just my guess. It’s not an excuse; I’m just pointing out we are missing the forest for the trees. People follow the money: just as Google engineers work for Google because the money is there from ads, many people are working on cryptocurrencies because the money is there.

                                      i won’t dispute that ads are a waste of resources, i just don’t see why more resources need to be wasted on things which have no use except speculation. i hope we can do better.

                                      While true, since our profession loves pop-culture, most technologies are replaced with more fashionable and inefficient ones the longer they exist. Remember when C people were claiming C++ was slow? I do.

                                      JavaScript has gotten orders of magnitude more efficient. Hardware is still getting more efficient. There is always room for improvement. As you’ve written, people go where the money is (or can be saved).

                                      They are separate effects that have a complex relationship with our models of the earth warming. Unfortunately, even most well-meaning climate advocates don’t acknowledge dimming and that it’s not as simple as changing to renewable resources since renewables do not cause dimming, and god knows we need the dimming.

                                      But I do these things knowing it’s too late. Even if we stopped dumping CO2 in the atmosphere today, we have decades of warming built in that will likely irreparably change our habitat.

                                      Dimming has an effect. As a reason not to switch to renewable energy, it isn’t a good argument. Stopping pumping more greenhouse gases would be a good start; they tend to be consumed by plants.

                                      […] we will geoengineer our way with some magical unicorn technology that hasn’t been created yet.

                                      let’s not do this, humans have a tendency to make things worse that way ;)

                                      1. 1

                                        i hope we can do better.

                                        I don’t think our economic system is set up for that.

                                        JavaScript has gotten orders of magnitude more efficient. Hardware is still getting more efficient. There is always room for improvement. As you’ve written, people go where the money is (or can be saved).

                                        I think that because Moore’s law is now dead, things are starting to swing back towards efficiency. I hope this trend continues.

                                        Dimming has an effect. As a reason not to switch to renewable energy, it isn’t a good argument. Stopping pumping more greenhouse gases would be a good start; they tend to be consumed by plants.

                                        I didn’t offer dimming as a reason not to switch to renewables; I brought it up because JUST switching to renewables will doom us. As I’ve said, there are decades of warming baked in; there is a lag with the CO2 we already put in. Yes, we need to stop putting more in, but it’s not enough to just stop. And in fact, stopping and not doing anything else will doom us faster.

                                        let’s not do this, humans have a tendency to make things worse that way ;)

                                        I totally agree. I don’t want countries to start launching nuclear weapons, for example. The only realistic thing that could possibly work is massive planting of trees; I mean billions of trees need to be planted. And time is running out, because photosynthesis stops working at a certain temperature, so many places are already impossible to fix (Iraq, for example, which used to be covered in thick forests thousands of years ago).

                                        1. 1

                                          I don’t think our economic system is set up for that.

                                          aren’t we the system? changes can begin small; many attempts just fail early, i suppose.

                                          And in fact, stopping and not doing anything else will doom us faster.

                                          do you have any sources for that?

                                          The only realistic thing that could possibly work is massive planting of trees; I mean billions of trees need to be planted. And time is running out, because photosynthesis stops working at a certain temperature, so many places are already impossible to fix (Iraq, for example, which used to be covered in thick forests thousands of years ago).

                                          well, if the trend continues, greenland will have some ice-free space for trees ;) just stopping deforestation would be a good start though.

                                          1. 1

                                            aren’t we the system?

                                            We did not create the system; we were born into it. Most people see it as reality rather than as something that was designed.

                                            do you have any sources for that?

                                            https://www.sciencedaily.com/releases/2017/07/170731114534.htm

                                            well, if the trend continues, greenland will have some ice-free space for trees ;) just stopping deforestation would be a good start though.

                                            Sorry if I’m wrong, but do I sense a bit of skepticism about the dangers we face ahead?

                                  3. 5

                                    That was such a non-answer, full of red herrings. He wanted to know what your cryptocurrency’s electrical consumption is. It’s positioned as an alternative to centralized methods, as Bitcoin is. The centralized methods running on strongly consistent DBs currently do an insane volume of transactions on cheap machines that can be clustered globally if necessary. My approach is a centralized setup with multiple parties checking each other, kind of similar to how multinational finance already works, but with more specific, open protocols to improve on it. That just adds a few more computers for each party… individual, company, or country… that is involved in the process. I saw a diesel generator at Costco for $999 that could cover the energy requirements of a multinational setup of my system that outperforms all cryptocurrency setups.

                                    So, what’s the energy usage of your system? Can I participate without exploding my electric bill at home (or buying that generator)? And, if not, what’s the justification for using that cryptosystem instead of improving on the centralized-with-checking methods multinationals are using right now, which work despite malicious parties?

                                    1. 3

                                      How much more memory efficient is Merit (on the scale of the top 100 countries electricity consumption)?

                                      Sorry, that’s his question. I can answer that easily: it’s not on that scale. My interpretation of that question was that he was making a joke, which is why I didn’t answer it. If derek-jones was serious about that question, I apologize.

                                      As I mentioned, the algorithm is memory-bandwidth bound; I’m seeing half the energy cost on my rig, but I need to do more stringent measurements.

                                      1. 1

                                        More of a pointed remark than a joke. But your reply was full of red herrings, to quote nickpsecurity.

                                        If I am sufficiently well financed that I can consume 10 MW of power, then I will always consume 10 MW. If somebody produces more efficient hashing hardware/software, I will use it to generate more profit, not reduce electricity consumption. Any system that contains a PoW component pushes people to consume as much electricity as they can afford.

                                        1. 1

                                          If somebody produces more efficient hashing hardware/software, I will use it to generate more profit, not reduce electricity consumption.

                                          This is true for any resource and any technology in our global economic system.

                                          I wasn’t trying to reply with red herrings, but to expand the conversation. It’s really interesting that people attack cryptocurrencies for wasting electricity when there is a bigger elephant in the room nobody seems to want to talk about. Everyone knows who butters their bread. Keep in mind I’m not defending wasting electricity, but focusing on electricity alone is like, to use a computer analogy, focusing only on memory and creating garbage collection to deal with it while ignoring other resources like sockets, pipes, etc. That’s why I like C++: it solves the problem for ALL resources, not just one. We need a C++ for the real world ;-)

                                  4. 2

                                    I answered your question more directly; see my response to nickpsecurity.

                                1. 3

                                  Nearly all the data comes from the GNU/Linux Distribution Timeline, which sadly has not been kept up to date.

                                  1. 3

                                    As an ex-compiler writer I have plenty of sympathy for those trying to make a living selling compilers, but times change.

                                    The problem used to be competition from many small compiler companies; then the amount of memory available on developer machines exploded, removing the critical factor that had been holding back open source compilers.

                                    1. 1

                                      I’ve updated the title to include the date, since it matters in the context of the paper.

                                    1. 7

                                      An extreme example of unportable C is the book Mastering C Pointers: Tools for Programming Power, which was castigated recently. To be fair, that book has other flaws rather than being in a different camp, but I think that fuels some of the intensity of passion against it.

                                      This… rather grossly undersells how much is wrong with that book. The author didn’t understand scope, for crying out loud, and never had a grasp of how C organizes memory, even in the high-level, handwavy “C abstract machine” sense the standard is written to.

                                      There are better examples of unportable C, such as pretty much any non-trivial C program written for MS-DOS, especially the ones which did things like manually writing to video memory to get the best graphics performance. Of course, pretty much all embedded C would fit here as well, but you’ll actually be able to get and read the source of some of those MS-DOS programs.

                                      In so doing, the committee had to converge on a computational model that would somehow encompass all targets. This turned out to be quite difficult, because there were a lot of targets out there that would be considered strange and exotic; arithmetic is not even guaranteed to be two’s complement (the alternative is one’s complement), word sizes might not be a power of 2, and so on.

                                      Another example would be saturation semantics for overflow, as opposed to wraparound. DSPs use saturation semantics, so going off the top end of the scale plateaus, instead of causing a weird jagged waveform.
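
                                      In C terms, a DSP’s native add behaves like this sketch (a hypothetical helper; standard C provides nothing built in):

                                          #include <stdint.h>

                                          /* Clamp to the ends of the range instead of wrapping. */
                                          int16_t sat_add16(int16_t a, int16_t b)
                                          {
                                              int32_t s = (int32_t)a + (int32_t)b;
                                              if (s > INT16_MAX) return INT16_MAX;
                                              if (s < INT16_MIN) return INT16_MIN;
                                              return (int16_t)s;
                                          }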

                                      As for the rest, it’s a hard problem. Selectively turning off optimization for specific functions would be useful for some codebases, but aggressive optimization isn’t the only problem here: Optimization doesn’t cause your long type to suddenly be the wrong size to hold a pointer on some machines but not others. Annotating the code with machine-checked assumptions about type size, overflow behavior, and maybe other things would allow intelligent warnings about stupid code, but… well… try to get anyone to do it.
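
                                      To make the long-vs-pointer point concrete, a sketch of the classic trap and the portable C99 spelling:

                                          #include <stdint.h>

                                          void example(void *p)
                                          {
                                              /* Unportable: on LLP64 targets (64-bit Windows) long is
                                                 32 bits, so this silently truncates the pointer; on
                                                 LP64 targets (Linux, macOS) it happens to work. */
                                              long bad = (long)p;

                                              /* Portable: C99's intptr_t, where provided, is wide
                                                 enough to round-trip any object pointer. */
                                              intptr_t good = (intptr_t)p;

                                              (void)bad; (void)good;
                                          }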

                                      1. 6

                                        Re “Mastering C Pointers,” that’s fair. I included it because it’s one of the things that got me thinking about the unportable camp, but I can see how its (agreed, very serious) flaws might detract from the overall argument I’m making and that there might be a better example.

                                        Re saturating arithmetic, well, Rust has it :)

                                        1. 2

                                          My interpretation is that the point of C is that simple C code should lead to simple assembly code. Needing to write SaturatedArithmetic::addWithSaturation(a, b) instead of just a + b in all arithmetic DSP code would be quite annoying, and would simply lead to people using another language.

                                          You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behaviour. The only construct in C that can hide complexity is the function call, which everyone recognises. But if you see some arithmetic, you know it’s just arithmetic.

                                          1. 1

                                            You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behavior. The only construct in C that can hide complexity is the function call, which everyone recognizes. But if you see some arithmetic, you know it’s just arithmetic.

                                            Not to mention that not everything can be overloaded, causing inconsistencies, and some operations in mathematics have operators other than just “+-/*”. The vector dot product “·”, for example. Even if C++ (or any other language) were extended to support more operators, those operators can’t be typed without key composition (“shortcuts”), making them almost undesirable. vec_dot() might require more typing, but it’s reachable to everyone, and operators don’t end up with hidden meanings.
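
                                            (vec_dot() above is just a hypothetical name; a minimal C version is nothing more exotic than this:)

                                                /* More typing than an overloaded "·", but typeable on any
                                                   keyboard and obvious at every call site. */
                                                double vec_dot(const double *a, const double *b, int n)
                                                {
                                                    double sum = 0.0;
                                                    for (int i = 0; i < n; i++)
                                                        sum += a[i] * b[i];
                                                    return sum;
                                                }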

                                            1. 2

                                              Eh, perl6 seems to do just fine with 60 bajillion operators.

                                              1. 2

                                                Perl does have more operators than C, but all of them are operators that can be typed using simple key composition, such as [SHIFT+something]. String concatenation for example.

                                                 My point, together with what @milesrout said, is that some operators (math operators) aren’t easy to type with just [SHIFT+something]. As a result, operator overloading in languages that offer it will always stay in an unfinished state, because it will only cover those operators that are easily composed.

                                        2. 1

                                          Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon UK.

                                          Herbert Schildt’s C: The Complete Reference is often touted as the worst C book ever, here and here.

                                          Perhaps Mastering C Pointers is the worst in its niche (i.e., pointers) and Schildt’s is a more general worst?

                                          1. 2

                                            Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon UK.

                                            So? One of the dangers of picking the wrong textbook is thinking it’s great, and using it to evaluate subsequent works in the field, without knowing it’s shit. Per hypothesis, if it’s your first book, you don’t know enough to question it, and if you think it’s teaching you things, those are the things you’ll go on to know, even if they’re the wrong things. It’s a very pernicious bootstrap problem.

                                            In this case, the book is objectively terrible. Other books being bad doesn’t make it better.

                                            I do agree that Schildt’s book is also terrible.

                                        1. 4

                                          It’s a terrible source. As the wiki says: “The C rules and recommendations in this wiki are a work in progress and reflect the current thinking of the secure coding community. Because this is a development website, many pages are incomplete or contain errors. “

                                          My review of the 2008 edition of the published guidelines pointed out that they were full of errors and omissions. So now they have started a wiki for people to fix their problems.

                                          1. 8

                                            Amusing that this type of person goes unnoticed when many of us tell people what we do. I fall into this category, as I was describing in another thread. What she misses is that many of us aren’t rich and the work isn’t sustainable: many people in the lower to middle classes sacrifice money or status to do deep research and development in a field that interests them and/or that they find necessary. They might also not like the priorities of commercial or government funding groups.

                                            For instance, I thought figuring out how to make computers that don’t fail or get hacked was a thing we desperately needed. I believed both livelihoods and lives were at stake. That we had access to them was a social good that neither the markets nor FOSS were really serving. It was also an interesting, deep rabbit hole of a problem crossing many sub-fields of IT, economics, and psychology. That she misses people without money doing it altruistically surprises me all the more given that she wrote the report on FOSS developers working with little to no money or contributions on critical stuff that mattered to them. Same kind of thing, I think, with different work output.

                                            Still a good write-up that will draw attention to the concept. We might get more people doing it or publishing what they’re doing. I think most of us don’t publish enough. We should accept some of the troubles of that since the ideas get out there more. I also like this quote focusing on the obsessive nature of deep, independent research:

                                            “I understand, then, why researchers flock to the safety of institutions. Imagine studying something that nobody else is studying, for reasons you can’t really articulate, without knowing what the outcome of your work will be. For the truly obsessed person, the need for validation isn’t about ego; it’s about sanity. You want to know there’s some meaning behind the dizzying mental labyrinth that you simultaneously can’t escape and also never want to leave.”

                                            1. 3

                                              I’ve been kicking around the idea of creating a community for independent researchers. At first I thought it’d be mostly PL oriented but I’m starting to think that broadening the reach is better for both emotional support and cross-pollination of ideas. After all, it’s not like the world is teeming with independent researchers, right?

                                              Would you (and anyone else!) be interested in this?

                                              1. 3

                                                Such a community could be great.

                                                There are people doing research for their own private interest who are not setting out to discover anything that is not already known (but might do so accidentally). I would put people who invent new programming languages in this category.

                                                There are people doing research to discover stuff that is not already known (or at least nothing seems to have been published anywhere).

                                                My interest is in discovering stuff that is not yet known.

                                                1. 1

                                                  It’s an interesting idea. I probably wouldn’t join one right now given I’m too overloaded. Maybe later on.

                                                  However, it reminds me of another idea I had for CompSci: a similar site with researchers at lots of universities (or independent) in forums where they could talk about stuff. Also, the non-paywalled papers would be available on it. Any new people at conferences who seemed bright would be invited. My idea was to break the silos that are hiding good ideas, to facilitate cross-pollination among institutions and sub-fields.

                                                2. 3

                                                  What she misses is that many of us aren’t rich and the work isn’t sustainable: many people in the lower to middle classes sacrifice money or status to do deep research and development in a field that interests them and/or that they find necessary. They might also not like the priorities of commercial or government funding groups.

                                                  thank you for this. as someone who gave up the salaried lifestyle to pursue open source contribution, research, and my local community, it is refreshing to hear. even among very close peers and friends, there is a huge misconception that anyone at the upper end of their technical field is comfortably making ends meet, but in reality we often live a lifestyle that more closely resembles that of a starving, sleep-deprived graduate research student.

                                                1. 4

                                                  Research in software engineering does not need lots of hardware; the funding needed to be a gentleman scientist in this field is money to live on and basic computing equipment.

                                                  The startup cost is high; after all, it is necessary to be reasonably expert in the field. But again, this is personal time spent reading a lot and gaining practical experience.

                                                  1. 2

                                                    Even lower barrier than that: many bright folks I met in rural areas, hackers or makers, were on food stamps or living with someone, unemployed and without a car. They could usually get a Wi-Fi-enabled phone or an old laptop that they could use at a nearby McDonald’s or something. Many use data plans, too, just because cell service is a necessity to them. At one point, I went lower, having no PC, phone, or car. I designed on paper or in the dirt, depending on where we were at.

                                                    The reading and practicing, like you said, is what gave us skill. I think peer review and support are just as important, though. I had lots of both once I got online. There are quite a few people out there probably stuck on some subjects, reinventing wheels, or chasing dead ends just because they can’t talk to experienced people.

                                                  1. 4

                                                    It feels like we are through the valley of releases that needed to restore Python 2 compatibility. Good to see releases with a few innovations that will make my life easier.

                                                    1. 2

                                                      Some interesting data on Python 2/3 usage and transition (or not):

                                                      1. 4

                                                        I admit I didn’t read through it but only skimmed the pages. I think they measure open source libraries mostly, and I kind of expect those to maintain compatibility for a while. Some still do for Python 2.6. So while their data is valid, it doesn’t say much about actual Python applications / services.

                                                        My subjective take on Python 3 migrations:

                                                        • Migration didn’t really happen before Python 3.4. With 3.0 and 3.2 there wasn’t really a “win”, and Linux distributions still had Python 2.7 as their default.
                                                        • In the beginning of 2017 I spent a few days getting the CI infrastructure at my then employer (~100 Python devs) into shape. While the Docker builds were running, I spent some time applying python-modernize to a few of the company’s shared libraries, then fixed a few remaining issues by hand. It wasn’t much trouble.
                                                        • In 2018 I see more and more Python projects starting off on Python 3.x, whereas in 2015 developers would have chosen 2.7 if in doubt.

                                                        Oh, and I am still furious about them renaming .iteritems() to .items() instead of at least leaving the old name as an alias.

                                                    1. 8

                                                      I have always thought that Herbert Schildt’s C books contained more mistakes than any other C books out there.

                                                      Clive Feather’s ‘famous’ review, and a review of a later book by seebs (of obfuscated C fame): C: The Complete Nonsense.

                                                      1. 3

                                                        Indeed, Schildt’s C books are quite widely maligned. Unfortunately I read them when I was at school, keen to learn C - I didn’t know any better at the time (this was in the days when the web had only just been invented and hadn’t left CERN yet). Herb Schildt, Ray Duncan, Charles Petzold (Windows logo tattoo and all) and Michael Abrash were the programming heroes of my youth.

                                                      1. -5

                                                        It is only a disaster if your business relies on making use of other people's work, in which they own the copyright.

                                                        Not everybody can afford to create stuff and give it away for free, and there are plenty of people who want to earn money from their creative work.

                                                        Those who have made a living from stealing other people's material are up in arms that their free lunch is not going to be free anymore.

                                                        1. 17

                                                          Or you run any kind of site where users can input anything that another visitor can see. Not just video and file sharing sites; Lobsters users could paste copyrighted content into a comment/PM and I’d be liable for not having a system implementing some kind of copyright controls.

                                                          (To say nothing of Article 11 wanting us to start paying the news sites we link to for the privilege of sending them traffic.)

                                                          1. -2

                                                            If somebody posted something here that I owned the copyright to, and I asked Lobsters admin to remove the material, then I imagine they would. If somebody kept posting this material they could be banned.

                                                            Or are you saying that the Lobsters’ site should be a place where anybody can post copyright material, without any recourse by the copyright holder?

                                                            1. 13

                                                              The new law changes this standard safe harbor behavior. Lobsters (me) is presumptively at fault for copyright infringement for not proactively checking for possibly-copyrighted material before posting. So yes, your scenario is the current, reasonable law and accurately describes why everyone is concerned about this change.

                                                              1. -2

                                                                Lots of FUD is being generated by those who will lose out. Copyright holders are not making much noise about the fact that they will probably make some money (or rather lose less).

                                                                Some good points about what is going on.

                                                              2. 4

                                                                The law isn't about that, though. The new law doesn't say admins must take down on request (that's already the case under existing law) but rather that they must have an AI system that prevents any infringing uploads from happening in the first place.

                                                                The link tax is a much bigger problem, especially for Lobsters, but both articles are very bad.

                                                                1. 1

                                                                  AI system that prevents any infringing uploads from happening in the first place.

                                                                  How is that any different from what @pushcx said? As the owner/operator of lobste.rs, he would have to abide by this law and produce, or buy access to, some sort of copyrighted-work database in order to test all content created on Lobsters against it.

                                                                  That’s not going to make it easy for startups. That’s not going to make it easy for privately owned, independent side projects. That’s just going to hurt.
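
                                                                  To make the burden concrete, here is a minimal sketch (all names and data made up) of the cheapest conceivable filter: exact-hash matching against a fingerprint database. Exact hashing is trivially evaded by changing a single byte, and anything resembling real infringement detection needs fuzzy or perceptual matching, which is exactly what small sites cannot afford to build or license:

                                                                      import hashlib

                                                                      # Hypothetical database of fingerprints of known copyrighted works.
                                                                      KNOWN_WORKS = {
                                                                          hashlib.sha256(b"some copyrighted text").hexdigest(),
                                                                      }

                                                                      def upload_allowed(content: bytes) -> bool:
                                                                          """Reject an upload whose exact fingerprint matches a known work."""
                                                                          return hashlib.sha256(content).hexdigest() not in KNOWN_WORKS

                                                                      print(upload_allowed(b"an original comment"))    # True
                                                                      print(upload_allowed(b"some copyrighted text"))  # False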

                                                                  1. 2

                                                                    ALSO, you'd better not quote any part of my message if you reply, because I could, apparently, legitimately sue Lobsters for not enforcing my copyright, i.e. there's no such thing as fair use anymore.

                                                                    (yes, that’s a stretch, but that seems to be the basic threat)

                                                                    1. 1

                                                                      I replied before @pushcx and yes, it seems we agree on how bad it is :)

                                                                      1. 2

                                                                        Blargh! I am sorry. I misread the thread and thought you were replying to pushcx.

                                                              3. 6

                                                                Or Lobsters gets a fine when you submit a link to any European news site.

                                                                1. 1

                                                                  What's worse is that people will devise a way to signal which content is freely linkable and which only under license. This will limit quality news dissemination and strengthen the position of fake news. This will help kill the EU. Sad, right?

                                                                2. 1

                                                                  Most probably, Lobsters will not be able to post most of the links.

                                                                1. 2

                                                                  Ok, the summer project has just started and he has not read the standard yet.

                                                                  "undefined behavior: behavior, upon use of a nonportable or erroneous program construct or of erroneous data, for which this International Standard imposes no requirements"

                                                                  Let's hope he reads the C standard before implementing any checks. There is a freely available PDF that is not the actual C standard you buy in the shops (which has a fancy ISO cover on it).

                                                                  1. 2

                                                                    “A minimalist knowledge approach to software engineering is cost effective because most code does not exist long enough to make it worthwhile investing in reducing future maintenance costs. Yes, it is more expensive for those that survive to become commonly used, but think of all the savings from not investing in those that did not survive.”

                                                                    This is something that I'm probably going to have to think more on. @derek-jones might even have data to support it in his collection. My data, though, indicated that most real-world projects from the 1960's up to present times run into problems late in the lifecycle that they then have to fix. Those fixes usually cost a lot in money or reputation. Some groups made small, upfront investments to prevent most problems like that; they claim it usually paid off in various ways, especially if the software was long-lasting. There were also times when the quality investment cost more overall than a thrown-together project would have.

                                                                    Another issue is that pervasively buggy software has conditioned users to expect it as normal. This reduces the demand for, or competitive advantage of, high-quality, mass-market software. Many firms, especially small ones or startups, can profitably supply buggy software so long as it meets a need and they fix the bugs. In the enterprise market, you can even sell software that barely works, or doesn't work at all, so long as it appears to meet a need and makes someone in the company look good. So, this needs to be factored into the decision of whether to engineer software vs throw it together.

                                                                    I still say lean toward well-documented, easy-to-change software just in case you get stuck with it. You can also charge more in many markets with a better rep. Use the amount of QA that the market will pay for. If they'll pay nothing, use stuff that costs about nothing, like interface checks, usage-based testing, and fuzzing (see the sketch below). If they'll pay a significant amount, add more design/code review, analysis/testing, slices of specialist talent (e.g. UI or security), improvements to dependencies, and so on.
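
                                                                    A minimal sketch of that costs-about-nothing tier, using Python's hypothesis library for property-based fuzzing plus a plain interface check (parse_port is a made-up function, just for illustration; run with pytest):

                                                                        from hypothesis import given, strategies as st

                                                                        def parse_port(s):
                                                                            """Parse a TCP port number, enforcing its contract."""
                                                                            port = int(s)              # raises ValueError on garbage input
                                                                            if not 0 < port < 65536:   # interface check on the result
                                                                                raise ValueError("port out of range: %d" % port)
                                                                            return port

                                                                        @given(st.integers(min_value=1, max_value=65535))
                                                                        def test_roundtrip(n):
                                                                            # Usage-based angle: any in-range port should round-trip.
                                                                            assert parse_port(str(n)) == n

                                                                        @given(st.text())
                                                                        def test_fails_cleanly(s):
                                                                            # Fuzz arbitrary text: rejection is fine, but only via ValueError,
                                                                            # never an unexpected exception.
                                                                            try:
                                                                                parse_port(s)
                                                                            except ValueError:
                                                                                pass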

                                                                    1. 4

                                                                      Cost/benefit analysis for applications; there is also a less rigorous analysis.

                                                                      Code containing a fault is more likely to be modified (removing the fault as a side-effect) than to have the fault reported (of course, the fault may be experienced and not reported); see Figure 10.75.

                                                                      Other kinds of related data are currently being analysed.

                                                                      Microsoft/Intel is responsible for conditioning users to treat buggy software as normal. When companies paid lots of money for their hardware, they expected the software to work. Success with mass-market software meant getting good-enough software out the door, or being wiped out by the competition.

                                                                      1. 2

                                                                        I think IBM’s mainframe data might not fit your argument. IBM kept coming up with things like Fagan Inspections, Cleanroom, formal specs, and safe languages. They often experimented on mainframe software. A good deal of it was written in high-level languages like PL/I and PL/S that prevent many problems a shop using C might have. They have lifecycles that include design and review steps. In other words, IBM was regularly doing upfront investments to reduce maintenance costs down the line. The investments varied depending on which component we’re talking about. The fact they were trying stuff like that should disqualify them, though, as a baseline. A much better example would be Microsoft doing fast-moving, high-feature development in C or C++ before and after introducing SDL and other reliability tools. It made a huge difference.

                                                                        Other issues are backward compatibility and lock-in. The old behavior had to be preserved as new developments happened. The two companies also made their data formats and protocols closed and complicated to make migration difficult. The result is that both IBM and Microsoft eventually developed a customer base that couldn't move. Their development practices on the maintenance side probably factor this in. So, we might need multiple baselines, with some allowing lock-in and some being companies that can lose customers at any time. I expect the upfront vs fix-or-change-later decisions to be more interesting in the latter.

                                                                        1. 2

                                                                          The data is on customer application usage that ran on IBM mainframes (or at least plug compatibles).

                                                                          1. 1

                                                                            Oh Ok. So, mainframe apps rather than mainframe systems themselves. That would be fine.

                                                                    1. 20

                                                                      Having got one business prediction right, I will stick my neck out and make another one (plus the obvious one that the world will not end because of this purchase).

                                                                      1. -3

                                                                        Since we are doing wild predictions … here is one I'll stick my neck out on … You're probably young, early in your career, say <5 years' experience in the industry. Certainly not in the industry since the '90s or early 2000s. It's fine; there is nothing I can say on this topic that will change anything whatsoever. But save this thread. Queue it up for the day after your 30th birthday, or say in 7 years. You'll be amused by the gap between what the you of today believes and what the you of tomorrow will have learned in this industry, and specifically about this purchase.

                                                                        1. 22

                                                                          Checking that one is too easy after he linked to a blog with posts dating back 10 years. And checking out posts from 2008… there it mentions not having taught programming for 25 years.

                                                                          1. 19

                                                                            IIRC @derek-jones was on the C99 standardization committee.

                                                                            1. 3

                                                                              The beauty of predictions is their capability of being wrong. I was wrong, surprised to be so, but wrong.
                                                                              However, another prediction is still undecided: the impact of MS buying GitHub, and how they will wield their influence over it compared to the counterfactual. I'm seriously not a tin-foil-hat kind of guy, but MS is just never a good thing when they step into any area, whether the Internet, browsers, software development, OSs, you name it. It is always a net negative (not from a business standpoint, of course) for the overall "good" of that respective area. Far more harm than good will result.

                                                                              1. 5

                                                                                I still don’t get your reply to Derek. He never claimed that MS purchases are good for the community. In fact, he is predicting an EDG buyout solely because he thinks it will allow for vendor lock-in.

                                                                                1. 4

                                                                                  I believe Derek triggered him with the line

                                                                                  (plus the obvious one that the world will not end because of this purchase)

                                                                                  where he probably refers to his experience from earlier (pre-git, pre-Internet) times and how there will be other ways for open source and development to carry on (back to mailing lists, GitLab, …).

                                                                                  But Grey, when hearing about GitHub not being changed too much (as Derek also stated in his posting: "sluggish integration", but also "more data friendly"), remembered Microsoft's history (they were anti-open-source and are working a lot on changing their image). GitHub, being an "open-source community", is therefore in danger of getting swallowed by this "anti-open-source business".

                                                                                  And I can understand getting emotional about such things. And emotion kills rationality. Which probably led to this misunderstanding.

                                                                          1. -2

                                                                            That domain name is the worst thing ever; so many hyphens.

                                                                            1. 9

                                                                              No, that’s this one.

                                                                          1. 2

                                                                            I'd offer to buy the companies or groups controlling most of the mining power. It's usually an oligopoly, like in the for-profit, non-crypto markets. It might even be possible to pay off individual executives to cut deals that lower the price. The resulting buy would probably be way, way, way less than $100 billion for Bitcoin. Hell, it might be less than a billion. That's owning it, too, rather than a bribe to sabotage it in some way that looks like an accident. That could be mere millions.

                                                                            1. 2

                                                                              Mining power is actually quite decentralized, because for various reasons cheap electricity is decentralized. Mining pools are an oligopoly, but mining pools don't own mining power: miners can and will leave pools if something happens.

                                                                              1. 2

                                                                                No, mining power is very centralized.

                                                                                https://lobste.rs/s/yawv5u/state_cryptocurrency_mining

                                                                                1. 1

                                                                                  Your link agrees with me. To quote:

                                                                                  Mining farms are perhaps the one area where manufacturers and economies of scale are not dominant. Good electricity deals tend to come in smaller packages, tend to be distributed around the world, and tend to be difficult to find and each involve unique circumstances.

                                                                                  1. 1

                                                                                    The link agrees with one point: good electricity deals. But economies of scale apply to the building of devices that use less electricity.

                                                                                2. 1

                                                                                  I appreciate the clarification.