There are PoW-less cryptocurrencies being developed. If these gain traction and turn out to be secure, then we can leave behind the first generation of cryptos based on PoW.
These are the two I know of, which also seem to have serious teams backing them.
Nano: https://nano.org/en
Iota: https://www.iota.org/
They’re based on new architectures that make it possible to dispense with the concept of a miner, providing the services miners offer in different ways. For example, Nano uses proof of stake: when there are two conflicting transactions, i.e. a double spend, the network votes to resolve the conflict, and each node has a voting stake proportional to the amount of currency it holds plus the balances of the accounts that delegate their vote to it. Conflicts are thus resolved by vote. Iota uses a DAG architecture in which the cost of making a transaction is performing PoW in the form of “confirmations”; the most robust transactions are those with the largest number of confirmations. Both currencies have a fixed supply, so no new coins will ever be produced: all the coins that will exist were generated in the first block.
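To make the stake-weighted voting idea concrete, here is a minimal sketch of how conflicting transactions might be resolved by delegated stake. This is not Nano’s actual protocol; the account names, node names, and data shapes are all invented for illustration:

```python
from collections import defaultdict

def resolve_conflict(balances, representatives, votes):
    """Pick the winner among conflicting transactions by stake-weighted vote.

    balances:        {account: amount of currency held}
    representatives: {account: node the account delegates its vote to}
    votes:           {node: tx_id the node endorses}
    """
    # A node's voting weight is the total balance delegated to it.
    weight = defaultdict(int)
    for account, amount in balances.items():
        weight[representatives[account]] += amount

    # Tally each conflicting transaction's stake-weighted support.
    tally = defaultdict(int)
    for node, tx in votes.items():
        tally[tx] += weight[node]

    # The transaction backed by the most stake wins.
    return max(tally, key=tally.get)
```

Under this toy model, an account holding 60% of the supply decides every conflict on its own, which is exactly the 51%-ownership concern raised below.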
The problem with proof of stake is that once an entity has 51%, they own the currency forever. With proof of work, it is a continual effort to maintain 51% (this is covered in the linked article).
A quick look at IOTA (not knowing anything about it) suggests it does not involve a blockchain, and it’s not on Wikipedia.
I see both PoW and PoS as protocols that depend on a rationality assumption: those that hold the power will act rationally, and thus will want to preserve the value of their investment and, as a consequence, protect the network. Without the rationality assumption, the top N miners could combine their hashing power to destroy the network. What stops them from doing this?
Whether IOTA or Nano are blockchains or not isn’t important, I think; what matters is that they satisfy (theoretically, and in Nano’s case somewhat practically) certain properties that allow them to function as decentralized cryptocurrencies.
Really good series. I strongly recommend it if you’re into machine learning and don’t have a strong understanding of what linear algebra means geometrically.
I’m most of the way through the series and highly recommend it as well. I have so much better intuition for these things than I did after just taking a pretty bad linear algebra class in college.
I think the developer is going through a phase every programmer should go through. At first, when you don’t have much experience, you don’t have the knowledge to judge different methodologies, frameworks, or whatever; all you can do is follow the crowd and use what is popular. Once you’ve gained enough expertise, you start to become sceptical about what you read; in fact, you might be completely reluctant to hear anyone else’s praise of the next big thing that will solve your problems. So for some time you stick with what you know and stop trying new things. I think the developer is at this stage. Afterwards, one starts to understand that trying new tools is important, but not all tools are good: there are no one-size-fits-all solutions, and everything must be used in the proper manner, time, and place. The ability to use the right framework or methodology when it is objectively the best choice is a large part of what makes a good programmer.
It seems like many of the issues in this article and those like it can be attributed to the absurdly low barrier to entry for software companies. If you have no money, no likely revenue, and nothing holding you accountable, are you a company or a club? Bio-tech startups certainly don’t work this way.
Tangentially related: I don’t remember ever hearing anyone say “How can we get more people into chemical engineering?” or seeing any Learn-to-play-with-deadly-chemicals-in-12-weeks “engineering” courses.
The article’s critique was anchored specifically on the comments of the founders of Paypal, Facebook and 42Floors. “No money, no likely revenue, and nothing holding you accountable” doesn’t really seem to apply.
Bio-tech startups would seem to have diversity problems that mirror those of software startups fairly closely.
“You can’t underdress to a Bio-tech interview” He was failing the come-over-for-dungeons-and-dragons test and he didn’t even know it…
I know of bio-tech startups that behave very similarly to this. Perhaps not in the clubby, hipster way, but in a manner that suggests a similar lack of professionalism in social aspects.
I don’t remember ever hearing anyone say
There are certainly efforts in other fields to encourage more people to get into them.
But, and maybe this is just personal bias, chemical engineering does not underlie so much of modern socio/political/economic power in the way that software does. Software is eating the world, and without some kind of base literacy, it’s increasingly hard to understand what’s going on around us. That’s why I think more people should have at least a tiny understanding of how computing works.
I would maybe argue that this is personal bias. Understanding chemistry is the basis of petroleum engineering, pharmaceuticals, materials science, and a bunch of other really important stuff (generating and storing electrical energy efficiently, fertilizers/pesticides, etc.). Between them those things underlie a vastly larger fraction of socio/political/economic power in the world than software.
I think it’s easy to get an inflated idea of the importance of something (e.g. software) when you live and breathe it every day.
If I had to guess why there’s such an outsize effort to get people into software I would probably say it’s just because there happens to be a labor shortage at the moment, but I don’t really know ¯\_(ツ)_/¯.
Replying to both you and your sibling, computing is what ends up controlling all those important things, however. And communication is a greater power than any particular physical good, and that’s all computerized at this point.
I do think all fields are important, regardless.
Supporting @steveklabnik’s point, from the original Software is Eating the World essay:
Oil and gas companies were early innovators in supercomputing and data visualization and analysis, which are crucial to today’s oil and gas exploration efforts. Agriculture is increasingly powered by software as well, including satellite analysis of soils linked to per-acre seed selection software algorithms
Fair point, although you can’t communicate on any modern medium without oil and electricity to power it (nor can you even make a computer!); there’s sort of a chicken and egg thing here :P
I guess I don’t really see the argument that computing is a more reasonable thing for everyone to understand the basics of than chemistry or physics or biology or what have you; all of it is important and runs the world in some way. Maybe we should encourage people to learn a bit of everything :)
you can’t communicate on any modern medium without oil and electricity to power it
This is absolutely true, and something I worry about way more than I should, probably.
Maybe we should encourage people to learn a bit of everything :)
Yes, very much this. This over-focus on STEM is incredibly harmful :(
From here it looks like most jobs over the next century will require skills we currently think of as software skills. In the same way that most jobs today require some kind of “functional computer literacy”, and most jobs last century required ordinary literacy. The skills of programming seem generally applicable, in a way that something like materials science (while fascinating, and underlying lots of recent engineering advances) isn’t. Will knowing chemistry/physics/biology make you a better accountant/architect/artist? Maybe, but the connection seems more direct and obvious for computing.
I’d tend to agree, which is what I posited in my OP: the push for people to learn computing is more about the labor market than anything else.
I think that’s totally personal bias. Oil has more power than computing could ever dream of and no one is saying “teach all our kids petroleum engineering.”
I think that the importance of software lies in the fact that it works as a tool to enhance people’s capabilities. While other fields may be directly involved in producing the goods and services that are fundamental to supporting our modern lifestyle, software has an impact not only on those fields but on almost everything else we do, big or small. Because of this, its reach is at least much larger than that of any other field I know of.
I don’t understand the purpose of his letter. What is he hoping to achieve?
All the facts he states about Google are true because Google creates great software that makes people’s lives easier. This happens because they try to hire the best people they can find, offering them all the benefits for which Google is so well known. The day Google turns evidently evil, people will start leaving and governments will start applying restrictions to the company’s growth, for example.
I suppose that the author of the letter, like many people whose businesses depend on Google’s services, feels uneasy because he now finds that he’s too dependent on the company. Well, who’s to blame for that?
To the author, I would recommend the same thing I recommend to friends who feel uncomfortable with Google’s power: DuckDuckGo, a search engine that in my opinion is very good.
Well, in my opinion the point is that you are losing your freedom to decide for yourself, as he says:
recommending to an opponent of nuclear power that he just stop using electricity
or as shown here
Publishers who rely on Google to drive traffic to their websites cannot solve their dependence on Google so easily.
I’m working on my project https://www.collbits.com. Wording still needs more work, since English is not my native language. Feedback is appreciated!
Having been accepted into UC Berkeley’s Ph.D. program, I’ve decided it’s high time to start learning Chisel. I’m working on writing a vector processor in the language. So far, I’ve implemented pipelined floating point units and a bi-directional crossbar switch.
I was actually planning on going to grad school even before taking the full-time position at Amazon. Since I got my undergraduate degree a semester early, I decided to work for the 8 months prior to starting grad school so that I could get experience and save up some money.
I decided to go for the Ph.D. because I find computer architecture really interesting, but a Ph.D. is more or less a basic requirement for being a computer architect.
Development: OSX, iTerm 2, zsh + oh-my-zsh, Vim + spf13, TotalTerminal (when I need to exec one-time commands or if I need to read documentation + try stuff on the command-line at the same time) and git.
Agreed. My solution was to build my own company and keep the idiots out. It’s been working well for the past 10 years, at least.
I’d love to hear about the company and the story behind it – what’s your hiring technique, what kind of projects are you working on, which technologies and processes you use?
I’m trying to do something similar myself and I have to admit we do have our moments resembling the ones in the article (though I like to believe they’re not as drastic :)).
I could write a post, but it would probably be one of the most boring “successful” (so far) business stories around. The basics:
Many of the above may not transfer to a product company, but it’s certainly possible with a carefully-grown, bootstrapped one.
As a software engineering student, while reading I thought to myself: “This won’t happen to me, I’m going to make my own company!”. But on second thought, is it really possible to fight the system? Don’t you have deadlines to meet and requirements to implement? How do you manage?
Besides programming, a professional developer will also be skilled in managing requirements and deadlines. Absurd requirements and inhuman deadlines aren’t good for business on either end.
For someone who knows absolutely nothing about gaming, World of Warcraft, or this thing in particular… what is this? Can someone explain?
World of Warcraft is a massively popular 16-year-old game, maybe one of the most popular games ever. Since its launch in 2004 it has been changing and evolving into what it is today, which is completely different from what it was at its inception. Given that a large number of people would like to play Vanilla WoW, that is, the first version of the game before any expansion was released, Blizzard has decided to roll out a “classic” version with all the content prior to the first expansion. That expansion was released in late 2006, and since then there have been many more.

Before the company’s official announcement that it would be releasing this classic version, fans made many requests for it, but Blizzard turned them down citing arguments such as “Vanilla WoW doesn’t exist anymore since the codebase has continued to evolve” and “Vanilla WoW would be looking back and we want to move forward”. However, a fan-run Vanilla WoW private server named Nostalrius, maintained by fans for fans, gained such popularity that at its peak it had more than 100k players. Sadly, it had to be closed in 2016 after Blizzard sent its operators a cease and desist order. It would seem that from the whole episode the company realized that there actually was a market for a classic WoW, and they eventually changed their mind.
World of Warcraft is a popular commercial subscription-based cloud-hosted enterprise legacy app featuring a low grade CRM system married to a highly complex logistics system in a standard 3 tier architecture deployed in a fully sharded configuration. Like many legacy systems, it has undergone significant schema mutation over the course of its deployed lifecycle in response to customer demand. Notably, it started out with a mostly-denormalized schema and, with the advent of improved database performance, a better understanding of the customer base’s requirement envelope, and feature creep, it has moved towards Codd’s 3rd normal form.
As with many legacy apps, some customers’ business needs mandate that they stay pinned to older versions of the app. Interestingly, customers have here asked that an earlier version of a cloud-provided app be made available 12 years later, which poses some interesting issues having to do with incompatible schema migration. Given that the app is also written in a mix of obscure legacy languages, the traditional approach of simply migrating the queries and schema together is technically formidable.
One established practice here is to create a proxy facade layer. In this pattern, you keep the interface to the legacy client application exactly the way it is, but create an intermediate layer which translates the db calls to and from the normalized format. This incurs round trip cost and bugs are common in edge cases, especially in frequently-undocumented minor shifts in API and field meaning, and especially given the expected low coverage of unit and functional tests in a 12 year old codebase. This technique is frequently overused owing to underestimation of the cost and time complexity of ferreting out the edge cases.
The other established practice is to perform a one-time wholesale schema migration, normally done either through an ETL toolchain like Informatica, or more commonly through hand-written scripts. This approach frequently takes more developer time than the facade approach, owing to needing to “get it right” essentially all-at-once, and having a very long development loop.
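A toy version of the “hand-written script” approach might look like the following. Again, the legacy table and target schema are invented for illustration; a real migration would also handle the edge cases that make this approach expensive:

```python
import sqlite3

def migrate(conn):
    """One-time wholesale migration: read every denormalized legacy row,
    split it into normalized tables, and verify counts before committing."""
    conn.executescript("""
        CREATE TABLE characters (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE inventories (character_id INTEGER, item TEXT);
    """)
    rows = conn.execute(
        "SELECT id, name, items_csv FROM legacy_players").fetchall()
    for pid, name, items_csv in rows:
        conn.execute(
            "INSERT INTO characters (id, name) VALUES (?, ?)", (pid, name))
        # The legacy schema inlined the inventory as a CSV blob;
        # normalize it into one row per item (skipping empty entries).
        for item in filter(None, items_csv.split(",")):
            conn.execute(
                "INSERT INTO inventories (character_id, item) VALUES (?, ?)",
                (pid, item))
    # A crisp definition of success: every legacy row must appear
    # in the new schema, or the whole migration is rejected.
    migrated, = conn.execute("SELECT COUNT(*) FROM characters").fetchone()
    assert migrated == len(rows), "migration incomplete"
    conn.commit()
```

The final count check is a stand-in for the “crisp definition of what success looks like” that the next paragraph argues such programs need: the all-at-once nature of the approach means you only find out whether you got it right at the very end.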
Whatever the technique used, schema migration programs of this scope need a crisp definition of what success looks like that’s clearly understood by all the involved developers, project managers, data specialists, and product leaders. Too frequently, these types of programs fail owing to incomplete specification and lack of clearly defined ownership boundaries and deliverable dependencies. The industry sector in which this legacy app resides is at greater than average risk for failure of high-scope projects due to fundamental and persistent organizational immaturity and improperly managed program scopes.
Also, they better not nerf fear, because rogues were super OP in vanilla and getting the full 40 down the chain to rag with portals was tough enough.
As someone who levelled through Stranglethorn Vale via painstaking underwater + Unending Breath grinds in order to escape OP rogue stunlock love, I say to you: Bravo, sir! Also, f**k the debuff cap.