1. 10

    Kubernetes has the ability to run jobs on a cron schedule, and you can launch one-off, run-to-completion Pods as tasks.
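
    For example, a minimal CronJob manifest sketch (the name, schedule, and image here are hypothetical):

    ```yaml
    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report              # hypothetical name
    spec:
      schedule: "0 2 * * *"             # every day at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never      # run to completion, don't restart
              containers:
                - name: report
                  image: example.com/report:latest   # hypothetical image
    ```

    One-off tasks use the same run-to-completion machinery via kind: Job. (Before Kubernetes 1.21, CronJob lived under batch/v1beta1.)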

    1. 2

      This is what we do too.

    1. 12

      I built a company on a .net stack a long time ago; it was fine. The biggest problem is SQL Server used to cost a lot of money and was kind of a waste in the face of postgres. As far as the programming ecosystem; C# as a language is great, but so much of the standard library and 3rd party libraries are old and crusty. Some were legitimately great though, like Dapper for talking to the DB, and ASP.net MVC was a decent web framework.

      1. 8

        Yup. Licensing cost is the elephant in the room here. SQL Server, Windows Server itself, IIS, it all adds up pretty quick, and if you’re running a startup, paying for such licenses at scale is maybe not so attractive.

        1. 10

          Exactly. It’s unclear why you’d pay for SQL Server and IIS when PostgreSQL and Apache or nginx cost nothing to license. Many people would argue that the latter are superior anyway. There are plenty of companies that will provide paid support for those products too.

      1. 1

        WASD 87 key with Cherry MX Blues. Not really ideal in an office environment but most people use headphones and it hasn’t been too bothersome to my neighbors afaik.

        1. 3

          The nodes gossip periodically to ensure the leader is still there. If the leader ever dies, a new leader will be elected through a simple protocol that uses random sleeps and leader declarations. While this is simple and unsophisticated, it is easy to reason about and understand, and it works effectively at scale.

          The sound of three Byzantine generals cackling in the distance was heard right before the point of sale systems mysteriously crashed.
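
          As a toy sketch (assuming nothing about any particular system’s real protocol), the random-sleep election described above might look like:

          ```go
          package main

          import (
              "fmt"
              "math/rand"
              "sync"
              "time"
          )

          // electLeader: each node sleeps a random interval, then tries to
          // declare itself leader; the first declaration wins, and nodes
          // that wake later simply observe the existing leader.
          func electLeader(nodeIDs []int) int {
              var (
                  mu     sync.Mutex
                  leader = -1
                  wg     sync.WaitGroup
              )
              for _, id := range nodeIDs {
                  wg.Add(1)
                  go func(id int) {
                      defer wg.Done()
                      time.Sleep(time.Duration(rand.Intn(50)) * time.Millisecond)
                      mu.Lock()
                      if leader == -1 {
                          leader = id // first awake declares itself leader
                      }
                      mu.Unlock()
                  }(id)
              }
              wg.Wait()
              return leader
          }

          func main() {
              fmt.Println("leader:", electLeader([]int{1, 2, 3, 4, 5}))
          }
          ```

          Gossip and failure detection are elided; the point is only why the scheme is easy to reason about: whoever wakes first wins, with ties broken by the lock.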

          1. 1

            It reads like a trimmed-down version of how Raft elections work, minus the log shipping.

            1. 1

              Kubernetes uses etcd, which implements Raft, IIRC.

            1. 2

              I wrote a quite small (47 LOC) Haskell solution for this a couple of years ago. It turns out solving Sudokus is quite simple. Blogpost/Code. The basis is just going over the whole grid, pruning the possibilities for each field and filling in the ones with just one possible value. Rinse and repeat until you’re done. A very fun exercise for learning a new language as well.

              1. 3

                Um, resolving single candidates is not enough to solve most Sudoku puzzles.

                1. 2

                  Yeah, generally on “hard” boards you’ll have to randomly pick a value for a couple of cells, then do a DFS on that choice and subsequent choices.
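
                  A minimal sketch of that guess-and-backtrack approach (plain DFS with no candidate pruning), in Go for illustration:

                  ```go
                  package main

                  import "fmt"

                  // Grid is 81 ints in row-major order, 0 = empty.
                  // valid reports whether value v can go at pos without
                  // conflicting in its row, column, or 3x3 box.
                  func valid(g *[81]int, pos, v int) bool {
                      r, c := pos/9, pos%9
                      br, bc := (r/3)*3, (c/3)*3
                      for i := 0; i < 9; i++ {
                          if g[r*9+i] == v || g[i*9+c] == v || g[(br+i/3)*9+bc+i%3] == v {
                              return false
                          }
                      }
                      return true
                  }

                  // solve: find an empty cell, try each non-conflicting
                  // value, recurse, and undo the guess on failure.
                  func solve(g *[81]int) bool {
                      for pos := 0; pos < 81; pos++ {
                          if g[pos] != 0 {
                              continue
                          }
                          for v := 1; v <= 9; v++ {
                              if valid(g, pos, v) {
                                  g[pos] = v
                                  if solve(g) {
                                      return true
                                  }
                                  g[pos] = 0
                              }
                          }
                          return false // dead end: no value fits this cell
                      }
                      return true // no empty cells left
                  }

                  func parse(s string) (g [81]int) {
                      for i, ch := range s {
                          g[i] = int(ch - '0')
                      }
                      return
                  }

                  func main() {
                      // A well-known example puzzle (from Wikipedia's Sudoku article).
                      g := parse("530070000600195000098000060800060003400803001700020006060000280000419005000080079")
                      if solve(&g) {
                          fmt.Println(g[:9]) // first row of the solved grid
                      }
                  }
                  ```

                  Pruning single candidates first, as described above, just shrinks the search tree; the DFS is what guarantees a solution is found.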

                  1. 2

                    Depends on the level of inference one is willing to implement.

                    Full inference for each row/column/square individually, plus something called shaving (also known as singleton arc consistency, among several other names), which runs the hypothesis test “if this square were assigned this value, would that lead to an inconsistency?” for each square/value pair, is empirically enough to solve all 9x9 Sudokus without search. For details, see Sudoku as a Constraint Problem by Helmut Simonis.

                    1. 1

                      That just sounds hugely frustrating. I usually just solve them by elimination, but I don’t really do super hard ones.

                1. 2

                  Something I’ve been trying to do lately: get as much of my workflow to work with default vim settings as possible. It helps a ton with portability; I can operate a random vim setup on a jumpbox more easily than if I relied on a bunch of custom maps and functions in my vimrc.

                  1. 9

                    I’m not qualified to make any judgment on the technical merits of several dependency management solutions, but as someone working primarily in Go, the churn of solutions is starting to have a real cognitive cost.

                    1. 6

                      Some of the solutions suggested by a couple of the Go devs in that “thread” sound… almost surreal to me.

                      My favorite one so far:

                      We’ve been discussing some sort of go release command that both makes releases/tagging easy, but also checks API compatibility (like the Go-internal go tool api checker I wrote for Go releases). It might also be able to query godoc.org and find callers of your package and run their tests against your new version too at pre-release time, before any tag is pushed. etc.
                      https://github.com/golang/go/issues/24301#issuecomment-390788506

                      With all the cloud providers starting to offer pay-by-the-second containers-as-a-service, I see no reason we couldn’t provide this as an open source tool that anybody can run and pay the $0.57 or $1.34 they need to to run a bazillion tests over a bunch of hosts for a few minutes. There’s not much Google secret sauce when it comes to running tests.
                      https://github.com/golang/go/issues/24301#issuecomment-390790036

                      That sounds… kind of crazy for anyone that isn’t Google scale or doesn’t have Google money.
                      Are the Go devs just /that/ divorced from the (non Google) reality that the rest of us live in?

                      1. 10

                        Kind of crazy, but not super crazy. As another example, consider Rust’s crater tool. When the Rust team are trying to evaluate the impact of fixing a syntax quirk or behavioural bug, they make a version of the Rust compiler with the change and a version without, and boot up crater to test every publicly available Rust package with both compiler versions to see if anything breaks that wasn’t already broken.

                        crater runs on Mozilla’s batch-job infrastructure (TaskCluster), and Mozilla is much, much smaller than Google scale. On the other hand, they’re still bigger than a lot of organisations, and I believe a crater run can take a few days to complete, so it’s going to be a lot more than “$1.34 … for a few minutes” on public cloud infrastructure.

                        1. 1

                          I get the spirit of those responses; we’re getting to the point with cloud services where that kind of integration test suite could happen cheaply.

                          But it is not the answer to the problems that prompted those responses.

                          Dependency management is hard, and there isn’t a perfect solution. MVS is a cool approach and I’m curious how it shakes out in practice, but to OP’s point, I’m not sure I can do another switch like the ones we’ve done up to now:

                          Manual Vendoring ($GOPATH munging)
                          govendor
                          dep
                          vgo
                          whatever fixes the problems with vgo

                          1. 3

                            Agreed. I have a couple of projects whose dependency tooling I have already switched at least 4 or 5 times (manual GOPATH munging, godep, gpm, gb, dep), because each time the new tool was either a somewhat commonly accepted solution, or seemed the least-worst alternative (before there was any kind of community consensus).

                        2. 3

                          I have yet to migrate a project between dependency managers.

                          The old ones work exactly as well as they always have.

                          1. 2

                            I’ve reverted to using govendor for all new projects. I might be able to skip dep if vgo proves to be a good solution.

                            1. 1

                              similar story for us; govendor works better with private repos

                        1. 3

                          It seems like these “X implemented in Y” and “Z in N bytes” types of posts/articles have become increasingly common over the past few years. These things can be great learning experiences I suppose, and the “wow” factor can vary. But it makes me wonder if it’s a symptom of something larger; are people not doing (as much) original stuff now?

                          1. 5

                            Also that we take bloat for granted in the tools we use.

                            1. 4

                              original stuff now?

                              It’s interesting. There was certainly a lot of low hanging fruit to discover and invent in the early days of computing. That’s not to say it was easy, but as that low hanging fruit has fallen, the difficulty level has increased to creating something truly original and new. You can peel existing fruit in a new way (which so many people do), and you can combine multiple pieces in new ways, but it’s pretty hard to go beyond the lower branches, and most people don’t have the time, patience, or need to do so.

                              Keep in mind that PhD programs last years, and yet very rarely do they surface research that has a lasting effect on industry; or, it’s so cutting-edge that it’s not practical until many years later.

                              So what’s an average person to do for fun? Well, you can reinvent stuff you already know! Maybe you rewrite it smaller, faster, stripped of “bloat,” or with new features. Maybe you apply the ideas of an existing tool like, say, AWK, to a new data type, say, JSON, and invent jq. Maybe, instead, you make JSON work with the tools you already know by writing an adapter to make it more greppable. All these things are just new peels, or new cuts of the low hanging fruit though.

                              And maybe, just maybe, you’re just up for a challenge that sounds ridiculous. “Is it even possible?” The average Docker image, even with the nesting (there’s a better term which I can’t think of), is still large. It’s an absurd but challenging idea to fit an entire Docker image in a tweet (which hasn’t been done, mind you. But holy crap, how cool would that be?). But if you manage to do it, and even if not, you’ve probably learned a new way to peel existing fruit, and that might be applied to the construction of a reach extender that gets you closer to the top of the tree.

                              1. 2

                                I’d also push back on the idea that such articles aren’t original/novel/innovative. There are two distinct kinds of novelty here: getting something working that does something new, and packaging things up better so they can be used by everyone rather than just insiders. Both are equally important, and the latter is what it took to turn Roman-era cataphracts into medieval knights.

                                See also Carlota Perez’s notions of installation and deployment phases for a new technology.

                                1. 1

                                  I’m not sure if you’re pushing back on me, or the OP that I responded to. I’m completely in favor of these types of experiments as I believe them to be fun, interesting, and useful in the generation of new places to explore.

                                  We may have different ideas of what “new” is though, but that’s OK. :)

                                  1. 3

                                    Yes, the ‘also’ was intended to convey that I thought we were mostly in agreement. What words we choose for the categories is less important.

                                    I responded to you in hopes that both you and @markt would see my comment.

                                    1. 3

                                      Thanks for the clarification. I thought that might be the case, but whether it’s exhaustion, or something else, I couldn’t tell for sure.

                                      1. 1

                                        For my part, I’m often too terse when tapping out comments on my phone.

                              2. 2

                                Bit of a counterpoint: I find the “Z in N bytes” stuff to be pretty informative for learning about Z.

                                Loads of these tiny docker experiments have really helped to clarify (at least for me) what Docker is and what it isn’t (not a VM). Some valuable stuff to be learned in there I think.

                                X implemented in Y is a bit less of this, but it can also be helpful if someone understands Y more than X.

                              1. 2

                                  I would say JS has already won. I prefer Python & Go, and I still think Django and Rails are miles ahead in terms of productivity. However, the vast majority of new projects are choosing Node. We actually use Node for all our example/marketing projects since it’s just so much more popular than other languages. One interesting development is that Node adopted most of Python’s features over the past years; it really improved as a language. I still don’t like the async callback approach to handling concurrency, but other than that it’s a pretty decent language.

                                1. 4

                                    I don’t know; depending on your company, country, and position you may feel this differently. In my few working experiences, people would run away from JavaScript: backend people mostly moving to golang, and frontend people moving from JavaScript to TypeScript, Elm, or Reason. I think it totally depends on where, who, and what you’re working on…

                                  1. 2

                                      Do people use Go for enterprise-y CRUD apps? I see a lot of Go for services and things that require eating through a bunch of data, but I don’t hear about its use in other domains.

                                    1. 2

                                      We use golang in this capacity. Backend services that don’t need tons of front end tooling are really nice to write in golang.

                                      1. 1

                                          Have you been basically hand-rolling most of the functionality (thinking in particular about ORMs and outputting HTML for the client)?

                                          To be honest, when reading Go code it tends to look very “nice C”-y, but that feels like it might lead to frustration when dealing with a bunch of strings to concatenate.

                                        1. 1

                                          haha, why yes we have: https://github.com/blend/go-sdk

                                            The golang stdlib gets you most of the way there; that SDK is really just a web helper, a logging/eventing helper, and a lite ORM, with a bunch of other random stuff thrown in for services that needed it.

                                      2. 1

                                          Yes we did: an API serving JSON, but also serving templated HTML. It’s quite nice, but to be honest we didn’t grow it too big, so we didn’t have much trouble maintaining it.

                                  1. 7

                                    It totally depends how you approach it.

                                    K8S on GKE is a breeze.

                                    K8S on aws w/ kops is a nightmare, but doable.

                                      We ended up taking most of the features of kube away from developers, picked sane defaults for people, and called it a day. You don’t need every spanner in the toolbox, but the guts of K8S (the scheduler, kubectl, etc.) are great if you can separate the wheat from the chaff.

                                    1. 10

                                      I know this is not a super helpful comment but I semi-believe it: I think k8s at this point is likely just a funnel to GCP/GKE.

                                      1. 2

                                        Except that Red Hat and IBM are investing heavily in Kubernetes, and they don’t get anything from Google. GKE is Debian.

                                        1. 1

                                          But they are the go-tos for “we can’t / won’t use Google / cloud” so there’s room on the gravy train for them, too.

                                          1. 1

                                            I’m not sure how that conflicts with my claim above. If your competitor has a successful funnel, a reasonable strategy is to piggy-back on it. It doesn’t mean my claim is true, just that your statement doesn’t counter it at all.

                                      1. 3

                                        For you

                                        What did he mean by this?

                                        1. 8

                                          It’s purely cosmetic. The code is left unchanged.

                                          1. 3

                                            It’s not very readable for someone who isn’t used to those symbols.

                                            1. 2

                                              I meant that this probably makes code harder to read for anyone standing behind you.

                                            1. 8

                                              I interviewed for an SRE position at Google. I wasn’t real excited about the job (it would require me to move and I have small children and we live near family here), but I figured I should give it a shot.

                                              It really soured me on the whole thing. It was an all-day affair of “gotcha questions”. Not talking about my previous experience, what I was interested in, what I could do… no, it was “without looking, what’s the 4th bit of the TCP flags?” or “umount fails, why? Give an explanation. It still fails, why? Give an explanation.” until we got to “/bin is a separate partition and mounted on the device you’re trying to unmount, and so opening the umount binary prevents the umount”.

                                              My favorite was “how do you measure the latency between two systems?” “Ping.” “What if ping isn’t enough?” “Well, you could do a lot of pings and average the results.” “What if that’s not enough?” until we got to “instrumenting the network stack to remove syscall delays in ping,” with the interviewer saying after all that “‘ping’ was the answer we were looking for.”

                                              It was hours of that and it was seemingly all scripted. I was just a cog to be placed in a certain spot; there was no question about my past, the projects I’d worked on, what I was looking for in my career, or anything. Just a bunch of scripted questions.

                                              There was a phone interview prior to this of similar character, and some more interviews after, but ultimately the money wasn’t good enough to justify moving away from our extended family, and honestly I wasn’t really excited about it after that interview process anyway.

                                              1. 2

                                                I’ve been a pretty vanilla Java developer for a while, but I’ve been on an “infrastructure” streak at work. I got tired of bemoaning that it wasn’t a priority to upgrade things, automate things, etc., and hence have been doing more DevOps/SRE-like activities. I’m still not strong on most basic sysadmin things, but I have been wondering what else I can do to build up those sorts of skills. How did you start building up that knowledge? There are sysadmins at work that are pretty good, but I don’t even know what to start asking, since those things are usually out of my scope.

                                                1. 1

                                                  I’ve always been in a weird place where I wear a lot of hats. Back in the late 90’s, I was a sysadmin/network admin who enjoyed coding and research for fun. That slowly transitioned to doing coding and research professionally, starting with automating and monitoring infrastructure (back when there was no middle ground: you either ran Big Brother with a bunch of home-made scripts or you shelled out $90k for HP OpenView). Around 2005 or 2006 it flipped completely, and now I only do research and coding and any infrastructure work is solely on test systems, etc.

                                                2. 2

                                                  The only thing I like about Google is the interviews. I’ve considered applying periodically just to do them again for fun.

                                                  Actually, working there was much less fun.

                                                  1. 2

                                                    I interviewed for the SRE team and it was standard coding questions like I would give at work. Data-point of one, obviously. They had Robert Griesemer interview me which was fucking intimidating. The rest was fine.

                                                  1. 5

                                                    They claim that the gopher is still there but I didn’t see it anywhere…

                                                    https://mobile.twitter.com/golang/status/989622490719838210

                                                    “Rest easy, our beloved Gopher Mascot remains at the center of our brand.”

                                                    and why on earth is this downvoted off topic?

                                                    1. 4

                                                      https://twitter.com/rob_pike/status/989930843433979904

                                                      Rob Pike seconding this.

                                                      Also, this is pretty relevant because when people think “golang logo” they typically think of the gopher. I’m not sure people even realized there was a hand drawn golang text logo before this announcement.

                                                      1. 3

                                                        It had two speed lines. Go got faster, so they added a 3rd. Presumably, there’s room for more speed lines as Go’s speed improves.

                                                    1. 2

                                                      After writing golang for 3+ years now, seeing one-liners for list operations makes me super wary. What if I need to inspect things as the loop is iterating? How is it iterating? Is it doing an allocation for an array iterator? Am I re-using memory, or is it creating a new object for each thing in the array?

                                                      Maybe I just have stockholm syndrome from golang but there is something to be said for writing a little extra code (albeit, a lot of extra code over time) for the sake of clarity and maintainability.

                                                      1. 2

                                                        What if I need to inspect things as the loop is iterating?

                                                        I’m not sure exactly what you mean by this, but if you need to do something map et al don’t do, then don’t use them. The value of them is that if you do use them, a reader knows what you can’t do, and knowing what you can’t do is very powerful.

                                                        How is it iterating?

                                                        The idea is you shouldn’t care, just that it does. In general, map et al are meant to be implemented efficiently because they should be used by everyone.

                                                        Is it doing an allocation for an array iterator? Am I re-using memory or is it creating a new object for each thing in the array?

                                                        In a language with a functional pedigree, no, because the values are immutable. In a language like Go this would have to be specified, but the type should tell you that, right? If it’s a pointer, it’s to the original value.

                                                        Maybe I just have stockholm syndrome from golang but there is something to be said for writing a little extra code (albeit, a lot of extra code over time) for the sake of clarity and maintainability.

                                                        The counter is that all those for loops are less clear and less maintainable because you can do anything in them. As I said above, the value of map and friends is what you cannot do.
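
                                                        For illustration, assuming a Go version with generics (1.18+), minimal map/filter helpers make that constraint concrete:

                                                        ```go
                                                        package main

                                                        import "fmt"

                                                        // Map applies f to every element; the reader knows no element
                                                        // is skipped and the length is unchanged.
                                                        func Map[T, U any](xs []T, f func(T) U) []U {
                                                            out := make([]U, 0, len(xs))
                                                            for _, x := range xs {
                                                                out = append(out, f(x))
                                                            }
                                                            return out
                                                        }

                                                        // Filter keeps elements satisfying keep; the reader knows no
                                                        // element is transformed.
                                                        func Filter[T any](xs []T, keep func(T) bool) []T {
                                                            var out []T
                                                            for _, x := range xs {
                                                                if keep(x) {
                                                                    out = append(out, x)
                                                                }
                                                            }
                                                            return out
                                                        }

                                                        func main() {
                                                            n := []int{1, 2, 3, 4}
                                                            fmt.Println(Map(n, func(x int) int { return x * x }))        // [1 4 9 16]
                                                            fmt.Println(Filter(n, func(x int) bool { return x%2 == 0 })) // [2 4]
                                                        }
                                                        ```

                                                        A bare for loop promises neither property; that is the “what you cannot do” argument in code form.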

                                                        1. 1

                                                          In a language with a functional pedigree, no, because the values are immutable. In a language like Go this would have to be specified, but the type should tell you that, right? If it’s a pointer, it’s to the original value.

                                                          Golang’s range re-assigns its iteration variables on each pass, though.
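
                                                          Concretely, the range value variable is a copy of each element (and, in Go versions before 1.22, a single variable reused across iterations), so mutating it never touches the slice:

                                                          ```go
                                                          package main

                                                          import "fmt"

                                                          func main() {
                                                              vals := []int{10, 20, 30}

                                                              // v is a copy of each element; mutating it leaves the slice alone.
                                                              for _, v := range vals {
                                                                  v *= 2
                                                              }
                                                              fmt.Println(vals) // [10 20 30]

                                                              // To mutate the real elements (or take their addresses), index the slice.
                                                              for i := range vals {
                                                                  vals[i] *= 2
                                                              }
                                                              fmt.Println(vals) // [20 40 60]
                                                          }
                                                          ```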

                                                          1. 2

                                                            So those are just semantics you have to know about your language and libraries. Being a one-liner doesn’t change that.

                                                        2. 1

                                                          What if I need to inspect things as the loop is iterating?

                                                          Do golang debuggers not let you set breakpoints inside closures?

                                                        1. 1

                                                          Generally I agree with all the points. One caveat: some ecosystems are set up for cohesive dependency management better than others. NPM is kind of a shit show. Golang is great, provided you’re willing to invest in some wrappers for things in the stdlib. Versioning these things is a nightmare regardless of what ecosystem you’re writing your library set for, and it’s important to be religious about either maintaining backwards compatibility or letting users specify the major/minor versions they track.

                                                          1. 4

                                                              It’s interesting that history is kind of repeating itself, with Sony starting to grab market share from Canon, starting with the high-end full-frame prosumer A7* series and eventually trying to sneak pros away with the A9.

                                                            Mirrorless as a thing is still something I’m skeptical on, mostly because of battery life, but you have to hand it to Sony that they’re making progress and the results are pretty impressive.

                                                            1. 6

                                                                It’s interesting that history is kind of repeating itself

                                                                It goes even deeper. Canon is pulling a Nikon now, releasing near-insulting refreshes of their top-tier cameras, the 5Dmk4 and 6Dmk2, both worse than older (!) Nikon equipment, whereas the recent Nikon releases have been received very well. One might suspect Canon themselves have given up on DSLRs. They seem to be stuck eternally re-releasing the same 24-megapixel sensor.

                                                              I think in the long-term mirrorless is inevitable and it looks like Canon has finally gotten its shit together to produce EOS M cameras which are starting to get competitive with their EOS bodies. Nikon is also expected to release some mirrorless camera this year. I’m sure the first models will be terrible to begin with but in a few years I can definitely see me switching from a D750 to a Nikon mirrorless. Or Sony mirrorless.

                                                              1. 2

                                                                It’s kind of sad though that newer Nikon pro gear (everything on the NPS list) is built to lower and lower standards, with production offshored to China or Thailand, while Canon pro equipment is built to better and better standards in Japan, and it’s cheaper than Nikon!

                                                                I much prefer the Nikon ergonomics and the features of Nikon cameras, but the lenses produced today, while of great optical quality, feel cheap and awful. Canon lenses on the other hand are made of metal (the good ones), and feel like a tank.

                                                                1. 1

                                                                  There is some truth to the quality issues, and I think everyone, even the most ardent fans, has to agree. The cameras are still what they have always been, but the core lenses continue to get a more plasticky feel, which I think just bothers a lot of people. Canon’s core pro L lenses feel very much like Nikon’s Ai-S and first-generation AF-D lenses from when there was still an aperture ring. I am hardly one to beat up gear, but my 70-200 has stopped working twice, which really bugs me.

                                                                2. 1

                                                                  I don’t feel like Canon is in a rush to move to mirrorless (IMHO, the benefits are minimal for pro photogs).

                                                                  What they are getting beaten on is sensor quality; no BSI in 2018 is a sign they’re not investing enough in their in-house sensors.

                                                                  All it would really take is bringing the sensors up to speed and adding better 4K video handling in the 5D series (you can now at least get C-log output), and they’d be competitive again.

                                                                3. 1

                                                                  I’ve switched to mirrorless. The thing almost fits in my jacket pocket; if I saved up for a non-kit lens it would, actually, fit in my jacket pocket. I’m a casual shooter. Batteries are not an issue: I carry two spares with me, just like one would carry film in the old days. The autofocus is on par with my consumer-level Nikon DSLR, and the low-light performance is phenomenal. For casual shooters, I can’t think of a reason the SLR format should survive.

                                                                  1. 6

                                                                    I mostly shoot slide film. I have a Nikon F4, a Nikon FM3a, and a Nikon FA, and a bunch of old, manual-focus AI-s lenses. However, I want to shoot digitally too, so I bought a Fuji X-T10. I have been using this camera for about two years now, and have taken many great pictures with it, but I hate it so much, so much. I can’t wait to get rid of it and buy a Nikon DSLR.

                                                                    Let’s start with the good stuff. The good Fuji cameras and lenses are built to the highest mechanical standard. I wish new Nikon lenses were this good.

                                                                    That’s all the good stuff I can think of, now the bad stuff:

                                                                    The camera is small, but the lenses are just as big as modern DSLR lenses. This means the camera is too small for proper hand-holding technique. When assembling a system, the total weight is a little bit lower than a DSLR kit, but the bulk is not significantly smaller at all, and I am constrained by bulk rather than weight.

                                                                    Focus by wire works poorly. In fact I would say it’s impossible to use. Never again. But even if there were lenses which didn’t focus by wire, you still could not manually focus, because the resolution of the EVF is too low for critical manual focus. On a tripod you can zoom to 100%, but handheld, no way. On the other hand, I use a split-prism focusing screen on my SLRs, so this is never a problem there.

                                                                    The ergonomics of the camera are bad. I can’t use it with gloves. I can use a (D)SLR with gloves.

                                                                    The software on the camera is terrifyingly bad.

                                                                    The flash system is weak.

                                                                    In low light, or for sports, autofocus is useless. There are some mirrorless cameras out there that do better AF than even DSLRs, but only the top-of-the-line stuff.

                                                                    The colors I get from this camera are not great. This is not a problem with the camera, but with the color profiles used by desktop software. However, it is what it is; I can’t really do anything about it. You can make custom profiles, but it’s much harder than most people realize, and if you do it you’ll get a metrologically correct profile, which is not what I want. Nikon and Canon profiles are non-flat in a way that I like, and I can’t really emulate that.

                                                                    Speaking of color profiles, Nikon allows you to load custom profiles in-camera. This is huge, because even if I shoot RAW, I need to make decisions in the field based on the JPG preview, so it had better match the profile I’m going to use anyway.

                                                                    All lenses use different filter thread sizes. This drives me nuts.

                                                                    Battery life is poor, and extremely poor in cold weather. A pro DSLR can take Lithium primary AA batteries that work at -40C.

                                                                    Again, this is not a problem with the camera but with Adobe software: Adobe does a very poor job on Fuji raw files. I use Iridient Developer to convert Fuji raw files to DNG, but that makes the workflow slower and uses twice the amount of space (assuming I want to keep the originals, which I do).

                                                                    Oh yeah, the camera takes too long to boot.

                                                                    I would like a mirrorless camera, but it would have to work differently than they work now.

                                                                    Personally, I want an APS-C/FF camera (micro 4/3 is too small) that has small lenses. I want a 16-35/f8 and a 70-300/f8-f11 (35mm equivalent). When doing landscapes, I shoot at those small apertures anyway, so I’d like small lenses. You couldn’t make such slow lenses for DSLRs, because they would be too dark in the viewfinder, but with mirrorless you could. With small lenses, the camera can be small too, as it won’t feel unbalanced. The lenses must of course use the same filter thread size, and it should be possible to operate the camera with gloves. The camera should boot instantly. The camera should close the shutter when changing lenses (why don’t mirrorless cameras do this??).

                                                                    If you can’t make small zooms, I’d be happy with small primes. 20mm, 85mm, and 200mm are all I need. If you make them f/4 you should be able to make them really, really, really small.

                                                                    Oh yeah, and I’d like some tilt-shift lenses too.

                                                                    1. 1

                                                                      If you haven’t got one already, I can recommend a Nikon D700 as the almost perfect “digital FE2” camera.

                                                                    2. 2

                                                                      In our household we solved the problem of size and weight by me carrying all photo equipment and playing assistant to my wife, who has the talent and skills. Not for everyone but I am happy with results :)

                                                                      1. 1

                                                                        I’ve switched mostly to an X100 for the past few years. However, the DSLR is still ‘needed’ for two things - kids’ sport and product/portrait shots for my wife’s seamstress business. The second could be mitigated by switching to an interchangeable-lens mirrorless, but then I’d lose a lot of what I love about the X100.

                                                                    1. 1

                                                                      Wouldn’t the performance of HashMap lookup be O(1)*, not O(log n)? It states the key type must implement hash and eq; ideally this means you eventually end up with a hash table, which would mean faster lookups.

                                                                      1. 3

                                                                        It’s implemented underneath with a HAMT (hash array mapped trie), IIRC. So although it compares on hashes, it’s an ordered map over those hashes with O(log n) for most ops.
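                                                                        A rough way to see where the log n comes from (a back-of-the-envelope sketch, not code from any particular HAMT implementation): each trie level typically consumes 5 bits of the hash, giving 32-way branching, so a lookup walks at most about log32(n) nodes.

                                                                        ```go
                                                                        package main

                                                                        import (
                                                                        	"fmt"
                                                                        	"math"
                                                                        )

                                                                        // hamtDepth estimates the maximum node depth of a 32-way HAMT holding
                                                                        // n keys: each level consumes 5 hash bits, so depth grows as log32(n).
                                                                        func hamtDepth(n int) int {
                                                                        	if n <= 1 {
                                                                        		return 1
                                                                        	}
                                                                        	return int(math.Ceil(math.Log2(float64(n)) / 5))
                                                                        }

                                                                        func main() {
                                                                        	for _, n := range []int{32, 1 << 20, 1 << 30} {
                                                                        		fmt.Printf("n=%d -> depth %d\n", n, hamtDepth(n))
                                                                        	}
                                                                        }
                                                                        ```

                                                                        Even at a billion entries the trie is only about six levels deep, which is why the O(log n) bound feels close to O(1) in practice.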

                                                                      1. 2

                                                                        These are probably the weakest arguments against Bitcoin I’ve seen. But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

                                                                        Real arguments against Bitcoin are:

                                                                        And I’m sure there are others but literally none of the ones presented here are valid.

                                                                        1. 29

                                                                          These are probably the weakest arguments against Bitcoin I’ve seen.

                                                                          As it says, this is in response to one of the weakest arguments for Bitcoin I’ve seen. But one that keeps coming up.

                                                                          But the coolest bit about Bitcoin is that it is completely voluntary, so you do your thing, and we’ll do ours.

                                                                          When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business.

                                                                          1. 19

                                                                            I would also like to be able to upgrade my gaming PC’s GPU without spending what the entire machine cost.

                                                                            This is getting better though.

                                                                            1. 1

                                                                              For what it’s worth, Bitcoin mining doesn’t use GPUs and hasn’t for several years. GPUs are being used to mine Ethereum, Monero, etc., but not Bitcoin or Bitcoin Cash.

                                                                            2. 0

                                                                              When you’re using literally more electricity than entire countries, that’s a significant externality that is in fact everyone else’s business

                                                                              And yet, still less electricity than… Christmas lights in the US or gold mining.

                                                                              https://coinaccess.com/blog/bitcoin-power-consumption-put-into-perspective/

                                                                              1. 21

                                                                                When you reach for “Tu quoque” as your response to a criticism, then you’ve definitely run out of decent arguments.

                                                                            3. 13

                                                                              Bitcoin (and all blockchain based technology) is doomed to die as the price of energy goes up.

                                                                              It also accelerates the exhaustion of many energy sources, pushing energy prices up faster for every other use.

                                                                              All blockchain-based cryptocurrencies are scams, both as currencies and as long-term investments.
                                                                              They are distributed, energy-wasting Ponzi schemes.

                                                                              1. 2

                                                                                Wouldn’t an increase in the cost of energy just make mining difficulty go down? Then the network would just use less energy?

                                                                                1. 2

                                                                                  No, because if you reduce the mining difficulty, you decrease the chain safety.

                                                                                  Indeed, the fact that the energy cost is higher than the average Bitcoin revenue does not mean that a determined pool can’t pay for the difference by double spending.

                                                                                  1. 3

                                                                                    If energy cost doubles, a mix of two things will happen, as they do when the block reward halves:

                                                                                    1. Value goes up, as marginal supply decreases.
                                                                                    2. If the demand isn’t there, instead the difficulty falls as miners withdraw from the market.

                                                                                    Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value. This cost is what secures the blockchain by making attacks costly.
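                                                                                    A toy version of that equilibrium (all figures are invented for illustration, not real network numbers): miners add hashpower until the energy cost of producing a block matches the block reward, so with a fixed reward value, doubling the energy price roughly halves the sustainable hashrate.

                                                                                    ```go
                                                                                    package main

                                                                                    import "fmt"

                                                                                    // equilibriumHashrate returns the hashrate (TH/s) at which mining one
                                                                                    // block costs exactly the block reward, under the simplification that
                                                                                    // energy cost per block scales linearly with hashrate.
                                                                                    func equilibriumHashrate(rewardUSD, usdPerKWh, kwhPerTHPerBlock float64) float64 {
                                                                                    	// cost per block = hashrate * kwhPerTHPerBlock * usdPerKWh
                                                                                    	// equilibrium:    cost per block == rewardUSD
                                                                                    	return rewardUSD / (kwhPerTHPerBlock * usdPerKWh)
                                                                                    }

                                                                                    func main() {
                                                                                    	before := equilibriumHashrate(100000, 0.05, 0.1)
                                                                                    	after := equilibriumHashrate(100000, 0.10, 0.1) // energy price doubles
                                                                                    	fmt.Printf("hashrate: %.0f -> %.0f TH/s\n", before, after)
                                                                                    }
                                                                                    ```

                                                                                    The point is only the direction of the effect: a higher energy price shrinks the hashrate (and hence the attack cost) unless the reward’s value rises to compensate.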

                                                                                    1. 1

                                                                                      Either way, the mining will happen at a price point where the mining cost (energy+capital) meets the block reward value.

                                                                                      You forgot one word: average.

                                                                                      1. 2

                                                                                        It is implied. The sentence makes no sense without it.

                                                                                        1. 1

                                                                                          And don’t you see the huge security issue?

                                                                                2. 1

                                                                                  Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

                                                                                  PoS has no such energy requirements. Peercoin (2012) was one of the first, Blackcoin, Decred, and many more serve as examples. Ethereum, #2 in “market cap”, is moving to PoS.

                                                                                  So to say “ [all blockchain based technology] is doomed to die as the price of energy goes up” is silly.

                                                                                  1. 1

                                                                                    Much of the brains in the cryptocurrency scene appear to be in consensus that PoW is fundamentally flawed and this has been the case for years.

                                                                                    Hum… are you saying that Bitcoin miners have no brain? :-D

                                                                                    I know that PoS, in theory, is more efficient.
                                                                                    The fun fact is that all the implementations I’ve seen in the past were based on stakes in PoW-based cryptocurrencies. Has that changed?

                                                                                    As for Ethereum, I will be happy to see how they implement the PoS… when they do.

                                                                                    1. 2

                                                                                      Blackcoin had a tiny PoW bootstrap phase, maybe weeks’ worth and only a handful of computers. Since then, for years, it has been purely PoS. Ethereum’s goal is to follow Blackcoin’s example: an ICO, then PoW, and finally a PoS phase.

                                                                                      The single problem PoW once reasonably solved better than PoS was egalitarian issuance. With miner consolidation this is far from being the case.

                                                                                      IMHO, fair issuance is the single biggest problem facing cryptocurrency. It is the unsolved problem at large. Solving this issue would immediately change the entire industry.

                                                                                      1. 1

                                                                                        Well, proof of stake assumes that people care about the system.

                                                                                        It sees the cryptocurrency in isolation.

                                                                                        An economist would object that a stakeholder might gain a lot by breaking the currency itself, despite the in-currency loss.

                                                                                        There are many ways to gain value from a failure: e.g. buying surrogate goods for cheap and selling them after the competitor’s failure has increased their relative value.

                                                                                        Or by predicting the failure and then causing it, and selling consulting and books.

                                                                                        Or a stakeholder might have a political reason to damage the people with a stake in the currency.

                                                                                        I’m afraid that proof of stake is a naive solution to a misunderstood economic problem. But I’m not sure: I will surely take a look at Ethereum when it becomes PoS-based.

                                                                                  2. 0

                                                                                    doomed to die as the price of energy goes up.

                                                                                    Even the ones based on proof-of-share consensus mechanisms? How does that relate?

                                                                                    1. 3

                                                                                      Can you point to a working implementation so that I can give a look?

                                                                                        Last time I checked, proof-of-share did not even work as a proof of concept… but I’m happy to be corrected.

                                                                                      1. 2

                                                                                        Blackcoin is Proof of Stake. (I’ve not heard of “Proof of Share”).

                                                                                        Google returns 617,000 results for “pure pos coin”.

                                                                                        1. 1

                                                                                          Instructions to get on the Casper Testnet (in alpha) are here: https://hackmd.io/s/Hk6UiFU7z# . No need to bold your words to emphasize your beliefs.

                                                                                          1. 3

                                                                                            The emphasis was on the key requirement.

                                                                                            I’ve seen so many cryptocurrencies die a few days after their ICO that I raised the bar for taking a new one seriously: if it doesn’t have a stable user base exchanging real goods with it, it’s just another waste of time.

                                                                                            Also, note that I’m not against alternative coins. I’d really like to see a working and well designed alt coin.
                                                                                            And I like related experiments such as GNU Taler.

                                                                                            I’m just against scams and people trying to fool other people.
                                                                                            For example, the Casper Testnet is a PoS based on a PoW (as Ethereum currently is).

                                                                                            So, let’s try again: do you have a working implementation of a proof of stake to suggest?

                                                                                            1. 1

                                                                                              It’s not live or open-source, so I’d understand if you’re still skeptical, but Algorand has simulated 500,000 users.

                                                                                              1. 1

                                                                                                Again, I don’t seem to understand your anger. We’re on a tech site discussing tech issues. You seem to be getting emotional about something that’s orthogonal to this discussion. I don’t think that emotional exhortation is particularly conducive to discussion, especially for an informed audience.

                                                                                                And I don’t understand what you mean by working implementation. It seems like a testnet does not suffice. If your requirements are a widely popular, commonly traded coin with PoS, then congratulations, you have built a set of requirements that are right now impossible to satisfy. If this is your requirement then you’re just invoking the trick-question fallacy.

                                                                                                Nano is a fairly prominent example of Delegated Proof of Stake and follows a fundamentally very different model than Bitcoin with its UTXOs.

                                                                                                1. 3

                                                                                                  No anger, just a bit of irony. :-)

                                                                                                  By a working implementation of a software currency I mean not just code and a few beta testers, but a stable user base that uses the currency for real-world trades.

                                                                                                  Actually, that’s probably the minimal definition of a “working implementation” for any currency, not just software ones.

                                                                                                  I could become a little lengthy about vaporware, marketing, and scams if I had to explain why unused software is broken by definition.
                                                                                                  I develop an OS myself that literally nobody uses, and I would never sell it as a working implementation of anything.

                                                                                                  I will look to Nano and delegated proofs of stake (and I welcome any direct link to papers and code… really).

                                                                                                  But frankly, the sarcasm is due to a little disgust I feel for proponents of PoW/blockchain cryptocurrencies (to date, the only ones I know to really work, despite being broken as actual long-term currencies): I can understand non-programmers selling what they buy from programmers, but any competent programmer should just say “guys, Bitcoin was an experiment, but it’s pretty evident that it has been turned into a big Ponzi scheme. Keep out of cryptocurrencies! Or you are going to lose your real money for nothing.”

                                                                                                  To me, programmers who don’t explain this are either incompetent enough to talk about something they do not understand, or are trying to profit from those other people, selling them their token (directly or indirectly).

                                                                                                  This does not mean in any way that I don’t think a software currency can be built and work.

                                                                                                  But as a hacker, my ethics prevent me from using people’s ignorance against them, as those who sell them “the blockchain revolution” do.

                                                                                              2. 2

                                                                                                The problem is that in the blockchain space, hypotheticals are pretty much worthless.

                                                                                                Casper I do respect, they’re putting a lot of work in! But, as I note literally in this article, they’re discovering yet more problems all the time. (The latest: the security flaws.)

                                                                                                PoS has been implemented in a ton of tiny altcoins nobody much cares about. Ethereum is a great big coin with hundreds of millions of dollars swilling around in it - this is a different enough use case that I think it needs to be regarded as a completely different thing.

                                                                                                The Ethereum PoS FAQ is a string of things they’ve tried that haven’t quite been good enough for this huge use case. I’ll continue to say that I’ll call it definitely achievable when it’s definitely achieved.

                                                                                        2. 4

                                                                                          ASICboost was fixed by segwit. Bitcoin isn’t subject to ASICboost anymore, but Bitcoin Cash is.

                                                                                          1. 2

                                                                                            Covert asicboost was fixed with segwit, overt is being used: https://mobile.twitter.com/slush_pool/status/977499667985518592

                                                                                        1. 2

                                                                                          It’s a shame it’s open-core.

                                                                                          1. 7

                                                                                              Spanner/F1 and FoundationDB were closed. CockroachDB was the first (AFAIK) of those competing with Spanner to give us anything at all. Let’s give them credit, eh? ;)

                                                                                            1. 3

                                                                                              FoundationDB was open (in some form) before it disappeared into Apple.

                                                                                              1. 4

                                                                                                I don’t believe any of the interesting tech was open source. It was sort of the opposite of open-core, with a proprietary core but some ancillary stuff like an SQL parser that was open source. That other stuff is what disappeared when Apple bought them (GitHub deleted, packages pulled from repos, etc.), which caused a bit of a stir as they disappeared with no warning and some people had been depending on the packages.

                                                                                                1. 2

                                                                                                  I never heard that. Ill look into it further. Thanks.

                                                                                                  1. 1

                                                                                                    Looking into it, the core DB that was what was really valuable was closed with some peripheral stuff open. This write-up goes further to say it was kind of fake FOSS that lured people in. I don’t have any more data to go on since Apple pulled all the Github repos.

                                                                                                  2. 1

                                                                                                    It doesn’t seem to me that CockroachDB competes with Spanner. I’d thought of MongoDB before CockroachDB.

                                                                                                    1. 8

                                                                                                      It’s explicitly the origin for CockroachDB. Spanner less the cesium clocks.

                                                                                                1. 24

                                                                                                  MISRA (the automotive applications standard) specifically requires single-exit point functions. While refactoring some code to satisfy this requirement, I found a couple of bugs related to releasing resources before returning in some rarely taken code paths. With a single return point, we moved the resource release to just before the return. https://spin.atomicobject.com/2011/07/26/in-defence-of-misra/ provides another counterpoint though it wasn’t convincing when I read it the first time.
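                                                                                                  The shape of that refactor looks something like this (a hypothetical sketch in Go for brevity, not code from that project): every error path converges on one return, and the resource release sits immediately before it, so no path can skip the cleanup.

                                                                                                  ```go
                                                                                                  package main

                                                                                                  import (
                                                                                                  	"errors"
                                                                                                  	"fmt"
                                                                                                  	"os"
                                                                                                  )

                                                                                                  // readMagic illustrates the single-exit style: all error paths fall
                                                                                                  // through to one return, and the file is closed just before it, so
                                                                                                  // a rarely taken path cannot leak the handle.
                                                                                                  func readMagic(path string) (magic []byte, err error) {
                                                                                                  	f, err := os.Open(path)
                                                                                                  	if err == nil {
                                                                                                  		magic = make([]byte, 4)
                                                                                                  		var n int
                                                                                                  		n, err = f.Read(magic)
                                                                                                  		if err == nil && n < 4 {
                                                                                                  			err = errors.New("short read")
                                                                                                  		}
                                                                                                  		f.Close() // single release point, right before the single return
                                                                                                  	}
                                                                                                  	return magic, err // the only exit from the function
                                                                                                  }

                                                                                                  func main() {
                                                                                                  	if _, err := readMagic("no-such-file"); err != nil {
                                                                                                  		fmt.Println("error:", err)
                                                                                                  	}
                                                                                                  }
                                                                                                  ```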

                                                                                                  1. 8

                                                                                                    This is probably more relevant for non-GC languages. Otherwise, using labels and goto would work even better!

                                                                                                    1. 2

                                                                                                      Maybe even for assembly, where before returning you must manually ensure the stack pointer is in the right place and registers are restored. In this case, there are more chances to introduce bugs if there are multiple returns (and it might be harder to follow the disassembly when debugging embedded code).

                                                                                                      1. 1

                                                                                                        In some sense this is really just playing games with semantics. You still have multiple points of return in your function… just not multiple literal RET instructions. Semantically the upshot is that you have multiple points of return but also a convention for a user-defined function postamble. Which makes sense, of course.

                                                                                                      2. 2

                                                                                                        Sure, but we do still see labels and gotos working quite well under certain circumstances. :)

                                                                                                        For me, I like single-exit-point functions because they’re a bit easier to instrument for debugging, and because I’ve had many times where a missed return caused some other code to execute that wasn’t expected; with this style, you’re already in a tracing mindset.

                                                                                                        Maybe the biggest complaint I have is that if you properly factor these then you tend towards a bunch of nested functions checking conditions.

                                                                                                        1. 2

                                                                                                          Remember the big picture when focusing on a small, specific issue. The use of labels and goto might help for this problem, but it also might throw off automated analysis tools looking for other problems. These mismatches between what humans and machines understand are why I wanted real, analyzable macros for systems languages. I had one for error handling a long time ago that looked clean in code but generated the tedious, boring form that machines handle well.

                                                                                                          I’m sure there’s more to be gleaned using that method. Even the formal methodists are trying it now with “natural” theorem provers that hide the mechanical stuff a bit.

                                                                                                          1. 2

                                                                                                            Yes, definitely – I think in general if we were able to create abstractions from within the language directly to denote these specific patterns (in that case, early exits), we gain on all levels: clarity, efficiency and the ability to update the tools to support it. Macros and meta-programming are definitely much better options – or maybe something like an ability to easily script compiler passes and include the scripts as part of the build process, which would push the idea of meta-programming one step further.

                                                                                                        2. 5

                                                                                                          I have mixed feelings about this. I think in an embedded environment it makes sense because cleaning up resources is so important. But the example presented in that article is awful. The “simpler” example isn’t actually simpler (and it’s actually different).

                                                                                                          Overall, I’ve found that forcing a single return in a function often makes the code harder to read: you end up setting and checking state all of the time. Those who say (and I don’t think you’re doing this here) that you should use a single return because MISRA C requires it seem to ignore the fact that MISRA targets a world with specific restrictions.
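
                                                                                                          To illustrate the state-threading problem, here is a small hypothetical comparison of the two styles (parse_single_exit and parse_early_return are made-up names; both read a single decimal digit):

                                                                                                              ```c
                                                                                                              #include <stdbool.h>
                                                                                                              #include <stddef.h>

                                                                                                              /* Single-exit version: an ok flag has to be threaded through
                                                                                                               * every step, and each later step re-checks it first. */
                                                                                                              int parse_single_exit(const char *s) {
                                                                                                                  int result = -1;
                                                                                                                  bool ok = (s != NULL);
                                                                                                                  if (ok)
                                                                                                                      ok = (s[0] != '\0');
                                                                                                                  if (ok)
                                                                                                                      result = s[0] - '0';
                                                                                                                  return result;
                                                                                                              }

                                                                                                              /* Early-return version: each precondition is checked once and
                                                                                                               * the function bails out immediately, so the happy path reads
                                                                                                               * straight down. */
                                                                                                              int parse_early_return(const char *s) {
                                                                                                                  if (s == NULL)
                                                                                                                      return -1;
                                                                                                                  if (s[0] == '\0')
                                                                                                                      return -1;
                                                                                                                  return s[0] - '0';
                                                                                                              }
                                                                                                              ```

                                                                                                          Both compute the same thing, but the first grows an extra flag check for every precondition you add.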

                                                                                                          1. 4

                                                                                                            Golang gets around this with defer, though that can incur some overhead.

                                                                                                            1. 8

                                                                                                              C++, Rust, etc. have destructors, which do the work for you automatically (the destructor/drop gets called when a value goes out of scope).

                                                                                                              1. 1

                                                                                                                Destructors tie you to using objects instead of just calling a function. They also make cleanup implicit, whereas defer is more explicit.

                                                                                                                The Go authors could have implemented constructors and destructors, but the general philosophy is to make the zero value useful and not add to the runtime where you could just call a function.

                                                                                                              2. 4

                                                                                                                defer can be accidentally forgotten, while working around RAII / scoped resource usage in Rust or C++ is harder.

                                                                                                              3. 2

                                                                                                                Firstly, he doesn’t address early return from an error condition at all.

                                                                                                                And secondly, his example of a single return…

                                                                                                                    int singleRet(int a, int b, int c) {
                                                                                                                        int rt = 0;
                                                                                                                        if (a) {
                                                                                                                            if (b && c) {
                                                                                                                                rt = 2;
                                                                                                                            } else {
                                                                                                                                rt = 1;
                                                                                                                            }
                                                                                                                        }
                                                                                                                        return rt;
                                                                                                                    }
                                                                                                                

                                                                                                                Should be simplified to…

                                                                                                                a ? (b && c ? 2 : 1) : 0
                                                                                                                
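                                                                                                                Assuming a, b, and c are boolean flags, the two forms agree on every input; a quick exhaustive sketch (with the flags lifted into parameters, since the article leaves their origin unspecified):

                                                                                                                    ```c
                                                                                                                    #include <assert.h>

                                                                                                                    /* The single-return example, flags lifted into parameters. */
                                                                                                                    static int singleRet(int a, int b, int c) {
                                                                                                                        int rt = 0;
                                                                                                                        if (a) {
                                                                                                                            if (b && c) {
                                                                                                                                rt = 2;
                                                                                                                            } else {
                                                                                                                                rt = 1;
                                                                                                                            }
                                                                                                                        }
                                                                                                                        return rt;
                                                                                                                    }

                                                                                                                    /* The proposed one-liner. */
                                                                                                                    static int ternary(int a, int b, int c) {
                                                                                                                        return a ? (b && c ? 2 : 1) : 0;
                                                                                                                    }

                                                                                                                    /* Compare the two over all eight flag combinations. */
                                                                                                                    int check_equivalent(void) {
                                                                                                                        for (int a = 0; a < 2; a++)
                                                                                                                            for (int b = 0; b < 2; b++)
                                                                                                                                for (int c = 0; c < 2; c++)
                                                                                                                                    assert(singleRet(a, b, c) == ternary(a, b, c));
                                                                                                                        return 1;
                                                                                                                    }
                                                                                                                    ```
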
                                                                                                                1. 1

                                                                                                                  Are you sure that wasn’t a result of having closely examined the control flow while refactoring, rather than a benefit of the specific form you normalised the control flow into? Plausibly you might have spotted the same bugs if you’d been changing it all into any other control-flow format that involved not-quite-local changes.