1. 1

    I’m not sure I’m sold on this. I get why people building infrastructure software like Redis might want this. Yes, it helps them keep the “Foo as a Service” market as a captive income stream without competition from AWS et al. At the same time, it seems like any service of much worth is going to get cloned by the big providers anyway, and then you have a proliferation of similar but incompatible closed-source versions. I’m not convinced that is necessarily good for the community at large.

    1. 2

      I think it’s just a protection against a Redis as a Service launched with plain Redis and a few bits here and there to make the offering work. Big players can obviously clone it and have their own, but at least most small-to-middle-size players are eliminated. (From what I understand.)

      1. 3

        You can still start Redis as a Service companies. I was shocked at first because I thought this concerned Redis and their aim was to kill all of the Redis as a Service providers which already exist. But it turns out Redis Core is unaffected by this, only some modules are.

        I don’t really know what they intend to achieve with this, except having people avoid using their modules…

        1. 2

          Which doesn’t seem worthwhile, as the big players are the ones most likely to be able to market and monetise a service based on core Redis plus their own proprietary add-ons. It’s pretty difficult to compete with AWS on any front at this stage, given their massive resources and the “nobody ever gets fired for buying X” safety of big brands.

          Boxing out only the small players doesn’t really feel like it’s going to preserve a whole bunch of market or mindshare for the Redis company.

          1. 1

            I’m not into business very much, so I can’t evaluate whether this operation is worth it or not; I would just assume that they were going for the long tail, which can be a sufficient number of clients to bring in decent revenue and continue the work of the Redis company.

            1. 1

              In reality, I don’t have the feeling that a “long tail” actually exists for a lot of these types of services. I base this on the Firebase/Parse era, when there were loads of “backend as a service” companies around that have all withered away (my understanding at least), with only Google/Firebase remaining. I personally was surprised by this.

      1. 8

        Imagining a time where I can go to the local maker space and print some open source garments that I have modified to have larger pockets.

        In the present, I just stick to what works and get a lot of stuff from the same brand, since I know it fits me and my essential items well.

        1. 5

          Programmable sewing machine would be quite amazing.

          1. 1

            http://softwearautomation.com/li-fung-announce-partnership-softwear-automation/

            Softwear’s revolutionary digital t-shirt SEWBOT® Workline is fully autonomous and requires a single operator, producing one complete t-shirt every 22 seconds…

            1. 1

              Found this as well: https://www.youtube.com/watch?v=qXFUqCijkUs Seems like clothes would have to be ‘re-architected’ for this method.

              1. 1

                Why would they need to be rearchitected? The video doesn’t seem to point to that directly.

                1. 1

                  Not all clothes are assembled from fabric panels that stack neatly on top of each other; consider the crotch of common pants, or the double inner seam of jeans.

            2. 1

              Ha, there’s actually some work on this (I think there’s a DARPA project as well): robotic garment assembly. I’ve given this topic a lot of thought myself. Robots can weld, why not sew?

          1. 4

            If you are not willing to do that kind of analysis, asking your boss which other task should be dropped from your list is kind of effective.

            1. 2

              Also, plainly asking this question can really make them think: is it your intention for me/the team to work late or on weekends to accomplish this? And if push comes to shove: even after they worked late last week to make the other deadline Y?

              A lot of people can’t conceptualize priorities unless deliberately prompted.

              1. 1

                A lot of businesses nowadays claim to work agile, so at most places I’ve worked there is some implementation of Scrum. What I usually try to set straight when I start at an org is to emphasize the pull principle, i.e. a prioritized backlog from which people pull the items at the top. There is a certain magic to it. Devs get more relaxed and leave the office at 5; team leads, not without struggle, learn that not everything can be the highest priority; the friction of pushing work items into people’s schedules is replaced by a certain flow.

                Unfortunately, this usually involves refuting a lot of strange interpretations of agile / scrum that (I think) often stem from the consultants hired by the orgs to introduce agile methodologies or give management trainings on it.

            1. 1

              Always good to read a different take on this subject. One question I have (and I wonder if the book answers it) is: why write a compiler in Go? Any advantages or disadvantages?

              I have two main thoughts on this. First, if you’re writing a compiler, why not use a language and environment suited to the job? For example, the ML family of languages excels at concisely representing the algorithms and data structures used for parsing, working with ASTs, etc. Yet you rarely see compilers written in ML in the wild.

              Second, the difficulty of getting good static analysis (for code completion, compile errors, etc.) for languages has always been a problem; that’s why there have been so many different parsers for C++ over the years (Eclipse CDT, Qt Creator, etc., not to mention ctags et al.). Language servers fill this gap. I suppose using Go for a language server has appeal due to Go’s performance, single-executable deployment, GC, and network-service-friendly nature.

              1. 3

                Ha, good question! I’ve previously had to answer it, because, yes, you’re right, there are other languages that are more suited to compiler development. I personally think any language with pattern matching and union types is far more expressive when writing compilers than a language without those features.

                But I chose Go not for its expressiveness, but because I personally think it’s a great teaching language: it doesn’t hide anything, there’s no magic behind any line of Go, you can read Go code even if you’ve never written Go in your life, the standard library contains enough that we don’t have to get into modules/dependencies, it comes with a testing framework, the standard installation contains gofmt and go test, it’s super stable (you can still run the code from the first version of the first book, written at the beginning of 2016, as it is — that’s not necessarily true for other languages/projects), etc.

                That’s the gist of it. I have a few more thoughts on why Go is great for teaching, but I think I should turn those into the blogpost I intend to write :)

                1. 1

                  …it doesn’t hide anything, there’s no magic behind any line of Go…

                  Go is a fine and relatively simple language, but I would argue there’s plenty of magic in the M:N threading, GC, structural typing, and runtime reflection.

                  I certainly won’t argue against any of your other reasons, though.

                  1. 1

                    Ah, yes, you’re right. That can certainly be considered magic, but I was thinking more of the meta-programming magic I know from Ruby.

                2. 2

                  Re ML: for others wondering, here’s a 1998 article on why ML/OCaml are good for writing compilers. OCaml is even better at it today thanks to its libraries. There are also metaprogramming extensions to Standard ML that give it some of LISP’s power. Then there are people like sklogic, whose tool uses a LISP with Standard ML embedded as a DSL; he drops out of SML whenever it’s easier to express an algorithm outside of it.

                  1. 2

                    Thank you for linking that! I love the point at the end about languages being toolboxes and some are more suited for certain tasks than others:

                    But all languages have some problem domains in which they shine. I think that compiler implementation is one of those areas for ML. You’re writing a compiler and in middle of a function you need a 9mm crescent wrench with a box on the other end. You open up your toolkit and … there it is, in the top drawer, bright and shiny and strong. You use it, and then a few minutes later you need a small phillips screwdriver with a clip and a magnet … and there it is, bright and shiny and strong.

                    It’s not that there are an extraordinary number of tools in the toolbox. (No; in fact, the toolbox is much smaller than the usual toolboxes, the ones used by your friends that contain everything but the sink.) It’s that the toolbox was carefully and very thoughtfully assembled by some very bright toolsmiths, distilling their many decades of experience, and designed, as all good toolkits are, for a very specific purpose: building fast, safe and solid programs that are oriented around separate compilation of functions that primarily do recursive manipulation of very complex data structures.

                1. 7

                  Often, needing a deep copy (in any language) signals bad design: it means you don’t know for sure that some code you pass the object to won’t mutate it and its deep sub-parts. That’s overly defensive programming. I rarely encounter even a shallow copy in practice (almost always it’s right before mutating such an object), and I almost never encounter the need for a deep copy.

                  1. 4

                    I’m not an FP zealot, but I work with Elixir these days, and “deep copy” is an alien concept there. If you add an item to a list, you get a new copy of the list with the new item added. It’s impossible to affect any other bit of code that had the old list.

                    It’s nice not to have to think about that.
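
                    For contrast, a tiny sketch in Python (a mutable-by-default language) of the aliasing hazard Elixir rules out, versus the “return a new list” style it enforces:

```python
# Mutable update: every holder of a reference sees the change.
xs = [2, 3]
other_ref = xs
xs.append(4)
print(other_ref)  # [2, 3, 4] -- the "other bit of code" was affected

# Persistent-style update (what Elixir enforces): build a new list instead.
ys = [2, 3]
zs = [1] + ys     # a new list; ys is untouched
print(ys, zs)     # [2, 3] [1, 2, 3]
```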

                    1. 3

                      I agree it’s bad practice. My moment of enlightenment was realizing the problem is isomorphic to serializing/deserializing an object graph. For example, if you have machinery to completely serialize the state of an object (or an object graph), just use that for a “deep copy”: serialize once, deserialize as many copies as you need.
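
                      A Python sketch of that round-trip, assuming every member of the object graph is picklable (i.e. no open files or sockets):

```python
import pickle

def deep_copy(obj):
    # A serialization round-trip: write out the whole object graph,
    # then read back a fresh copy that shares no mutable sub-objects.
    return pickle.loads(pickle.dumps(obj))

config = {"db": {"host": "localhost", "ports": [5432]}}
clone = deep_copy(config)
clone["db"]["ports"].append(5433)
print(config["db"]["ports"])  # [5432] -- the original graph is untouched
```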

                      1. 1

                        I don’t agree that it’s bad design. Let’s say you have a method that updates an Entity and, at the end, emits an event with both the new Entity and the old one. One solution is to fetch the entity from the database, clone it into a $previousEntity variable, use $entity to perform the changes you need, and then emit the event. That doesn’t mean you should have mutable VOs, but in my experience every place I’ve worked has had exceptions for one reason or another, and then it becomes very easy to introduce bugs that are very difficult to trace ;)

                      1. 2

                        Interesting: the parent project ‘java grinder’ reminds me of xmlvm[1], a project that translated Java bytecode to CLR, C, C++, Obj-C, etc. Does anyone remember this? I always wondered what became of that project.

                        [1] http://www.xmlvm.org/frontend/

                        1. 4

                          My sense from using Scala professionally for approximately the past five years has been that most of the interest in building out tooling, infrastructure, etc. in that community is predominantly business-driven. That is, the Scala community tends to produce accelerators as a side effect of business need, not ars gratia artis, as many other communities do: Haskell, Rust, Go, etc. Or somebody produces something once, doesn’t update it, and doesn’t have the resources or skills to build a community around a much-used project. This is not a condemnation of maintainers who don’t maintain, but rather a failure of the Scala community to produce a group of curators — people who will take on the load when maintainers abdicate — as well-formed and intentional as many other communities have, or a culture of curation. “Disarray” is too strong a word, but as someone who moves between four languages daily, I can compare communities and their resources pretty well, and I conclude that my technical decision to use Scala is sound while the political decision frustrates me at times.

                          1. 4

                            I think it couldn’t be farther from the truth.

                            The main source of the embarrassing cycle of “Hype – Failed Promises – Abandonment – The Shiny New Thing – Hype – …” is in fact coming from the academic side.

                            To be clear: I’m not blaming students that they drop their work and completely disappear the minute they handed in their thesis, and let other people deal with the consequences. That’s just the way it is.

                            The problem is the ease with which they can get things added to the language/library, especially compared to the scrutiny outside contributions regularly receive.

                            This pattern has repeated over and over, and I think it’s one of the unchangeable parts of the language/community.

                            If you care about quality, documentation, or tooling, then Scala isn’t the right language for you, simply because “we managed to ship a new version of Scala without breaking every IDE” is not a topic you can write a paper about.

                            1. 1

                              This is a very important point and I’m glad that you wrote it out.

                              Would you say that the Scala ecosystem has a continuity problem because businesses and academia are focused primarily on the now and not necessarily building the road ahead for the community and then maintaining those roads once built?

                              1. 6

                                Think of it like this:

                                The core open-source community is the pizza dough that provides long-term stability and maintenance, and the academia/businesses provide the toppings.

                                Some languages have a large base of long-term, open-source contributors – they are family-sized pizzas which can accommodate a lot different toppings. People can get their favorite piece, everyone is happy.

                                Scala is different. It has barely any substantial open-source contributors remaining – and everyone is fighting over the toppings to be placed on that coin-sized piece of pizza dough. As a result the kitchen is a complete mess, and nobody is happy.

                                Scala’s problem is not the focus of businesses or academia, but that it doesn’t have any focus of its own.

                                There is literally no one left (since Paul walked away) who is able to establish or uphold any kind of technical standards, or tell people “no, we are not adding another 6 new keywords to the language”. Heck, they couldn’t even get their compiler test suite green on anything newer than Java 8, yet they have kept publishing new versions since 2017!

                            2. 2

                              conclude that my technical decision to use Scala is sound while the political decision frustrates me at times.

                              Of course you would claim it’s sound, but is that objective? What do you base that decision on? Genuinely curious here.

                              1. 2

                                Thanks for asking. I based my decision mostly on architectural analysis with a healthy dose of personal experience. We’re building a new product with cloud and on-premises components. Originally I’d intended to build everything from scratch, but my team identified some OSS that did 100% of what we needed, with some extra complexity in managing it. The trade-off was a much faster time to delivery in exchange for playing in someone else’s sandbox and by their rules. That aspect was worth it, but I lament not being able to write it all in Scala!

                                However, our cloud services are written mostly in Scala. We have a few stateless microservices for which Scala was a great fit: high-performance OOTB for what within six months will be ~500 req/s and within two will likely exceed 4,000 req/s for one component with another being far more I/O bound. We’re integrating heavily with ecosystems that have Java libraries and we’ve found decent Scala wrappers that give us idiomaticity without building and maintaining them ourselves. We could have used just about any stack for these couple of services but Scala’s enabled us to express ourselves in types and exploit the advantages of functional programming. I’m chasing the holy grail of “it’s valid because it compiles” but we’ve got enough unit tests to complement our design that I’m pretty sure we’re on the right track.

                                One notable failure was one service that is primarily a user-interactive web app. We had two false starts, with Scalatra (which we’re using for the other services) and Play!, before switching to Ruby on Rails (temporarily) out of frustration with the documentation, lack of examples, and lack of drop-ins like those commonly available in the Rails ecosystem, from whence I came many moons ago. We chose Rails because of the component owner’s experience with it, as well as my experience with it and JRuby, knowing that if we started to implement any shareable logic, we could do so in a way that all of our apps could consume. We learned about Lift too late, and http4s and some others didn’t give us the right impression for a web app. I just learned about Udash last week and it may be a candidate for replacement. However, it’ll be several months: even uttering R-E-W-R-I-T-E would be kiboshed from on high at the moment, as the component does what we need right now.

                                Moving forward, we’ll be looking at moving some of these services to http4s, etc. once more of my team is comfortable with more hardcore Scala FP. Writing AWS Lambda functions in Rust is also on my radar, as a part of our on-prem product is written in Rust.

                                1. 1

                                  Out of curiosity, did you evaluate Elixir? The obvious draw is the Ruby-like syntax but it seems that the Phoenix framework has a great concurrency story on top of being a Rails-inspired ‘functional MVC’ framework.

                                  1. 1

                                    There was a running joke on my team and another about us throwing everything away and doing it all in Elixir! I mused about it a little but decided that we were better off in RoR because we can hire for it more easily if we ended up committing to the RoR implementation long-term and because we could run RoR on JRuby as a transition step back to Scala, should we have enough business logic to merit a shared implementation. So far, the latter hasn’t been the case since the app is 95% CRUD.

                                    I’d really like to see an analysis of the websocket concurrency story of Elixir, Scala, and Rust, with perhaps some others for more general applicability. Our app is using good ol’ fashioned HTTP requests right now but we’ve identified some opportunities to shave some transfer overhead by switching to websockets eventually.

                                    1. 1

                                      That’s funny. I guess Elixir has some visibility in this space. I’m planning to use it to build a proof-of-concept. For me it’s the developer experience, combined with the performance and concurrency profile. Here are some comparisons: https://hashrocket.com/blog/posts/websocket-shootout

                                      More results available here: https://github.com/hashrocket/websocket-shootout

                                      1. 1

                                        Thanks for those. I’m sad that they’re out of date but the info is useful nonetheless.

                              2. 2

                                I don’t think that’s fair at all. There are a number of actively maintained projects that are at the center of the community.

                                I do think there are relatively few well maintained libraries outside of that core, but I think that’s a result of the community being relatively small, and the escape hatch of having java libraries available to do almost everything.

                                Community (non corporate) projects and organizations:

                                • All typelevel projects
                                • monix
                                • Scalaz
                                • Sbt
                                • Ensime

                                All widely used, all with large contributor bases.

                                Additionally the alternative compilers, scalajs and scala native.

                                1. 4

                                  At least from my perspective it seems that:

                                  • Typelevel is more busy in dealing with US politics than writing software,
                                  • Monix is largely a one-man show,
                                  • Scalaz is one of the main departure points for many people that move to Haskell,
                                  • Sbt is so great that everyone is trying to replace it, and
                                  • Ensime …? You just literally read the article by the Ensime creator.
                                  1. 2

                                    I do think there are relatively few well maintained libraries outside of that core

                                    The Scala community does have a solid core and near-core extension community. I consider libraries like Monix, Cats, and Scalaz to be nearly a part of the standard library because of how often they are used. sbt and ensime are important but they’re not exceptional: every stack needs a build tool and editor integration. These are solid now, and I appreciate the work that goes into them. Frankly, it wasn’t until sbt hit 1.0.0 that I considered it ready for widespread use because of its obtuseness/unergonomic interface prior to then. I’m eager to see what Li Haoyi’s mill will become.

                                    Things I’ve noted in the past that I’ve found in a less-than-desirable state compared to other stacks:

                                    • Internationalization - fragmented ecosystem, sbt plugins outdated
                                    • Authentication & Authorization - no nearly drop-in solution like Devise in the RoR ecosystem
                                    • Project websites being down for weeks because someone forgot to re-up the TLS cert, even in a LetsEncrypt automation world
                                    • Out-of-date documentation, to the point of describing dangerous practices, met with little more than a “someone submit a PR to fix that”. I get that maintainers get busy, but when safety is the topic, is it acceptable to wait for a drive-by contributor to get it right and contribute the correction? What if they do that and then no maintainer merges it for years?

                                    The Scala Center has the promise of addressing much of it but I speculate that it’s insufficiently funded to be the ecosystem plumbers and teachers it aspires to be. I’ve been impressed with its work so far, though.

                                1. 7

                                  It looks like they haven’t even paid the previous fine and are still appealing: https://www.theverge.com/2017/9/11/16291482/google-alphabet-eu-fine-antitrust-appeal

                                  I guess this one will go the same way? Even if their appeals are unsuccessful, these fines are probably not a big deal if they are able to drag these things out for years (ie cost per year wouldn’t be that high). By the time they have to pay and change their practices, they might have some other strategy in place.

                                  This reminds me a lot of Microsoft of the 2000s.

                                  1. 6

                                    I’m not sure if it’s a recent change, but Google will have to pay the fine into a trust account if they want to appeal. Either way, they have to pay now, and if they win the appeal they get it back (without interest).


                                    posted from my phone

                                    1. 2

                                      That’s good to know. It might be a bit more convincing then.

                                      1. 2

                                        That’s great, apparently they did learn from Microsoft!

                                      2. 3

                                        There are five different investigations the EU is doing into Google. They are at different stages. The previous fine is being appealed now. https://imgur.com/6uLtQX5

                                        This chart comes from a WSJ article from the day of the fine announcement.

                                        1. 1

                                          They may be fined for every day of non-compliance. The EU is effective, if there’s the will to act.

                                        1. 5

                                          There are tiny modern unixes for microcontrollers (LiteBSD, RetroBSD).

                                          For desktops, you can’t “take advantage of modern hardware” when you fit on a floppy. You need to take advantage of SMP (and even NUMA) processors, ACPI, GPUs, 10GbE NICs, NVMe SSDs… and not floppy drives.

                                          1. 1

                                            Couldn’t you target qemu/vmware/virtualbox virtual devices and save yourself all the trouble, if hardware support is a concern?

                                            1. 1

                                              What’s the point of a “minimalist Unix for enthusiast desktop” if you’re not going to run it on metal? Doesn’t the presence of a hypervisor ruin the whole minimal feel?

                                          1. 1

                                            I’m going on 10 years with (x)ubuntu for personal use. At the big dumb corps I’m usually at, I suffer with windows, and use cygwin. The few times I’ve been issued macs, it’s just jarring. I’ve never spent enough time with them to get used to the bsd heritage. I personally just don’t see the appeal.

                                            1. 14

                                              I’ve been using Macs for nearly a decade on the desktop and switched to Linux a couple of months ago. The 2016 MacBook Pro finally drove me to try something different. Between macOS getting more bloated each release, defective keyboard, terrible battery life, and the touch bar I realized that at some point I stopped being the target demographic.

                                              I switched to Manjaro and while there are a few rough edges as the article notes, overall there really isn’t that much difference in my opinion. I’m running Gnome and it does a decent enough job aping macOS. I went with Dell Precision 5520, and everything just worked out of the box. All the apps that I use are available or have equivalents, and I haven’t found myself missing anything so far. Meanwhile it’s really refreshing to be able to configure the system exactly the way I want.

                                              Overall, I’d say that if you haven’t tried Linux in a while, then it’s definitely worth giving another shot even though YMMV.

                                              1. 4

                                                terrible battery life

                                                Really? It’s that bad? The Dell is better?

                                                1. 3

                                                  I don’t know about Dell, but my 2016 MacBook Pro was hit pretty hard after the Spectre/Meltdown fix came out. I used to go 5 or 6 hours before I was down to 35-40%. Now I’m down to 20-25% after about 4 hours.

                                                  1. 2

                                                    Same here. I wonder if the Spectre/Meltdown fiasco has at all accelerated Apple’s (hypothetical) internal timeline for ARM laptops. Quite the debacle.

                                                    In regards to the parent: I have actually been considering moving from an aged MacBook Pro 15” (last of the matte-screen models; I have avoided all the bad keyboards so far) to a Mac /desktop/ (Mac Pro, maybe). You can choose your own keyboard and screen, and still get good usability and high performance. Then moving to a Linux laptop for “on the road” type requirements. Being able to leave work “at my desk” might be nice too.

                                                    (note: I work remotely)

                                                    1. 3

                                                      I honestly don’t understand the fetish for issuing people laptops, particularly for software development type jobs. The money is way better spent (IMHO) on a fast desktop and a great monitor/keyboard.

                                                      1. 2

                                                        Might be the ability to work remotely. I’m with you, though, that laptops are a bizarre fetish, as is working from Anywhere You Want(!)

                                                        1. 2

                                                          It’s an artifact of, among other things, the idea that you PURSUE YOUR PASSIONS and DO WHAT YOU LOVE*; I don’t want to “work anywhere” – I want to work from work, and leave that behind when I go home to my family. But hey, I’m an old, what do I know.

                                                          *: what you love must be writing web software for a venture funded startup in San Francisco

                                                      2. 2

                                                        Same here. I wonder if the Spectre/Meltdown fiasco has at all accelerated Apple’s (hypothetical) internal timeline for ARM laptops.

                                                        I wouldn’t guess that. Apple’s ARM design was one of the few also affected by Meltdown, so using it for a laptop wouldn’t have helped.

                                                        1. 1

                                                          I bought a Matebook X to run Arch Linux on and it’s been pretty great so far.

                                                          1. 1

                                                            I’ve been thinking about a librem 13. I’ll take a look at the matebook too. Thanks!

                                                      3. 2

                                                        Yeah I get 4-6 hours with the Dell, and I was literally getting about 2-3 hours on the Mac with the same usage patterns and apps running. I think the fact that you can be a lot more granular regarding what’s running on Linux really helps in that regard.

                                                        1. 5

                                                          +1 about deciding what you run on GNU/Linux.

                                                          I have a Dell XPS 15 9560 currently running Arch (considering switching to NixOS soon), and with Powertop and TLP set up I usually get around 20 hours (yes, 20 hours) per charge on light/normal use.

                                                          1. 1

                                                            Ha! Thanks for this, I didn’t know these were available!

                                                            1. 1

                                                              No problems! They’re very effective, and are just about the first package I install on a new setup.

                                                    1. 5

                                                      Thank god they clarified that this is NOT a compliment, because I am not a C programmer, but back when I was, code with three *’s made me cringe in the rare cases I saw it, and made me cringe even harder if there were no parens or other indicators to help me understand what the @$##@ you were trying to do with this code.

                                                      1. 1

                                                        The way I always thought of it, when I was doing C, was that each * was a dimension of an array, so * would be [], ** would be [][], *** would be [][][], etc. Did anyone else think of it that way?

                                                        1. 1

                                                          Yes, but IMO in that case I would always choose the [][] form, because it makes it utterly clear what you are trying to do, whereas general pointer dereferencing could be anything.
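                                                          A small sketch of that mental model and where it breaks down: each * is really one level of indirection, and a genuine 2-D array ([][]) is a different type from int**, even though both are indexed the same way. (The function names here are mine, just for illustration.)

```cpp
#include <cstddef>

// Sums a genuine 2-D array: one contiguous block of ints,
// so grid[r][c] is pure address arithmetic.
int sum_2d(int grid[][3], std::size_t nrows) {
    int total = 0;
    for (std::size_t r = 0; r < nrows; ++r)
        for (std::size_t c = 0; c < 3; ++c)
            total += grid[r][c];
    return total;
}

// Sums through an int**: a table of row pointers, so rows[r][c]
// does an extra pointer hop per row. Same syntax, different type.
int sum_pp(int** rows, std::size_t nrows, std::size_t ncols) {
    int total = 0;
    for (std::size_t r = 0; r < nrows; ++r)
        for (std::size_t c = 0; c < ncols; ++c)
            total += rows[r][c];
    return total;
}
```

                                                          So the “one * per dimension” intuition holds only for the pointer-table case; a real [][] array never becomes an int** on its own.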

                                                      1. 2

                                                        Yup… And bug trackers the world over are still littered with things like… https://bugs.ruby-lang.org/issues/8770

                                                        1. 1

                                                          Can you clarify what you mean here? I love this piece and refer to it frequently myself. I think it brilliantly illustrates why many things in tech are the way they are. I feel like you’re suggesting a point I’m missing.

                                                          1. 8

                                                            Yes, it does illustrate brilliantly why things in tech are the way they are.

                                                            In fact, why many things in our economy are the way they are.

                                                            Their solution to the PC losering problem is to transfer the complexity from the OS to the user.

                                                            ie. Every author of a program that invokes almost any system call must, on every invocation, remember to correctly handle the possibility of it returning EINTR.

                                                            Conversely, setting up a good test that proves that your code handles EINTR correctly, every time, is hard and you receive no help from the OS in doing so.
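                                                            To make that externalized burden concrete, here is the retry boilerplate (a minimal sketch; the function name is mine) that every caller of an interruptible syscall such as read() has to remember to write:

```cpp
#include <cerrno>
#include <unistd.h>

// If a signal arrives mid-call, read() fails with errno == EINTR
// and the caller must retry by hand. Forgetting this loop is exactly
// the class of sporadic bug the comment above describes.
ssize_t read_retrying(int fd, void* buf, size_t count) {
    ssize_t n;
    do {
        n = read(fd, buf, count);
    } while (n == -1 && errno == EINTR);
    return n;
}
```

                                                            Multiply that loop by every interruptible syscall in every program, and the scale of the transferred complexity becomes clear.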

                                                            Now if the effort that has since been deployed across literally tens of thousands of packages on fixing obscure, sporadic EINTR-related bugs… had been expended on solving the PC losering problem correctly…

                                                            We would all be much much better off.

                                                            ie. The cost of solving the PC losering problem has been externalized to all users of syscalls. This enabled the “Worse is Better” solution to win in the marketplace by winning the “time to market” race.

                                                            Alas, as is the case with very very many parts of our economic system, we reward via perverse incentives the “cheats” that externalize their costs…. but in the long run our entire civilization pays and pays and pays.

                                                            1. 2

                                                              Alas, as is the case with very very many parts of our economic system, we reward via perverse incentives the “cheats” that externalize their costs…. but in the long run our entire civilization pays and pays and pays.

                                                              Is there any known solution to this problem that would not also sacrifice a lot of good things in the process?

                                                              1. 2

                                                                I think you will find any and every proposed solution will be condemned out of hand, and indeed fought tooth and nail, by those benefiting most from externalizing their costs.

                                                                Thus caution is advised when listening to anyone saying “It won’t work”.

                                                                The world seems to be (deliberately) stuck in this foolish black-xor-white thinking about economic systems, instead of more thoughtful and nuanced debate.

                                                                ie. I think the world needs its systems refactored, not rewritten.

                                                                ie. We should be focusing on sinks of productivity and value, and tweaking the rules to reduce them.

                                                        1. 6

                                                          Finished up with a client so I’ve had a lot of free time to work on my novel, Farisa’s Crossing. I think there’s a good chance (over 90%) of my having it ready for an April 26, 2019 launch. We’ll see where it goes.

                                                          A few job interviews. Really looking for something stable and interesting, although there seems to be a very high Planck’s constant (or perhaps I am a small-mass particle) which makes it rare to find both.

                                                          Last week I simplified the rule set for Ambition. Going to run some simulations with more realistic players (i.e., with more intelligence than “random legal”) to make sure all the changes make sense, although I feel pretty good already.

                                                          1. 1

                                                            Are you looking for an interesting problem space or interesting problems? I am currently working as a developer on a platform for preschool institutions (nurseries, kindergartens, etc.), and while it seems boring on the surface, we have a ton of interesting challenges (also owing to a nice tech team that has a lot of leeway in technology). If you had asked me a year ago whether this area would be something I would be working in, I would have said no. Just a thought.

                                                            Book sounds super-interesting. Anywhere to sign-up for a release e-mail or similar notification?

                                                            1. 1

                                                              I’d be interested in talking to your company about their work, for sure. I’m capable as a data scientist, engineer, and manager. I’m in the DC area and can’t move (my wife’s job is here). And I can only work 9–5 (that is, not insane startup hours) because of my writing schedule.

                                                              I’ll reach out privately about the book. I’m going to recruit beta readers (between 10 and 20 is ideal) between now and September (with the book ready by then but a full read “due” in January). The time commitment at an average reading speed is about 1.5 hours per week (it’s a big book; projected WC is 220k, which is the main reason I’ve decided to self-publish it) over 16 weeks.

                                                              1. 1

                                                                Is this brightwheel? The iOS app is entirely too chatty; it shouldn’t req/resp on every user action.

                                                                1. 1

                                                                  No, we are not in the US per se (except for a few “beta” customers), so it’s unlikely you will have come across us - https://famly.co.

                                                            1. 3

                                                              The only problem here is that if an attacker was able to steal your token in the first place, they’re likely able to do it again once you get a new token. The most common ways this happens are man-in-the-middling (MITM) your connection or getting access to the client or server directly.

                                                              For my education: is there another security mechanism that will protect against MITM or a compromised server? Seems like if your JWT is being stolen then you’ve got bigger problems.

                                                              1. 6

                                                                JWTs are commonly stolen via XSS – which is particularly common today as more and more of the auth logic is being pushed into the browser which is an insecure channel.

                                                                There are some new things browsers are attempting to implement to help mitigate this risk, including token binding, https://datatracker.ietf.org/wg/tokbind/documents/, but to my knowledge there is no active implementation of token binding in a major browser yet.

                                                              1. 23

                                                                I believe we are beginning to see the downfall of YouTube as we know it. They are really going above and beyond to ruin their own platform and reputation.

                                                                1. 8

                                                                  That has been happening for a couple of years now. All the content that made YouTube popular is nowadays shunned and banned by recommendation algos. In short, if it cannot be monetized by US linear-TV standards, it cannot be found in search or recommendations. So unless you already have several hundred thousand followers (and ads enabled), your content is family-friendly, and you have used thousands of dollars’ worth of equipment, there are no new viewers.

                                                                  This hit people filming motorcycle-related videos pretty hard, as apparently that is very media-unsexy content in the US. That describes most of my YouTube subscriptions, and for most of them I watch every video they produce. Meanwhile, my YouTube “home”/“recommended” section is full of everything that is not related in any way to my most-watched stuff.

                                                                  1. 7

                                                                    Yes. This is the straw that breaks the camel’s back. The blocking of a 3D modeller’s help videos is going to be the downfall of YouTube. Unable to learn how to use their 3D modelling software, the masses will wander off to different venues in droves.

                                                                    /s

                                                                    (without snark: nobody outside of our little circle here cares about this. Not the advertisers, not YouTube, not the general audience, not the press. This is entirely inconsequential to YouTube’s future)

                                                                    1. 4

                                                                      You might compare it to gentrification. You cater to the middle ground, the cool stuff around the edges is pushed out, the really creative people abandon the platform, you’re left with the most generic content. Blender is just the latest victim of a broad trend.

                                                                      Most people may not “care” about Blender specifically, but they should care about an opaque platform that caters to the IP needs of multinationals in overly broad ways and incentivizes some really messed up behavior.

                                                                    2. 4

                                                                      It will be awesome to see what the video hosting landscape will be like when PeerTube reaches its height of popularity!

                                                                      1. 3

                                                                        I was checking out PeerTube yesterday and it’s a huge change from the YouTube user experience. A lot more involved, and a lot less intuitive. I have a hard time imagining mass adoption with what I saw. Are there any good beginner-friendly tutorials/intros to PeerTube out there?

                                                                        1. 3

                                                                          Take a look at https://d.tube/ too. It’s much closer to the youtube experience.

                                                                          1. 1

                                                                            You can always checkout this I guess: https://joinpeertube.org/en/#how-it-works

                                                                      1. 8

                                                                        Following any prescriptive approach (agile or otherwise) doesn’t work in my experience. People seem to be completely seduced by the idea that there is a magic methodology that works for all development teams, appropriating models from other industries (Lean Manufacturing, Kaizen) and selling them as cookie-cutter solutions.

                                                                        It’s a shame the agile manifesto ever became more than an enlightened discussion-starter. Maybe that happens when you set down a manifesto!

                                                                        Organizing the work of a typical development team (of 3-8 people) is not that hard when you don’t fixate on process between deliverables - i.e. trusting competent, motivated people to do what they’re good at.

                                                                        The “methodology” should not go beyond:

                                                                        1. Define what you need to do - one sentence per feature/capability
                                                                        2. Roughly size the tasks and choose a deadline
                                                                        3. Prioritize
                                                                        4. Do it :)
                                                                        5. Assess
                                                                        6. Go to step 1

                                                                        Sorry if this comes across as arrogant and dismissive, not intended as such. I just think a large portion of the software engineering industry is deluded.

                                                                        1. 5

                                                                          competent, motivated people

                                                                          Hints at your unstated step 0:

                                                                          1. Find competent motivated people who focus on solving business problems, build pragmatic maintainable systems, and choose technology based on merit, not resume bingo.
                                                                          1. 2

                                                                            What of a complex product, where domain knowledge is outside of the development team? Consider the outcome of your approach for, say, a payroll system.

                                                                            Let’s say I want you to write a simple CLI-based double-entry general ledger program. I suggest you could not do either step one or step two of your methodology with success.

                                                                            1. 2

                                                                              I’m not saying to write one sentence to define all that you need to do to build a general ledger program. Defining what you need to do could be a series of conversations with stakeholders, team brainstorms, requirements gathering - getting a shared understanding of the problem in whatever way works best for the project team. My point is that you don’t need to prescribe how that task definition is reached (I appreciate the irony of me prescribing it :) You don’t need a mental model to follow for every project.

                                                                              Regarding estimation, this falls out of defining the problem as a series of discrete capabilities of the system. In your example, an accountant could help define the acceptance criteria for the simplest possible application that could be considered a general ledger. From that, a team that understands the domain should be able to break it down into features or capabilities that can be easily described. That may result in more than one iteration’s worth of work, but some experienced heads should be able to give estimates within 20% accuracy for each capability.

                                                                              Of course, I’m also not implying that everything is simple. I threw out that 1-6 cycle only as an argument against process-heavy methodologies. In a complex system, that cycle could be very short and only cover a fraction of the final product.

                                                                              Hell, much of what I’m rabbiting on about is agile stuff, my only point is that I think agile is only useful as a starting point for a team to find their optimal way of working together. As far as success goes, I’d say it’s far more important to have the right people and attitudes within a team than following the 10 Commandments of Agile. A human project team isn’t a machine to be oiled.

                                                                          1. 7

                                                                            I find this article to be uninspiring, ill-informed, and juvenile. IME there are two basic questions that ought to concern anyone: is the (business-critical) data consistent? And who’s responsible for keeping it consistent? In either case, as an application developer I unequivocally REJECT that responsibility.

                                                                            1. 14

                                                                              When I was learning C++, it felt to me like very little of that knowledge was transferable. I wasn’t learning concepts or techniques that I could apply elsewhere, just the idiosyncrasies of C++. I guess that depends on what experience you bring into it; I picked it up relatively late in my computer life. It’s come in handy at some jobs, though; there’s always that legacy system that no-one really wants to touch, and I get to feel like a badass for volunteering. So for me, the value of learning C++ is simply that I can work on existing code, as well as read/learn from cool projects that happen to be written in it. I don’t think it’s rewarding in itself, but I agree on your point about confidence.

                                                                              1. 6

                                                                                it felt to me like very little of that knowledge was transferable. I wasn’t learning concepts or techniques that I could apply elsewhere, just the idiosyncrasies of C++.

                                                                                That’s a good way to phrase it; I thought that as well, but failed to articulate it. Also, C++ is the ultimate technical-interview “stump the chump” quiz-show language. I was once in a Java interview, and I have C++ on my resume because I used to use it a lot. The interviewer started in on some C++ questions. I said, “Woah, is this a C++ position?” No, it’s Java, was the reply. “Let’s stick to Java, shall we?” I did not get an offer; I wouldn’t have accepted anyway.

                                                                                1. 4

                                                                                  it felt to me like very little of that knowledge was transferable. I wasn’t learning concepts or techniques that I could apply elsewhere, just the idiosyncrasies of C++

                                                                                  Same here. Later, I learned it was features of wildly-different languages merged together in a C-compatible way. Then, they kept extending it. The result is a mess of a language. PreScheme and Modula-3 had cleaner, consistent designs with plenty of features. With good design, they also compile really fast. Slow compiles were a major factor in keeping me away from C++. For modern stuff, D’s design eliminated that with many benefits. Rust was slow to compile last I looked but brought major benefits to justify it. Nim has nice syntax, macros, and C compatibility. Idk about compile times.

                                                                                  So, looking at the competition, I find C++ to be unnecessarily hard to learn and use due to its design choices. Different design choices could’ve improved syntax, safety, compile time, or even runtime efficiency. The good news is there are alternatives now with decent ecosystems, one with a great ecosystem. Unless you’re doing legacy code, I’d say invest effort into one or more of those instead. Further, remember that ports of legacy components don’t necessarily require knowing the language: one might team up with a person who knows that language and have them translate the code for you into pseudocode or something.

                                                                                  1. 5

                                                                                    Yeah, Rust is my weapon of choice for new projects. Compile times are pretty bad on my old T420, but is slowly improving. I am eagerly looking forward to possible alternative backends for debug builds.

                                                                                    1. 2

                                                                                      I keep using C++ for two reasons. One, Bjarne is the greatest language designer because he never gave up and created a language that is only becoming more relevant over time; other designers give up and end up making new languages, one after another. Two, it turns out the world is a messy place, and C++ has lots of symmetry-breaking features. “Clean, elegant” languages fail to make programming as easy for humans.

                                                                                      1. 4

                                                                                        What is the value in a symmetry breaking feature?

                                                                                        1. 2

                                                                                          I think the first one is addressed mostly by the economic and social side of it, as with C/UNIX’s spread. It can be an advantage, but it also allows for designing better languages that similarly use ecosystem power, even with lots of C++ compatibility and fewer of the disadvantages. ZL was an attempt at that which gets no attention. On your second point, C++ seems to have unnecessary complexity and performance impact vs its competitors, which means it’s unclean or inelegant for unjustified reasons. A better C++ could be created that reduced the difficulties or gave even more benefits to justify dealing with the language complexity. I already named some languages doing that.

                                                                                          There’s definitely a lot of critical apps written in C++, though. Very unfortunate, too, since I wanted to apply strong-assurance technology to some of them. There was hardly anything to use compared to C. The learning curve was also huge compared to some others. I had to back off on that, but I’m still thinking conceptually about translators, including things like ZL.

                                                                                    1. 3

                                                                                      One problem with std::optional, at least at the moment, while it’s relatively new, is that std is opinionated, so you often won’t find library functions that work with a std::optional-based codebase.

                                                                                      For example, parsing an integer from a string is a classic example of a function which might not succeed. So it would make sense to use std::optional to store the result. However, the standard library provides int stoi(const std::string& str, std::size_t* pos = 0, int base = 10) and friends, which signal failure by throwing exceptions.

                                                                                      So, in theory, std::optional provides an alternative way to handle failure, somewhat like some haskell or rust code might, making the possibility of failure explicit in the type, and thus forcing you to explicitly handle it or pass it on. However, (unless a library exists which I’m not aware of?) you may need to reimplement large parts of the standard library to make them fit.
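                                                                                      For example, a hand-rolled adapter (the name parse_int is mine, not a standard facility) that bridges the exception-throwing std::stoi to an optional-returning interface might look like:

```cpp
#include <optional>
#include <stdexcept>
#include <string>

// Map std::stoi's exceptions (and trailing garbage) into
// std::optional, so failure is visible in the return type.
std::optional<int> parse_int(const std::string& s, int base = 10) {
    try {
        std::size_t pos = 0;
        int value = std::stoi(s, &pos, base);
        if (pos != s.size())
            return std::nullopt;  // e.g. "42abc": only a prefix parsed
        return value;
    } catch (const std::invalid_argument&) {
        return std::nullopt;  // no digits at all
    } catch (const std::out_of_range&) {
        return std::nullopt;  // doesn't fit in an int
    }
}
```

                                                                                      It works, but it illustrates the complaint: every exception-based standard function needs its own shim like this before an optional-based codebase can use it.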

                                                                                      1. 3

                                                                                        Right. “This is a feature of the standard library!” means something entirely different in C++ than in other programming languages.

                                                                                        1. 2

                                                                                          Can you make it much less of a headache by defining a generic function that takes a lambda, calls it in a try/catch, returns the successful value from the try branch, and returns nullopt from the catch?
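                                                                                          A rough sketch of that suggestion (try_optional is a made-up name, not a standard facility) could be a single template that maps any thrown exception to nullopt:

```cpp
#include <optional>
#include <utility>

// Run any callable; if it throws anything at all,
// collapse the failure into std::nullopt.
template <typename F, typename... Args>
auto try_optional(F&& f, Args&&... args)
    -> std::optional<decltype(std::forward<F>(f)(std::forward<Args>(args)...))> {
    try {
        return std::forward<F>(f)(std::forward<Args>(args)...);
    } catch (...) {
        return std::nullopt;
    }
}
```

                                                                                          This avoids rewriting each wrapper by hand, at the cost of discarding the exception's type and message, which is exactly the concern raised in the reply below about blind catch-alls.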

                                                                                          1. 1

                                                                                            There are any number of workarounds that obfuscate the code to varying degrees. This same situation arose with Optional in Java 8: it’s there, but not really, so in a lot of places you’d like to use it you have to go through similar contortions. The other problem is interacting with different teams writing different parts of the app; everyone has to be on the same page or you’ll end up wrapping/unwrapping optionals all over. And libraries. In the end I found optionals were a lot of trouble for very little gain.

                                                                                            1. 1

                                                                                              I did wonder about that. However, blindly catching all different exceptions and effectively discarding the information about which exception it was seems unwise. Of course you could keep the information while still using sum types, but then you don’t really want std::optional, you want an either type which can hold either a valid value or an error code. I’m not sure whether the standard library has one of these or whether you’d have to roll your own.
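                                                                                              The standard library only gained a dedicated either type with std::expected in C++23; before that, std::variant can play the role. A minimal sketch (names are mine) that keeps the information about which failure occurred:

```cpp
#include <stdexcept>
#include <string>
#include <variant>

// A variant-based "either": success is an int, failure is an
// error code that preserves *which* exception occurred.
enum class ParseError { NoDigits, OutOfRange };
using ParseResult = std::variant<int, ParseError>;

ParseResult parse_either(const std::string& s) {
    try {
        return std::stoi(s);
    } catch (const std::invalid_argument&) {
        return ParseError::NoDigits;
    } catch (const std::out_of_range&) {
        return ParseError::OutOfRange;
    }
}
```

                                                                                              Unlike the optional version, the caller can still distinguish “not a number” from “too big” via std::holds_alternative or std::get.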

                                                                                              1. 1

                                                                                                blindly catching all different exceptions and effectively discarding the information about which exception it was seems unwise

                                                                                                Sure, I wouldn’t be very happy with a blind try/catch around something like a database access or RPC call. Just if the thing you’re wrapping is something really boring like (say) parsing a string into an integer, the exception if it goes wrong isn’t going to be very interesting anyway.