1.  

    (…) I think it’s being downvoted by people who expect the discussion about it to be contentious. (…)

    I’m in the camp that thinks we need to have very serious talks about ethics in software engineering, but that’s not really what lobste.rs wants to be about, and in this case it would just turn into a discussion about Gitlab in particular. So I’m downvoting it for exactly the reason you said.

    1.  

      the subculture of the compulsive programmer, whose ethics prescribe that one silly idea and a month of frantic coding should suffice to make him a life-long millionaire

      The more things change, the more they stay the same.

      1.  

        That’s why the CLOUD Act was passed last year.

        1.  

          By the way, I think mathematical provability is a vain hope. With multi-year, large-team efforts we cannot prove that a 1,000-line program cannot be breached by external hackers, so we certainly won’t be able to prove very much at all about large AI systems.

          seL4 is 10 times larger and comprehensively verified.

          The good news is that we humans were able to successfully co-exist with, and even use for our own purposes, horses, themselves autonomous agents with ongoing existences, desires, and super-human physical strength, for thousands of years.

          This compares super-human physical strength with super-human intelligence, which is a flawed comparison. Co-existence with super-human strength does not imply co-existence with super-human intelligence, not at all.

          1.  

            I think the queueing is independent of what I choose for messaging. If I use REST, for example, and I want a reliable transaction, then I can just pop a request transaction and a callback onto a queue that will guarantee a response with retry logic.

            You can bundle multiple calls together as necessary.

            The same endpoint can be used without the queue at all.

            I’d rather design the operation to handle B failing and degrade itself gracefully.
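
            For concreteness, here’s a minimal sketch of that queue-plus-callback idea in Python. The names (enqueue_request, worker) and the use of the third-party requests client are my own illustration, not anything prescribed above:

            import queue
            import time

            import requests  # assumed third-party HTTP client; any REST client would do

            # Hypothetical setup: each work item pairs a REST request with a callback.
            # A worker drains the queue with retry + exponential back-off, so the caller
            # gets a guaranteed (eventual) response, independent of the messaging choice.
            work_queue = queue.Queue()

            def enqueue_request(method, url, payload, callback, max_retries=5):
                work_queue.put((method, url, payload, callback, max_retries))

            def worker():
                while True:
                    method, url, payload, callback, max_retries = work_queue.get()
                    for attempt in range(max_retries):
                        try:
                            resp = requests.request(method, url, json=payload, timeout=10)
                            resp.raise_for_status()
                            callback(resp.json())
                            break
                        except requests.RequestException:
                            time.sleep(2 ** attempt)  # back off before the next retry
                    work_queue.task_done()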

            1.  

              Oh! Didn’t realize someone else had already posted this here until just now, when I posted the announcement on emacsconf-discuss and was about to post it here :)

              We have an awesome lineup of talks this year, and I’m very excited for November 2.

              1.  

                It stops short of recommending any particular active method, which IMO was particularly wise of the authors.

                Good eye. I think it’s safe to recommend random, blinded selection from a pool where only the candidates’ work or writing is seen. If anything, it sounds almost too unbiased. I mean, the real bias would be in the inputs, if anything person-specific came through in the writing/projects, and so on. Yet it knocks out a lot of issues while sounding pretty non-discriminatory (i.e. random). That could be the default for folks who are worried about perception, and it can be phrased as a possibility instead of a strong recommendation.

                My method remains looking for X number of talented people in each group, making that the pool, and reviewing random samples of the entire pool in a blinded way. That way you’re biasing the supply side equally across groups; this is the social-justice aspect that will be controversial. From there, each person earns their place based on performance. They know some tricks might have loaded the supply side a bit to combat discrimination, but each person who got in did so because they earned it.

                Although I haven’t tested it for proof, I strongly believe that a method creating that perception of earned placement… one that’s actually true… would make a world of difference vs things like traditional affirmative action or a non-white/non-male-only focus in “diversity” initiatives. This isn’t just for my group: lots of people outside the majority also oppose methods that are, or look like, hand-outs, preferring to know they earned their place. That’s probably intrinsic to human nature.

                Combating the injustice problem in a fair way that doesn’t create resentment is a high-priority issue for me. My experience in the South motivates me to avoid re-igniting tensions where possible. My solution focuses on fairness, since that perception or need seems to have the biggest effect on whether a method gets acceptance or extreme push-back. I encourage people to have a go at this one to see whether it works or flops. All I ask is credit for my contribution. If it flops, I’ll own up to that, try to figure out why, and come up with something more effective. That’s my responsibility, since I pushed it.
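
                For what it’s worth, a minimal sketch of that blinded-pool idea in Python (the function names, candidate fields, and per-group quota below are made-up illustrations, not a vetted procedure):

                import random

                # Made-up sketch: draw X candidates from each group to form the pool,
                # strip anything identifying, then review random samples of the whole
                # pool seeing only the work samples.
                def build_blinded_pool(candidates_by_group, per_group):
                    pool = []
                    for candidates in candidates_by_group.values():
                        pool.extend(random.sample(candidates, per_group))
                    # Blinding step: keep only the work sample; drop names and group labels.
                    return [{"work_sample": c["work_sample"]} for c in pool]

                def review_batch(blinded_pool, batch_size):
                    return random.sample(blinded_pool, batch_size)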

                1.  

                  It embarrasses me to admit this, but Python is the only language I use professionally in which I find myself opening files irrelevant to what I’m working on to copy & paste from the last time I figured out how to call a function. Sure, the official documentation is impenetrable, but that pales in comparison to my daily experience of quick-viewing a function signature only to be greeted by something like:

                  apply_function(a, b, key=False, type='default', *args, **kwargs)
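
                  For contrast, a made-up version of that same signature with keyword-only parameters and type hints; none of this comes from a real library, and the type parameter is renamed to kind to avoid shadowing the builtin:

                  from typing import Callable, Optional

                  # Hypothetical rewrite of the opaque signature above: explicit, annotated
                  # parameters instead of a bare *args/**kwargs catch-all.
                  def apply_function(
                      a: list[int],
                      b: list[int],
                      *,
                      key: Optional[Callable[[int], int]] = None,
                      kind: str = "default",
                  ) -> list[int]:
                      """Combine a and b element-wise, transforming each element with key first."""
                      key = key or (lambda x: x)
                      if kind != "default":
                          raise ValueError(f"unsupported kind: {kind}")
                      return [key(x) + key(y) for x, y in zip(a, b)]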

                  1.  

                    1.1. The first one instantly knocks out many people working for surveillance-oriented and for-profit, owner/shareholder-focused companies. Maybe not all of them, since some of those companies do provide a large benefit to the public. Folks looking for jobs can usually go for one that aims for more good; those who can’t might not be able to agree with this.

                    1.2. The second one I’m already practicing. You have to be willing to turn down six figures or more to do this. On the abstract side, the problem with Do No Harm is that one sometimes has to do harm to create the opportunity to do good. They actually acknowledge that in their Do No Harm section. Maybe I could agree to it in a conditional way.

                    1.3. Be honest and trustworthy. Again, I can mostly do this. The second I do business in mass or enterprise markets, I won’t be able to. That’s because marketing requires at least selective omission of truths to highlight one’s own products and/or protect trade secrets. Sometimes outright deception is called for if the environment is predatory in a way that makes the honest disappear. I can be as honest as possible in any given situation, though; the Code doesn’t allow for that here.

                    1.4. I help and call everyone out equally, or as close to it as I can. I’m already boosting folks that need it, so I should be way ahead on this one, like 1.2. Accessibility tech is the only thing I’d be behind on, probably updating my knowledge before a product release. The only problem here is if they force a specific belief system or type of practice. Long-time readers know there’s going to be an ACM meta on that.

                    1.5. Respect I.P. laws. I’m a strong opponent of current I.P. laws and have called for reforms. The U.S. system also allows people to get patents without doing anything to earn them, then sue real inventors for massive money in often-rigged cases. DMCA abuse is rampant. Although I’d deal fairly, I can’t argue I’d respect, according to some courtroom somewhere, all DMCA notices or patent claims. I’d be a target of claims at some point. I’d say fuck them and the ACM before I’d stay a member. Introspectively, I feel good knowing we got as far as I.P. laws before I strongly said “Fuck that!”

                    1.6 and 1.7: Privacy and confidentiality. The name is nickpsecurity. So, of course. :)

                    I’ll just stop there, since this is a long comment with plenty for people to consider. I think the nature of capitalism, or at least the demand side of employment contracts, also makes some stuff in Section 2 questionable. I might not be able or willing to do some of it. Maybe another write-up another day. Also, why look at it if at least one requirement, maybe two, already disqualified me from being “ethical” enough for the ACM? ;)

                    I love the ACM/IEEE as a researcher. They have great content, and I strongly encourage people to get a membership to at least one to see cutting-edge research; most stuff is cross-posted to both. I just might have to cross my fingers behind my back if I click the “I Agree” checkbox on a thing or two. That’s all I’m saying… Oh shit, there I go failing Section 1.3 again… I wonder how many pentesters could make it, haha.

                    1.  

                      Ah, that’s good to hear. For some reason the post made me think it wasn’t signed, just base64 encoded.

                      1.  

                        The footer is signed, just not encrypted. So it is verified.

                        1.  

                          This feels like another dirty hack, shifting the responsibility to both kernel and userspace devs.

                          1.  

                            It’s useful to distinguish “produce the same results, following the same method” (say, a hardware recreation of a hardware device) and “produce the same results, following a different method” (say, a software recreation of a hardware device).

                            While some people use “simulation” and “emulation” respectively to mean those things, other people define them the other way around. For example, a “flight simulator” follows a very different method from a real aircraft, and “fluid simulation” generally aims to “look right” rather than throw billions of tiny particles around.

                            To avoid confusion, I stick with “hardware emulation” to describe reverse-engineered devices like the Pocket, and “software emulation” to describe reverse-engineered programs.

                            1.  

                              ASN.1 is infamously hard to parse. I don’t think anything new and even remotely security oriented uses it, for good reason.

                              I mean, if we’re going old and RFC specified, why not XDR? 😉

                              1.  

                                Do you think time to comprehension is a function of the person and the codebase together?

                                Certainly there are factors like familiarity with the general design and tools which will affect it.

                                But also I have noted a “That’s the way I think” factor.

                                I find reading “man bash” hurts my head; so many of the choices are “not the way I, personally, think.” On the other hand, most of the choices made by Matz, the Ruby guy, are the way I personally think, so I find the Ruby standard libraries a breeze to read… (Sadly, the .c files are a bit of a pain.)

                                That said, follow-on principles emerge from my principle irrespective of “the way you think”… i.e. Connascent Coupling is Bad. Very Bad.

                                Lots of globally accessible state is Very Bad; it makes it very, very hard to reason about causality.
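
                                A tiny, made-up Python illustration of that last point:

                                # Contrived example: the same calculation with and without
                                # globally accessible state.

                                CONFIG = {"discount": 0.1}  # global and mutable: anything may change it

                                def price_with_global(amount):
                                    # To predict the result you must know every place that may have
                                    # mutated CONFIG before this call ran.
                                    return amount * (1 - CONFIG["discount"])

                                def price_explicit(amount, discount):
                                    # Causality is local: the result depends only on the arguments.
                                    return amount * (1 - discount)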

                                1.  

                                  I saw this and thought it was a good explanation of some of the issues with different ways of visualizing a dataset, followed by proposed fixes.

                                  Please let’s not sidetrack into the political aspect.

                                  1.  

                                    a homeowner who follows the builders around everywhere,

                                    Plenty of engineers complained about this. Scope creep and overbearing clients are universal.

                                    I can attest to that. I’ve trained and worked in software for 20 years now, but I originally trained as a TV repairer. One repair shop I had a placement in during training had a notice on the wall that could be seen by customers:

                                    We charge 300/hour.
                                    If the customer wants to watch, we charge 600/hour.
                                    If the customer wants to help, we charge 900/hour.

                                    To be totally honest, I believe it was meant more as a humorous deterrent than to be taken literally, but there’s no smoke without fire, as they say :-)

                                    1.  

                                      What is the benefit of the footer being completely unverified?
                                      It seems to me that it would make the footer both untrustworthy and potentially dangerous (it exposes parsing to unverified input).

                                      Another aspect of JWT that I always disliked was that reserved keys are mixed in with data keys in the claims section. Why not just have the claims be a section entirely separate from the data, or, at the very least, a dedicated data: {} subsection?

                                      (I also wish the overall encoding used something like tnetstrings/netstrings instead of JSON, with just the dedicated data section using JSON, but I guess JSON is so ubiquitous these days that it is more or less expected.)
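
                                      To make the claims complaint concrete, here is a made-up example of a standard payload next to the suggested layout with a dedicated data subsection (the issuer and application keys are invented, and the second form is not part of any standard):

                                      # Standard JWT payload: RFC 7519 registered claims ("iss",
                                      # "sub", "exp", ...) share one namespace with application keys.
                                      jwt_claims = {
                                          "iss": "https://auth.example.com",
                                          "sub": "user-42",
                                          "exp": 1700000000,
                                          "role": "admin",      # application key at the same level
                                          "team": "platform",   # application key at the same level
                                      }

                                      # Hypothetical alternative with a dedicated data subsection:
                                      separated_claims = {
                                          "iss": "https://auth.example.com",
                                          "sub": "user-42",
                                          "exp": 1700000000,
                                          "data": {
                                              "role": "admin",
                                              "team": "platform",
                                          },
                                      }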

                                      1.  

                                        I think the ‘public good’ stuff is a pretty sharp turn away from the culture we have now. Unclear whether affected industries (ex: free-to-play videogames) have many ACM members on staff.

                                        Calling for ‘active’ preservation of diversity is controversial (I support it personally but the arguments over that are certainly ongoing in public). It stops short of recommending any particular active method, which IMO was particularly wise of the authors.

                                        Other than that it’s pretty standard professional body stuff - same as any other professional body code of ethics I’ve read (that is, accountancy and law in Australia).

                                        1.  

                                          Do you think time to comprehension is a function of the person and the codebase together? For example, there are some cases where I’ve understood something about a bit of code a lot faster than someone else, and vice versa. In cases like these, I think, neither the reader of the code nor the code itself can independently explain “time to comprehension.”