1. 2

    Self-hosted on Debian using iredmail. Used to run my own exim/MySQL setup for years (& qmail before exim), but decided to switch hosting provider and do everything fresh with all the latest everything. Started getting deep into research on imap and smtp servers and spam and antivirus filters and dkim and dmarc and … eventually just gave up and used iredmail, even though it uses postfix (which I’d avoided for years because of the whole djb/Venema spat). It’s really easy to set up, uses secure protocols by default in every part, still lets me have complete control over it all, and gives me roundcube webmail for free (even though I barely ever use that, mostly imaps). I’ve been running it for about 2.5 years and never ever have to look at it apart from the odd upgrade, which is reliable & generally works pretty easily too.

    1. 4

      Hopefully they only hide www. when it is exactly at the start of the domain name, leaving duplicates and domains in the middle (like notriddle.www.github.io and www.www.lobste.rs) alone.
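
      To illustrate, here is one reading of that hoped-for rule as a quick JavaScript sketch; the helper name is my own, and actual Chrome behaviour may differ:

      // Elide a single leading "www." label only, leaving inner labels
      // and duplicated leading labels untouched.
      const elideWww = host =>
        host.startsWith('www.') && !host.slice(4).startsWith('www.')
          ? host.slice(4)
          : host;

      elideWww('www.lobste.rs');           // 'lobste.rs'
      elideWww('notriddle.www.github.io'); // unchanged
      elideWww('www.www.lobste.rs');       // unchanged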

      1. 43

        How about just leaving the whole thing alone? URI/URLs are external identifiers. You don’t change someone’s name because it’s confusing. Such an arrogant move from Google.

        1. 11

          Because we’re Google. We don’t have to care. We know better than you.

          1. 3

            Eventually the URL bar will be so confusing and arbitrary that users will just have to search Google for everything.

            1. 5

              Which is, of course, Google’s plan and intent all along. Wouldn’t surprise me if they are aiming to remove URLs from the omni bar completely at some point.

          2. 3

            It’s the same with Safari on Mac - not only do they hide the subdomain but everything from the URL root onwards too. Dreadful, and the single worst (/only really bad) thing about Safari’s UI.

            1. 3

              You don’t change someone’s name because it’s confusing

              That’s why they’re going to try to make it a standard.
              They will probably also want to limit the ports that you can use with the www subdomain, or at least propose that some be hidden, like 8080.

              1. 2

                Perhaps everyone should now move to w3.* or web.* names just to push back! Serious suggestion.

              2. 1

                Indeed, but I still think it is completely unnecessary and I don’t get how this “simplifies” anything.

              1. 1

                Wasn’t sure if this was really on-topic, mainly wanted to see what lobsters think about the idea of Google & Amazon voluntarily doing this, particularly given Google’s reported China search plans.

                1. 4

                  I feel it’s inevitable that somebody breaks some service’s security using SNI/Host confusion. I’m not sure what the attack looks like, but I know it has to happen. :) Something like cache poisoning maybe. https://portswigger.net/blog/practical-web-cache-poisoning

                  1. 3

                    I run an SSL termination endpoint with multiple domains on it. I would not, in a terms of service sense, permit one of those domains to domain front by setting the SNI field to another domain I was incidentally also hosting. I do however have a domain that does nothing but handle SSL/TLS requests when there is no SNI field. It allows my endpoint to gracefully degrade into an error message that can be delivered via https. Typically this would only happen with old and deprecated SSL software and connections, but I wouldn’t be bothered by a domain owner using this already dedicated domain in their SNI field if they wanted to.

                    I’m a long way from having the problems or concerns highlighted in this article, but you did ask what folk thought.
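
                    A sketch of the no-SNI fallback idea, assuming an nginx-style terminator (the domain name and paths are placeholders):

                    server {
                        listen 443 ssl default_server;   # clients that send no SNI land here
                        server_name fallback.example;    # dedicated error-page domain
                        ssl_certificate     /etc/ssl/fallback.example.crt;
                        ssl_certificate_key /etc/ssl/fallback.example.key;
                        return 421;                      # Misdirected Request, delivered over https
                    }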

                    1. 2

                      Makes sense. Thanks :-)

                  1. 1

                    Here’s an example from this post:

                    export default handleActions(
                      {
                        ADD_TODO: (state, action) => {
                          return {
                            ...state,
                            currentTodo: '',
                            todos: state.todos.concat(action.payload),
                          };
                        },
                        LOAD_TODOS: (state, action) => {
                          return {
                            ...state,
                            todos: action.payload,
                          };
                        },
                        UPDATE_CURRENT: (state, action) => {
                          return {
                            ...state,
                            currentTodo: action.payload,
                          };
                        },
                        REPLACE_TODO: (state, action) => {
                          return {
                            ...state,
                            todos: state.todos.map(
                              t => (t.id === action.payload.id ? action.payload : t)
                            ),
                          };
                        },
                        REMOVE_TODO: (state, action) => {
                          return {
                            ...state,
                            todos: state.todos.filter(t => t.id !== action.payload),
                          };
                        },
                        [combineActions(SHOW_LOADER, HIDE_LOADER)]: (state, action) => {
                          return {...state, isLoading: action.payload};
                        },
                      },
                      initState
                    );
                    

                    To me this and Redux in general both look like a half-baked reimplementation of JavaScript’s class, except the method names are UPPER_CASE because they are “constant”, and state is managed elsewhere and passed as a parameter instead of being stored on this. Why reinvent this syntax? Is it just so we can be sure we’re using “functional programming” and have “no internal state”?

                    If you want to eliminate boilerplate, cut the knot. Let me write my reducer as a class, and then generate the action creators, action names, etc from the class’s declared methods. You can still have all the benefits of redux while using a pleasing syntax.

                    A sketch (pardon, on mobile):

                    class TodoReducer extends Reducer {
                      static initialState = () => ({ ... })
                    
                      addTodo(state, todo) {
                        return {
                          ...state,
                          currentTodo: '',
                          todos: state.todos.concat(todo),
                        }
                      }
                    
                      // more methods ...
                    }
                    
                    // each camel-case methodName has a constant case METHOD_NAME.
                    // Returns an object with each constant case method name mapped to itself. 
                    export const actions = TodoReducer.actions()
                    
                    // creates an object mapping methodNames to functions that return an action object with type 
                    // set to the constant case name of that method
                    export const actionCreators = TodoReducer.actionCreators()
                    
                    // creates a reducer function from the class that handles dispatching redux actions as regular method calls on an instance of the class.
                    // the wrapper function could also do hand-holding like assert there is no state being recorded in the instance
                    // or, a new instance could be created for each dispatch, although the perf would suck
                    export const reducer = TodoReducer.reducer()
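
                    And a hypothetical sketch of what the Reducer base class could look like (not a real library; constantCase and the method-enumeration details are assumptions):

                    const constantCase = name =>
                      name.replace(/([a-z])([A-Z])/g, '$1_$2').toUpperCase();

                    class Reducer {
                      static methodNames() {
                        return Object.getOwnPropertyNames(this.prototype)
                          .filter(n => n !== 'constructor');
                      }

                      // { ADD_TODO: 'ADD_TODO', ... }
                      static actions() {
                        return Object.fromEntries(
                          this.methodNames().map(n => [constantCase(n), constantCase(n)])
                        );
                      }

                      // { addTodo: payload => ({ type: 'ADD_TODO', payload }), ... }
                      static actionCreators() {
                        return Object.fromEntries(
                          this.methodNames().map(n => [
                            n,
                            payload => ({ type: constantCase(n), payload }),
                          ])
                        );
                      }

                      // (state, action) => newState, dispatching to the matching method
                      static reducer() {
                        const instance = new this();
                        const byType = {};
                        for (const n of this.methodNames()) {
                          byType[constantCase(n)] = instance[n].bind(instance);
                        }
                        return (state = this.initialState(), action) =>
                          byType[action.type] ? byType[action.type](state, action.payload) : state;
                      }
                    }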
                    
                    1. 1

                      Personally I’d much rather avoid method-name magic, and I find splitting actions/constants/reducer into 3 files in a module is an easy way to organise it, even if it is a fair chunk of boilerplate, and it all ends up seeming a lot simpler and less overblown than that example where it’s all done inline. Yes it could totally be more concise, but I really like explicit. Just my opinion, obvs.

                      1. 1

                        To me, the core benefits of redux are:

                        1. All (important) state in one spot: simplifies reasoning and decision making
                        2. Dependency injection: callers and consumers both have no knowledge of the state holder
                        3. serializability: actions are simple objects that can be recorded, streamed across the network, replayed, ….

                        What’s the purpose of the CONSTANTS file? How do you use those constants, other than to put them in the { type: } field of a serializable action, or to reason about an action in a reducer?

                        How does moving the method names of a class into another file improve organization? What benefits do you see in an explicit composition from parts usually left implicit in other patterns?

                        (As for the CONSTANT_NAME magic proposed in my example: sure, it’s immaterial. Trade following redux style convention for less magic.)

                        1. 1

                          I think those are the core benefits too :-) No purpose for such files other than to keep everything in one place, make it easily accessible, and know easily where I am with everything, and never have to remember name-conversion conventions or any of that stuff. Nothing more - but for my way of working it’s clearer and easier.

                      2. 1

                        One reason is that actions and reducers have a one-to-many relationship. You can decompose a single reducer into several reducers and potentially all of those reducers handle the same action. Actions are not meant to be remote function calls.

                      1. 5

                        Generative artwork has been dominated by the intelligent and the intellectual. It has been obsessed with the discovery of clever algorithms and optimizations.

                        I think the author should provide evidence of his premise. I can’t think of any generative art where the method was emphasized more than the outcome. Maybe my experience is an exception to the rule.

                        Secondly, I’m not sure that the method doesn’t matter. He makes some very broad generalizations about the audience: “the viewer doesn’t give a shit if Rothko ground his own pigment by hand.” I don’t know how many viewers tend to care about that detail of Rothko’s work in particular. But I think many viewers do care about the methods used to produce Sol LeWitt’s Instruction-based art. And those works, which are essentially algorithms manually executed by assistants, are a much closer analogue to generative art produced with a computer. It is banal to point out that people, both artists and audience, are different and will find meaning in different things. Some people find the method meaningful. Others don’t. This should be something we all already understand.

                        In general, this piece lacks nuance and fails to meet a basic standard for critical thinking.

                        1. 3

                          I can’t think of any generative art where the method was emphasized more than the outcome.

                          “Algorithmic symphonies from one line of code” and “Some deep analysis of one-line music programs”.

                          1. 2

                            What’s completely missing from the article is that inspiration often comes through process and as a product of the repeated, practiced application of process.

                            Agnes Martin’s work is a great example, it’s dense and obsessive and preternaturally powerful, she spent years and years doing it over and over, her work got denser and more powerful the more she did it and as far as I can tell the inspiration happened in tandem(/symbiosis) with or as a function of her relentless work on it. Sure there can be an inspiration to make a great work but without the skill and the process required to realise it the great work doesn’t happen.

                            Rothko’s process was fundamental, and because we don’t have any Rothko work to which he didn’t apply his process, we have no way of knowing whether he could have produced the extraordinarily deep and emotionally rich work without it. I’m no expert but I’m guessing no: the reason the work is so strong is not only the process nor only the inspiration but a combination of the two, honed and enriched by years of practice, until the process and the inspiration are completely intertwined and function as one. Which is why his work was so extraordinary and why he was such an extraordinary talent. And why it probably makes a great deal of difference to the viewer that he ground his own pigment, even if they’re not aware of it, because that was necessary to achieve the inspired effect he was aiming for.

                            And - here’s the point - all of this is why it’s so incredibly rare to get genuinely good, meaningful generative art, because the mix of sufficient technical skill with sufficient artistic inspiration is so damn rare, and (in my view at least) a majority of it is people who are good at math or geometry or OpenGL and fancy themselves as artists, but are far from that, because they either haven’t got the natural inspiration as a starting-point or they haven’t spent long enough developing it yet.

                            So yes, of course an overly intellectual focus will likely produce a less emotionally connecting or inspired piece of work - but pure emotion or inspiration without technical capability, at least in something as fundamentally technically complex as generative art, will likely produce works that just aren’t very good. Arguing it should be more of one aspect than the other seems to me to be missing the point.

                            (Post written on the bus in a moment of inspiration subsequently edited on the desktop for technical correctness and, uh, coherence)

                          1. 3

                            The first rule of asymptote club: you can talk about asymptote club, but you can’t join.

                            1. 1

                              Lol. You would need to be a familiar lobster (reminds me of Jordan B Peterson), anyways.

                            1. 1

                              I think the author is the submitter here: I can’t see any images on mobile Safari. Maybe a WebGL thing? Perhaps you could substitute some animated gifs for whatever should be in the black boxes I see in your post?

                              1. 1

                                I’m not seeing any image output/result on desktop Safari either. Works fine in Chrome.

                                1. 1

                                  Must be because of Safari’s poor WebGL support. https://gph.is/2lul96P

                                1. 3

                                  Really fun and informative writing from a friendly perspective about something pretty niche that I otherwise wouldn’t have known about, let alone understood. A+ post, would read more by this poster (as if @calvin hadn’t already posted loads of great stuff ;-)).

                                  1. 8

                                    All of the times I’ve upgraded my personal computer in the last two decades have been because my web browser, of all things, got slow to the point of annoyance. Didn’t need better graphics, a bigger disk, or more RAM for VMs at all. Just faster browsing.

                                    I don’t know how much I trust this “cloc” tool, though. I’ve used it a few times and found it to be sometimes wildly inaccurate. (I mean, even considering what a generally useless metric “lines of code” is anyway.) For example, I’m going to be quite surprised if either Chrome or Firefox have any Visual Basic, C Shell, Tcl/Tk, or Pascal in them.

                                    Or, if they do, it would be more interesting to know what those files are for.

                                    1. 3

                                      My guess is that the dependencies are bundled in the repo, and they contain all the other stuff that seems out of place.

                                      1. 1

                                        Also cloc just guesses based on file extension. I have a project I check with it which has a bunch of intermediate .d clang build files, and if I don’t clean those out first it thinks I’m writing loads of D. But otherwise in terms of actual counting and speed cloc seems pretty good for me anyway.
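
                                        If it helps: a reasonably current cloc can also be told to skip those files by extension, and the misclassified D lines disappear from the report:

                                        cloc --exclude-ext=d .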

                                    1. 42

                                      GitLab is really worth a look as an alternative. One big advantage of GitLab is that the core technology is open source. This means that anybody can run their own instance. If the company ends up moving in a direction that the community isn’t comfortable with, then it’s always possible to fork it.

                                      There’s also a proposal to support federation between GitLab instances. With this approach there wouldn’t even be a need for a single central hub. One of the main advantages of Git is that it’s a decentralized system, and it’s somewhat ironic that GitHub constitutes a single point of failure.

                                      1. 17

                                        Federated GitLabs sound interesting. The thing I’ve always wanted though is a standardised way to send pull requests/equivalent to any provider, so that I can self-host with Gitea or whatever but easily contribute back and receive contributions.

                                        1. 7

                                          git has built-in pull requests. They go to the project mailing list, and people code review via normal inline replies. Glorious.

                                          1. 27

                                            It’s really not glorious. It’s a severely inaccessible UX, with basically no affordances for tracking that review comments are resolved, for viewing different slices of commits from a patchset, or integrating with things like CI.

                                            1. 7

                                              I couldn’t tell if singpolyma was serious or not, but I agree, and I think GitHub and the like have made it clear what the majority of devs prefer. Even if it was good UX, if I self-host, setting up a mail server and getting people to participate that way isn’t exactly low-friction. Maybe it’s against the UNIX philosophy, but I’d like every part of the patchset/contribution lifecycle to be first-class concepts in git. If not in git core, then in a “blessed” extension, à la hub.

                                              1. 2

                                                You can sort of get a tracking UI via Patchwork. It’s… not great.

                                                1. 1

                                                  The only one of those GitHub is better at is integration with CI. They also have an inaccessible UX (doesn’t even work on my mobile devices, can’t imagine if I had accessibility needs…), doesn’t track when review comments are resolved, and there’s no UX facility for viewing different slices, you have to know git stuff to know the links.

                                                2. 3

                                                  I’ve wondered about a server-side process (either listen on http, poll a mailbox, etc) that could parse the format generated by git request-pull, and create a new ‘merge request’ that can then be reviewed by collaborators.
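
                                                  A rough sketch of the parsing half in JavaScript (the regex keys off the “are available in the Git repository at:” line that git request-pull emits, matched case-insensitively since the capitalisation has varied; field names are my own):

                                                  // Pull the repo URL and ref out of git-request-pull output.
                                                  const parseRequestPull = text => {
                                                    const m = text.match(
                                                      /are available in the git repository at:\s*\n\s*(\S+)(?:\s+(\S+))?/i
                                                    );
                                                    return m ? { url: m[1], ref: m[2] || 'HEAD' } : null;
                                                  };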

                                                  1. 2

                                                    I always find it funny that the same people advocating that email is a technology with many inherent flaws that cannot be fixed are the same people that advocate using the built-in git feature that uses email…

                                                3. 6

                                                  Just re: running your own instance, gogs is pretty good too. I haven’t used it with a big team so I don’t know how it stacks up there, but I set it up on a VPS to replace a paid Github account for private repos, where it seems fast, lightweight and does everything I need just fine.

                                                  1. 20

                                                    Gitea is a better maintained Gogs fork. I run both Gogs on an internal server and Gitea on the Internet.

                                                    1. 9

                                                      Yeah, stuff like gogs works well for private instances. I do find the idea of having public federated GitLab instances pretty exciting as an alternative to GitHub for open source projects though. In theory this could work similarly to the way Mastodon works currently. Individuals and organizations could set up GitLab servers that would federate between each other. This could allow searching for repos across the federation, tagging issues across projects on different instances, and potentially fail over if instances mirror content. With this approach you wouldn’t be relying on a single provider to host everybody’s projects in one place.

                                                    2. 1

                                                      Has GitLab’s LFS support improved? I’ve been a huge fan of theirs for a long time, and I don’t really have an intense workflow so I wouldn’t notice edge cases, but I’ve heard there are some corners that are lacking in terms of performance.

                                                      1. 4

                                                        GitLab has first-class support for git-annex, which I’ve used to great success.

                                                    1. 8

                                                      I still don’t get why HTTPS and not just TLS. Because of the server coalescing? Don’t like the sound of that much, in practice maybe lots of sites do get served from a few CDNs, but is that the centralising/monopoly-operation-normalising kind of thing we want to be enshrining in open source browsers? Oh Cloudflare are helping to push it? Hmmmm

                                                      1. 7

                                                        DNS over TLS is also a thing that’s been spec’d. The problem is that so many pieces of networking hardware have ossified over the years that there are real challenges to introducing new protocols on the internet. Using an existing protocol is a solution to that.

                                                        1. 4

                                                          Ah right, that does make some sense. Even though the server coalescing etc. is HTTP/2, which ossified hardware is hardly going to support. But even still, HTTPS seems like a complex & possibly heavyweight protocol to use as a carrier for comparatively simple payloads, no?

                                                          1. 6

                                                            Port 853 (DNS over TLS) is easy to block (in collateral freedom sense). Port 443 (HTTPS) can’t be blocked.

                                                            1. 4

                                                              If “block any and all DNS” is a viable approach for censorship, it’s pretty easy to change the port. There’s no reason to use a nearly unimplementably complex protocol stack to serve DNS.

                                                              1. 1

                                                                That’s the best argument I’ve heard for it, by far. I wonder if there’d be some way to smart-multiplex protocols over 443 though. Mongrel2 used to do it I seem to recall.

                                                                1. 1

                                                                  Years ago I used a reverse proxy to do exactly that. Unfortunately I cannot remember the tools I used.

                                                                  I probably used stunnel and iptables on the server, but I can’t really remember the tricks. I probably also had to do some tricks on the client.

                                                                  1. 2

                                                                    I have experience with both sniproxy and sslh. I never looked into whether or not they support DNS+TLS or could easily be taught DNS+TLS.
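
                                                                    For plain ssh/https demuxing, the sslh invocation is something like this (flags from memory, so treat it as a sketch and check the man page; older releases spell --tls as --ssl):

                                                                    sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:4443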

                                                            2. 3

                                                              But it’s not a new protocol – it’s TLS. If a middlebox can tell what is going over TLS in order to treat it differently, we refer to the situation as an “attack”.

                                                              1. 4

                                                                There are plenty of situations in which TLS interception is consented to – corporate MITM boxes are the popular example – and they absolutely cause problems with deployment of new protocols (TLS 1.3 is filled with examples).

                                                                (I should note that TLS MITM boxes in my experience are all hot garbage and people shouldn’t use them, but there’s nothing wrong with them from a TLS threat modeling perspective.)

                                                                1. 1

                                                                  There are plenty of situations in which TLS interception is consented to – corporate MITM boxes are the popular example

                                                                  Yes, but at that point, changes to DNS don’t help – you have a social problem, not a technical one. The group that is putting in the MITM boxes has the ability to force you to reveal your traffic regardless of what technology you put in. You’ve lost by default.

                                                                  1. 3

                                                                    You don’t have to be trying to defeat that person; the goal would simply be to make sure it doesn’t break when deployed.

                                                                2. 2

                                                                  The middlebox knows the node IPs.

                                                                  It might be enough, for censorship.

                                                                  1. 2

                                                                    You might be onto something, scarily enough. They are acting like Cloudflare is a reputable middleman.

                                                                    1. 2

                                                                      You mean, like 1.1.1.1, which is used to serve DNS over HTTPS?

                                                                      This isn’t a problem that throwing HTTP into the mix solves.

                                                                      1. 1

                                                                        This isn’t a problem that throwing HTTP into the mix solves.

                                                                        You really don’t need to convince me.

                                                                        My first thought when I read about this was: where is the hypertext? I can’t imagine explaining to my grandchildren 20 years from now why we decided to use something designed to distribute HTML for DNS responses.

                                                                  2. 3

                                                                    The problem is that so many pieces of networking hardware have ossified over the years that there are real challenges to introducing new protocols on the internet.

                                                                    While I understand your argument, I always think of what ancient Egyptians would think of our “real challenges”.

                                                                    Compared to people from 5000 years ago, we are all sissies.

                                                                    1. 4

                                                                      The Egyptians never tried to coordinate hundreds of vendors, tens of thousands of deployments, and a billion users to update their network protocols.

                                                                      I’m sure we could do better, but there are legitimate challenging technical problems, combined with messy incentive problems (no individual browser vendor wants to cause a perceived breakage, since the browser is generally blamed, and that would give an advantage to their competitors, or cause people to not upgrade, which for a modern browser would be catastrophic to security).

                                                                      1. 1

                                                                        The Egyptians never tried to coordinate hundreds of vendors, tens of thousands of deployments, and a billion users to update their network protocols.

                                                                        You should really visit Giza.

                                                                        None of your arguments is false. But they are peanuts compared to building a Pyramid with the tools available 5000 years ago.

                                                                        We should really compare to such human endeavours before celebrating our technical successes and before defining an issue as a “real challenge”.

                                                                1. 1

                                                                  Awesome. IRIX 5.2 was my first ever unix. I had access to a couple of Indys at a tiny little web shop out the back of an Internet cafe with a 64k leased line where I worked in 1995. How I loved those things. (Along with the NeXT pizza boxes at college…) I still use the SGI screen font on my pre-retina iMac along with the best approximation I can of the desktop background & winterm colours, and it still makes me happy.

                                                                  1. [Comment removed by author]

                                                                    1. 5

                                                                      Nix images that run processes inside sandboxes (chroot + cgroups).

                                                                      Sounds amazing! Do you have a write up of how you do that?

                                                                      1. 3

                                                                        At the risk of me-tooism, I’d really like to hear more about that too, if possible :-)

                                                                        1. [Comment removed by author]

                                                                          1. 1

                                                                            Thanks!

                                                                      2. 2

                                                                        I’d love to do that but this doesn’t solve the bin-packing of services that I need at scale…

                                                                      1. 1

                                                                        Wonderful. Can someone make it run OpenStep? (please?!)

                                                                        1. 10

                                                                          I’m quite disturbed by the “killing ex-girlfriend” jokes being just dropped in there like they’re par for the course. Is it only 13 years since that seemed OK to the point where none of the many commenters there thought it worth mentioning? Did none of the (currently) 14 upvoting Lobsters think it worth mentioning either? Was it really OK back then anyway? Just a bit “edgy”? Wouldn’t just “your ex” have done just as well for an edgy joke, unless part of the subtext was “yes I know it’s dark AF but we’re all boys together here, aren’t we, and it’s cool and funny for boys to joke about which tools are best for killing their ex-girlfriends”? In which case, eesh, no wonder women struggle in the kind of environment which tacitly accepts this sort of thing.

                                                                          I know in-groups have their own humour and acceptable levels and I’m glad they do, but … doesn’t putting that kind of thing on a public forum have a bunch of problematic implications? And then even more so on another one, 13 years later, when the obvious problems with this kind of stuff have been made manifestly clear? I hate to be a party-pooper because I can’t bear over-engineered factory-factory stuff like what it’s bemoaning either, but wow, in a post in a technical forum resurrecting another one, this careless aside sure makes me feel uncomfortable.

                                                                          1. 5

                                                                            People have short memories and a tendency to retcon righteousness into them. Oh, it’s uncool to think that now? Good thing I never thought that way! And suddenly all the formerly bad people disappear overnight.

                                                                            1. 4

                                                                              Yeah I was thinking about that too, because I totally had a WTF reaction as well. I was actually going to comment a few hours ago but I was caught up playing games. Oops.

                                                                              But when I think about it, in 2005 this wouldn’t have been out of place. I’ve noticed watching movies from the early 2000s has the same jarring effect, for similar reasons.

                                                                              1. 2

                                                                                It barely seems out of place today. These things happen in cycles. Any time there’s a big awareness push that actually makes progress, like this year’s #MeToo, the exclusionary remarks get a little more coded for a few years. As @tedu said, people stop saying stuff, and maybe slightly reduce how they think it… but the same people are still around.

                                                                                Then when enough time has passed that the recent movement starts to fade in everybody’s memories, the jokes get more frequent and more obvious again.

                                                                                I want to clarify that these “jokes” never go away. They change to be more subtle, so that people who aren’t the targets don’t spot them as much. And the venues change so that exclusionary behavior happens mostly in places where either no targets are going to see it to call it out, or no non-targets are going to witness it (ie 1:1 conversations).

                                                                                This particular example is about gender, but a similar cycle happens with all forms of bigotry.

                                                                              1. 8

                                                                                The main problem with the official Twitter client and website is not the ads (there are few of them, and not of such poor quality), but the facebookisms: an out-of-order timeline, tweets from users I’m not subscribed to (just because someone liked them), “who to follow”, and “trends” consisting of junk news put in front of my nose.

                                                                                1. 2

                                                                                  I would gladly put up with slightly more ads in exchange for fixing all these things, and giving me a more advanced filtering mechanism.

                                                                                  1. 3

                                                                                    I would pay a small monthly fee to have all those things with no ads.

                                                                                1. 8

                                                                                  Personally, I’m getting really close to filtering cryptocurrencies.

                                                                                  1. 5

                                                                                    My side of the debate predicts you won’t miss anything. ;)

                                                                                  1. 5

                                                                                    NIFs are rad. I know they’re often used for performance reasons but I really like them for exposing specific libs, or OS SDK functions (e.g. Core* stuff on macOS) to Erlang code. NIF/BEAM provides a mechanism for keeping data around on the C side and a “reference” type that lets you wrap & return a C pointer and pass it around in your Erlang code, so that can be super nice - you can do all the allocation/release nitty-gritty neatly in your C code, and leave yourself free to just think about the application structure & flow in Erlang, which for me is one of the areas where it really shines. Also NIF provides a lot of “interface”-style stuff that (AFAICT) ports just don’t - when asking about interfacing with C code, Erlang people often say “just write a port and communicate over stdin/out”, which is great, but then you basically have to define your own comms mechanisms - whereas with NIF it’s just a function call and all the necessary methods for converting types are included.
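
                                                                                    To make the “reference” mechanism concrete, a sketch (the enif_* calls are the real API; the resource name, dtor, and state size are illustrative):

                                                                                    #include <erl_nif.h>

                                                                                    static ErlNifResourceType* HANDLE_TYPE;

                                                                                    static void handle_dtor(ErlNifEnv* env, void* obj) {
                                                                                        /* free whatever C state the handle wraps */
                                                                                    }

                                                                                    static int load(ErlNifEnv* env, void** priv, ERL_NIF_TERM info) {
                                                                                        HANDLE_TYPE = enif_open_resource_type(env, NULL, "handle",
                                                                                            handle_dtor, ERL_NIF_RT_CREATE, NULL);
                                                                                        return HANDLE_TYPE == NULL ? 1 : 0;
                                                                                    }

                                                                                    static ERL_NIF_TERM new_handle(ErlNifEnv* env, int argc,
                                                                                                                   const ERL_NIF_TERM argv[]) {
                                                                                        void* h = enif_alloc_resource(HANDLE_TYPE, 64 /* state size */);
                                                                                        ERL_NIF_TERM ref = enif_make_resource(env, h);
                                                                                        enif_release_resource(h); /* the BEAM GC now owns it */
                                                                                        return ref;               /* opaque reference, usable from Erlang */
                                                                                    }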

                                                                                    1. 2

                                                                                      Oh, maybe you (or some other lobster!) can help me with a thing. To the best of my knowledge, Erlang (and hence Elixir) use bignums for integers…unfortunately, the docs don’t say anything about how to get that across in C land. Any ideas?

                                                                                      1. 4

                                                                                        It doesn’t appear as though the NIF interface gives you an easy way to get Erlang bignums into your NIF. The icky way to pass in a bigint might be to encode it as a binary or string and decode it in the NIF. If you’re interested, you could take a peek at big.c and big.h. There are functions big_to_bytes, which should return an array of digits, and big_to_double, which should return a double (if the bignum fits).

                                                                                        1. 3

                                                                                          Sounds right. I’ve not had to share numbers bigger than 64-bit ints so I’ve just used enif_get_int64 (or enif_get_long, enif_get_uint64 etc) to do the conversion (and relevant enif_make_* the other way), and haven’t had any issues with that.
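
                                                                                          For illustration, the 64-bit path looks roughly like this (module and function names are made up; anything that doesn’t fit in 64 bits comes back as badarg):

                                                                                          #include <erl_nif.h>

                                                                                          static ERL_NIF_TERM add64(ErlNifEnv* env, int argc,
                                                                                                                    const ERL_NIF_TERM argv[])
                                                                                          {
                                                                                              ErlNifSInt64 a, b;
                                                                                              if (!enif_get_int64(env, argv[0], &a) ||
                                                                                                  !enif_get_int64(env, argv[1], &b))
                                                                                                  return enif_make_badarg(env); /* non-integer or too big */
                                                                                              return enif_make_int64(env, a + b);
                                                                                          }

                                                                                          static ErlNifFunc nif_funcs[] = {{"add64", 2, add64}};
                                                                                          ERL_NIF_INIT(my_math, nif_funcs, NULL, NULL, NULL, NULL)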

                                                                                          1. 1

                                                                                            Does that work in C ports?

                                                                                            1. 3

                                                                                              I don’t think so. I think you want to use ei functions like ei_encode_* and ei_decode_* for that.

                                                                                              1. 3

                                                                                                Aha, sure enough:

                                                                                                int ei_decode_bignum(const char *buf, int *index, mpz_t obj)

                                                                                                Decodes an integer in the binary format to a GMP mpz_t integer. To use this function, the ei library must be configured and compiled to use the GMP library.

                                                                                                Thanks for the direction.

                                                                                    1. 5

                                                                                        This is a fascinating case. It’s very unfortunate that the cyclist had to die for it to come before us. However, had the car been driven by a human, nobody would be talking about it!

                                                                                      That said, the law does not currently hold autonomous vehicles to a higher standard than human drivers, even though it probably could do so given the much greater perceptiveness of LIDAR. But is there any precedent for doing something like this (having a higher bar for autonomous technology than humans)?

                                                                                      1. 13

                                                                                        Autonomous technology is not an entity in law, and if we are lucky, it never will be. Legal entities designed or licensed the technology, and those are the ones the law finds responsible. This is similar to the argument that some tech companies have made that “it’s not us, it’s the algorithm.” The law does not care. It will find a responsible legal entity.

                                                                                        This is a particularly tough thing for many of us in tech to understand.

                                                                                        1. 25

                                                                                            It’s hard for me to understand why people in tech find it so hard to understand. Someone wrote the algorithm. Even in ML systems where we have no real way of explaining its decision process, someone designed the system, someone implemented it, and someone made the decision to deploy it in a given circumstance.

                                                                                          1. 11

                                                                                              Not only that, but there’s one other huge aspect of this that nobody is probably thinking about. This incident is probably going to start the ball rolling on certification and liability for software.

                                                                                              “Move fast and break things” is probably not going to fly in the face of too many deaths from autonomous cars. Even if they’re safer than humans, there are going to be repercussions.

                                                                                            1. 8

                                                                                              Even if they’re safer than humans, there is going to be repercussions.

                                                                                                Even if they are safer than humans, a human must be held accountable for the deaths they will cause.

                                                                                              1. 2

                                                                                                Indeed, and I believe those humans will be the programmers.

                                                                                                1. 4

                                                                                                  Well… it depends.

                                                                                                    When a bridge breaks down and kills people due to bad construction practices, do you put the bricklayers in jail?

                                                                                                    And what about free software that you get from me “without warranty”?

                                                                                                  1. 4

                                                                                                      No - but they do take the company that built the bridge to court.

                                                                                                    1. 5

                                                                                                      Indeed. The same would work for software.

                                                                                                        At the end of the day, whoever is accountable for the company’s products is accountable for the deaths that such products cause.

                                                                                                    2. 2

                                                                                                        Somewhat relevant article that raised an interesting point RE: VW cheating emissions tests. I think we should ask ourselves if there is a meaningful difference between these two cases that would require us to shift responsibility.

                                                                                                      1. 2

                                                                                                        Very interesting read.

                                                                                                          I agree that the AI experts share a moral responsibility for this death, just like the developers at Volkswagen of America shared a moral responsibility for the fraud.

                                                                                                          But, at the end of the day, the software developers and statisticians were working for a company that is accountable for the whole artifact it sells. So the legal accountability must be assigned to the company’s board of directors/CEO/stockholders… whoever is accountable for the activities of the company.

                                                                                                      2. 2

                                                                                                        What I’m saying is that situations like this may lead to those “without warranty” provisions being deemed invalid.

                                                                                                      3. 1

                                                                                                        I don’t think it’ll ever be the programmers. It would be negligence either on the part of QA or management. Programmers just satisfy specs and pass QA standards.

                                                                                                  2. 2

                                                                                                    It’s hard to take responsibility for something evolving in such a dynamic environment, potentially used for billions of hours every day for the next X years. I mean, knowing that, you would expect to have 99.99% of cases tested, but here it’s impossible.

                                                                                                    1. 1

                                                                                                      It’s expensive, not impossible.

                                                                                                      It’s a business cost and an entrepreneurial risk.

                                                                                                      If you can’t take the risks and pay the costs, that business is not for you.

                                                                                                2. 4

                                                                                                  It’s only a higher bar if you look at it from the perspective of “some entity replacing a human.” If you look at it from the perspective of a tool created by a company, the focus should be on whether there was negligence in the implementation of the system.

                                                                                                  It might be acceptable and understandable for the average human to not be able to react that fast. It would not be acceptable and understandable for the engineers on a self-driving car project to write a system that can’t detect an unobstructed object straight ahead, for the management to sign off on testing, etc.

                                                                                                1. 5

                                                                                                  I like several bits of this and dislike several other bits, but one thing that stands out to me is the seemingly continual refusal to place responsibility on the people consuming and using tech:

                                                                                                  When software encourages us to take photos that are square instead of rectangular, or to put an always-on microphone in our living rooms, or to be reachable by our bosses at any moment, it changes our behaviors, and it changes our lives.

                                                                                                Users volunteer for this. Most of the people taken advantage of by tech do so by sleepwalking, like lemmings, into the grim meathook future some of us create to monetize them. Nobody holds a gun to their head and says “Put Amazon Echo in your house or you get shot by the Bezostruppen.” Nobody says “Hey you should totally enter a multiyear contract for this smartphone that will bleed you dry and spy on you instead of using a cheapo burnerphone or else we will put you in jail.” There is no national law that says “Citizen, you must participate in the two-minutes hate on Twitter or else your voting privileges will be revoked.”

                                                                                                  There is no end of the trouble we get into if we ignore the actions, the real actions, that got us here.

                                                                                                  1. 11

                                                                                                    Interestingly, lemmings don’t actually walk to their death as their environment typically only contains lakes they can swim across. Put them in front of an ocean though…

                                                                                                  Which is actually the perfect metaphor. Put people into environments they are unfamiliar with, maladapted to, and unable to even ask the right questions about, and it isn’t all that surprising that they won’t ultimately act in their interests.

                                                                                                    But I guess we can blame the people who software hurts for being hurt by that software, which they couldn’t hope to understand without deep study.

                                                                                                    1. 4

                                                                                                      When there is minimal consumer choice, it’s hard to blame consumers for making the wrong choice. Robust consumer choice would be something close to feature-by-feature optionality: smartphones without spy powers, or only with photo spy powers, for example. In point of fact, even a smartphone with a hardware keyboard is a non-option nowadays.

                                                                                                    This is due in no small part to the limits and strengths of mass manufacturing: if everyone buys the smartphone that’s good for 51% of the people, we all enjoy a better phone for less money – but that puts the power of feature selection out of consumers’ hands. They get the phone that the designers designed: take it or leave it.

                                                                                                      The responsibility – moral and otherwise – for those features rests squarely with those who made the phone, not those who bought it.

                                                                                                      1. 4

                                                                                                        Nobody says “Hey you should totally enter a multiyear contract for this smartphone that will bleed you dry and spy on you instead of using a cheapo burnerphone or else we will put you in jail.”

                                                                                                      No, sure. But (to take just this example) the contract and the undeniable benefits of the smartphone, obviously without any reference to any potential downsides, are what’s advertised, sold, heavily pushed, to the extent that many won’t even realise there’s an alternative - and when availability of the features and capabilities provided is normalised to the extent that getting by without them involves significant extra effort, then in the majority of the world outside of “people who understand, and can either afford or have to spend significant parts of their time understanding, technology”, that’s effectively all that exists.

                                                                                                        1. 1

                                                                                                        Exactly. I’ll add that this is true even when the constraints of two solutions are similar enough that the safer/higher-quality/freer one requires little or no sacrifice. Getting people to switch from texts to IM… important since texts were a downgrade from IM (esp. with delays)… was hard despite equivalent usability, the better thing being free, the better thing having more (optional) features, some being private, and so on.

                                                                                                        An uphill battle even when the supplier went above and beyond expectations making a better product for them. Usually laziness or apathy was the reason when other factors were eliminated.