1. 19

    This interview with the (former) Helm maintainer is really good. Turns out he’s an alpine guide who never programmed for a living.

    https://sachachua.com/blog/2018/09/interview-with-thierry-volpiatto/

    1. 19

      Thanks for sharing this! I’m the author of Gleam (and this post). Very happy to answer any questions :)

      1. 6

        Thank you for your work on Gleam! It looks really promising, and it’s been great seeing it progress from the sidelines.

        Is it easy to integrate it (e.g. writing one module in Gleam) in an existing Erlang + rebar3 project? (Is this documented somewhere?)

        1. 7

          Yes for sure. Currently we don’t have a dedicated build tool, so all Gleam projects are rebar3 projects with a project plugin (https://github.com/gleam-lang/rebar_gleam), and compilation of Erlang works as per usual.

          There’s also a mix plugin for Elixir projects (https://github.com/gleam-lang/mix_gleam).

          The tooling is a bit rough-and-ready at the moment, I’m hoping to improve it in the near future.

      1. 9

        What is your favorite pitfall in Date?

        Has to be toISOString(). It claims to return ISO 8601, a format that can carry the timezone offset, but instead it always gives you the UTC string, even though it’s perfectly aware of the timezone information:

        // It's 15.44 in Europe/Warsaw
        > const dt = new Date()
        > dt.getTimezoneOffset()
        -120
        > dt.toISOString()
        '2020-08-02T13:44:03.936Z'
        
        1. 5

          That is a valid ISO 8601 timestamp. The ‘Z’ (“zulu”) means zero UTC offset, so it’s equivalent to 2020-08-02T15:44:03.936+02:00.

          1. 3

            Oh, it is valid, yes. It’s just less useful than one containing the TZ information that is stored in that Date object – correct, but not as useful as it could have been with little extra effort.

            1. 3

              Ah, I misunderstood you, then. When you wrote “claims to return ISO 8601” I thought you meant that it wasn’t actually an ISO 8601 string.

              So what you mean is that the “encoding” of the ISO 8601 string should reflect the local timezone of the system where you call .toISOString()? I.e. 2020-08-02T15:44:03.936+02:00 if you called .toISOString() on a CEST system and 2020-08-02T09:44:03.936-04:00 if you called it on an EDT system?

              1. 2

                I’d expect it to not lose the timezone information, given that it already uses a format that supports that information. It’s not incorrect, it’s just less useful than it could be. Perhaps that’s just the implementation, not the spec – but I’ve yet to see it implemented differently. It’s not a huge deal, it’s just frustrating that it could’ve been better at a little cost and yet no one bothered, apparently.

                It’s not about the system it’s called on – that determines the timezone that’s already in the object, as my code snippet showed. I’d expect the data that’s already there to be included in the formatting, instead of being converted to UTC, lost and disregarded. If implemented better, toISOString could’ve been a nice, portable, lossless serialization format for Dates – but as it is, a roundtrip gives you a different date than you started with, because it will always come back as UTC.
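
                Something like this would do it, using only standard Date methods – a rough sketch (toOffsetISOString is a made-up name, not part of the API):

                // Hypothetical helper: serialize a Date with the system's local
                // UTC offset instead of converting to 'Z' (a sketch, not part
                // of the standard Date API).
                function toOffsetISOString(dt) {
                  const offsetMin = -dt.getTimezoneOffset();     // e.g. +120 for UTC+02:00
                  const sign = offsetMin >= 0 ? '+' : '-';
                  const pad = (n) => String(Math.abs(n)).padStart(2, '0');
                  // Shift the instant so toISOString() prints local wall-clock time,
                  // then swap the trailing 'Z' for the real offset.
                  const shifted = new Date(dt.getTime() + offsetMin * 60000);
                  return shifted.toISOString().slice(0, -1) +
                    sign + pad(Math.trunc(offsetMin / 60)) + ':' + pad(offsetMin % 60);
                }
                // toOffsetISOString(new Date()) -> '2020-08-02T15:44:03.936+02:00' in Warsaw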

                1. 2

                  I would actually assume that getTimezoneOffset is a class method that just looks at your system’s configured time zone and does not read anything from the Date object. I’m pretty sure the object does not store information about the timezone of the system in which it was generated, because it’s never needed. You can always convert to the timezone you want at read time.

                  This is also what PostgreSQL does. If you create a column for “timestamps with timezone” it will discard the timezone information at write time and just use UTC (because why not?). The only thing that is different when you choose a timestamp column with timezone is that at read time it will convert values from columns to the configured timezone. All it stores is the number of seconds since the epoch.

                  If you look at Firefox’s JS source, it looks like they also just store the seconds since the Unix epoch in a Date object, no timezone information: https://github.com/mozilla/gecko-dev/blob/d9f92154813fbd4a528453c33886dc3a74f27abb/js/src/vm/DateObject.h
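
                  A quick way to see that a Date is only an instant: two differently written timestamps for the same moment compare as identical:

                  // Same instant written as UTC and as UTC+02:00 – the stored value is the same:
                  > new Date('2020-08-02T15:44:03.936+02:00').getTime() === new Date('2020-08-02T13:44:03.936Z').getTime()
                  true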

              2. 3

                I don’t believe Date contains a time offset. As far as I’m aware, like many languages, the problem is not that the APIs ignore the time offset – they would have to silently reach into the client locale to get it, which would be misleading and make it easy to create bugs. The problem is that they named it “Date” when it’s really just a point in absolute time. Combine a Date with the client locale’s time offset and you’ve got yourself a date, but a Date is not a date.

            2. 5

              This is a namespacing error that’s common when methods are on objects like this. getTimezoneOffset is a property here of the client locale, not of the date time object.

            1. 5

              With all the enthusiasm for zettelkasten/second-brain systems (roam, org-roam, now this), I’m surprised that I haven’t heard of an external format/tool that various UIs can interface with. VSCode, at least that’s my impression, is the kind of editor that gets displaced from its throne every few years by the next new thing, as happened to Sublime and Atom before it, so I certainly wouldn’t be too confident in making my “second brain” depend on it. Maybe it works as a brainstorming tool for projects, but then it would have to be distributable too – and from skimming the article that doesn’t seem to be the case.

              Edit: Fixed the first sentence, sorry for my ignorance. Also I missed that this is markdown based, so I guess the rest of the comment isn’t quite right either, but I guess/hope my general point is still legitimate.

              1. 6

                I’m surprised that nobody has been working on an external format/tool that various UI’s can interface

                Check out neuron, which is editor-independent, has native editor extensions, but can also interface (in future) with editors through LSP.


                Easiest way to get started (if you don’t want to install yet): https://github.com/srid/neuron-template

                1. 3

                  That sounds cool, but I don’t really get why LSP would help. I (personally) would much prefer a native client, in my case for Emacs, to something that forces itself into a protocol meant for program analysis.

                  1. 2

                    Well, neuron does have native extensions for emacs and vim (see neuron-mode and neuron.vim) - but LSP support just makes multiple-editor support easier by shifting the common responsibility to a server in neuron.

                    EDIT: I’ve modified the parent comment to clarify this.

                  2. 1

                    Is there any easier way to install it (i.e. without nix)? I’m on a laptop, and installing new toolchains is prohibitive given the low storage I have.

                    1. 1

                      Nix is the only way to install neuron (takes ~2GB space including nix and deps), until someone contributes support for building static binaries.

                      But I’d encourage you to give Nix a try anyway, as it is beneficial even outside of neuron (you can use Nix to install other software, as well as manage your development environments).

                      1. 2

                        I got a working binary with nix-bundle; that might be a simpler option. It’s a bit slow though, especially on first run when it extracts the archive. nix-bundle also seems to break relative paths on the command line.

                        1. 1

                          Interesting. Last time I tried nix-bundle, it had all sorts of problems. I’ll play with it again (opened an issue). Thanks!

                  3. 3

                    Isn’t the markdown that this thing runs on exactly that external format, and one that has been getting adoption across a wide range of platforms and use cases at that?

                    1. 3

                      There is tiddlywiki and the tiddler format.

                      1. 2

                        I wish the extension used the org format instead of markdown (so if something happens to vscode, I can use it with emacs), but otherwise I totally agree with your comment!

                        1. 2

                          You can use markdown files with org-roam in emacs by using md-roam. I prefer writing in Markdown most of the time, so most of my org-roam files are markdown files.

                      1. 6

                        Is there a video link for this talk? I want to watch it!

                        1. 10

                          This one has me conflicted. On the one hand, I understand this reasoning and agree with it, in the specific case discussed — a local University computing system. I think the author is probably making the right choice for their users.

                          On the other hand, I will still almost always recommend people just use UTC because it’s a safe long term default. I’ve worked with multiple companies now where all the servers are still on Pacific Time despite opening many global offices over the years. Many of their users and developers are now doing time zone math anyway, because they don’t all live in California anymore. But now they have the added adventure of daylight savings time! 😉

                          Granted, if your whole mission is focused on serving a given locality, like a University, you’re probably safe with local time. But even then… as soon as you look at computers intended for research collaboration, that might go out the window too. I’ve seen plenty of academic HPC systems that eventually have more external users than internal, as they get linked into wider research programs.

                          1. 5

                            On the other hand, I will still almost always recommend people just use UTC because it’s a safe long term default.

                            Anything should be a safe long-term default, as long as you’re consistent or store the corresponding time zone. The problems usually happen when you have something like 2020-05-15 17:56:34 and no idea which TZ that refers to, or need to get the user’s configured TZ (which may change). But if it’s stored as 2020-05-15 17:56:34 +0800 then it’s always safe and easily convertible to whatever timezone.

                            IMO “always use UTC” is much better phrased as “always know which TZ a time is in”. Using UTC internally everywhere is often a convenient way of doing that, but not always.
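
                            For example (JavaScript here, but the same holds anywhere): the offset pins down the instant, so converting is just formatting:

                            // The +08:00 offset makes the instant unambiguous; rendering
                            // it in UTC (or any other zone) loses nothing:
                            > new Date('2020-05-15T17:56:34+08:00').toISOString()
                            '2020-05-15T09:56:34.000Z'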

                            1. 1

                              But if it’s stored as 2020-05-15 17:56:34 +0800 then it’s always safe and easily convertible to whatever timezone.

                              I think that’s a lucky example, since there’s little daylight savings out that way, but much of the world moves their clocks around, so you might share a DST-offset part of the year with someone when it matters, and you’re using these timestamps specifically for correlation.

                              The reason UTC is better is because we know where 0° longitude is and we know they don’t practice daylight savings. Was that the result of a car crash into a telephone pole at almost six o’clock? Time zone doesn’t tell you that in parts of the world, but UTC-reported dates will.

                              The reason UTC is worse, of course, is because sometimes people don’t provide UTC dates, and sometimes people confuse the time in London or Greenwich with UTC so their reports are bad, and people rarely make this mistake with local time (as the author points out).

                              1. 4

                                Won’t a format like that take care of DST? For example in Western Europe it would be +0100 in the winter, and +0200 in the summer.

                                1. 2

                                  Not always. Consider an event in the future planned before a decision to change daylight savings time. It’s still going to open at 9am whatever-is-local on some given future date.

                                  1. 2

                                    Assuming the DST change is communicated clearly and in good enough time to the TZ database maintainers… you’d be surprised how often this is not the case.

                                    Off the top of my head, I can mention Jordan, Turkey, Armenia, Egypt and Russia announcing changes to their DST change schedules with very short notice.

                                    My fear is that this will happen in the EU too, considering that most politicians don’t really seem to understand the implications of changing the DST schedule…

                                    1. 1

                                      Even with past dates, calculations will only be correct if a library is using a correct timezone database that includes all change history, and is using it correctly. That may not always be the case.

                                      Using UTC and converting to local time only when needed saves one from a lot of “if’s”.

                                      1. 2

                                        Only if you’re using a format that doesn’t save the UTC offset, right? I don’t see how the interpretation of an ISO-8601 datetime like 2020-05-18T08:42:57+02:00 can change in the future.

                                        1. 1

                                          It can change if you’re planning to schedule something at 09:00 (AM) on 15 Jun 2022 in Berlin, and the current schedule for DST tells you that Germany will be observing DST at that time.

                                          Right now we don’t know what EU countries are going to do with DST - abolish it, and stay on normal time? Abolish it and stay on summer time?

                                          If Germany decides to stay on normal time your appointment at 2022-06-15T09:00:00+02:00 will be 1 hour later than expected (which is 9AM CET in this case).

                                          1. 2

                                            Yeah, that makes sense, but the comment above mine was talking about calculations on past dates.

                                            1. 1

                                              Sorry, I misinterpreted your comment! Yeah, past dates are generally “safe”.

                          1. 4
                            1. 23

                              It only works in Google Chrome and Microsoft Chrome, unfortunately:

                              For the best experience with Codespaces, we recommend using a Chromium-based browser, like Google Chrome or Microsoft Edge. Firefox is currently unsupported, and there are known issues using Safari.

                              1. 12

                                Codespaces allows you to develop in the cloud instead of locally. Developers can contribute from anywhere, on any machine, including tablets or Chromebooks

                                …and on iOS all browsers including Chrome use the Safari rendering engine so this doesn’t really open up development on the most popular tablet platform at all.

                                1. 1

                                  I imagine they will add that.

                                2. 4

                                  Before that note is this paragraph, though:

                                  During the beta, functionality is limited.

                                  So hopefully once it’s actually released it will be usable in every browser.

                                  1. 1

                                    It only works in Google Chrome and Microsoft Chrome, unfortunately:

                                    To be honest, it’s quite scary to run all of that inside a browser. Can you imagine the performance on that?

                                    1. 1

                                      It probably performs fine on most development machines, to be fair.

                                  1. 1

                                    I’d honestly entirely forgotten about Ogg. Is it still a thing in a meaningful sense, what with the MP3 patents having expired?

                                    1. 6

                                        Ogg is a container format, able to hold different codecs inside; Vorbis is the codec designed to replace MP3. Nowadays Ogg is being used with the Opus codec, with quite some success.

                                      1. 3

                                        Ugh, I thought that Opus had its own container format/was using MKV already …

                                        Tried to parse Ogg once, wouldn’t recommend.

                                      2. 6

                                        Spotify uses ogg vorbis for streaming to their apps: https://en.wikipedia.org/wiki/Spotify#Technical_information

                                        1. 5

                                            Ogg Vorbis file sizes pretty regularly beat out VBR MP3s at the max setting I could distinguish in a blind listening test. If a lossless source is available I always prefer encoding Vorbis myself for use on my own (non-internet) music player! The criticisms of the Ogg container make sense though. I’ve never really seen Vorbis in any other container tbh.

                                          1. 3

                                            Old-style WebM files used Vorbis in a Matroska container.

                                      1. 10

                                          The mastodon discussion linked at the bottom, about the evergreen Rust compiler and how that challenges “traditional systems”, was really interesting.

                                        1. 2

                                          Kornel definitely thought about this and related issues for a while, e.g. see his #Rust2020 post: https://users.rust-lang.org/t/rust-2020-growth/34956

                                        1. 1

                                          398 days is a year and 33 days, a little over a month. Is there a particular reason for choosing 398 as the limit, versus rounding down to 365 or 366 (a year), 395 or 396 (a year and a month) or rounding up to 400?

                                          1. 2

                                            Apparently it’s a “renewal grace period” - not sure why that’s 32/33 days though.

                                            1. 3

                                                Maybe 2 days to account for weekends/holidays, then 30 days to allow a cert renewal PR[1] to wend its way through the corporate bureaucracy.

                                              [1] Purchase Request

                                            2. 2

                                              Pure speculation:

                                                One year is ⌈365.24⌉ = 366 days. One month is ⌈31 days + 1 leap second⌉ = 32 days. 366 + 32 = 398.

                                              1. 3

                                                At least for ISO dates, the leap second is included in the preceding day (i.e. it’s extended by one second)

                                                2016-12-31T23:59:59
                                                2016-12-31T23:59:60
                                                2017-01-01T00:00:00
                                                
                                            1. 36

                                                I think this will not succeed, for the same reason that RSS feeds have not (or REST). The problem with “just providing the data” is that businesses don’t want to just be data services.

                                              They want to advertise to you, watch what you’re doing, funnel you through their sales paths, etc. This is why banks have never ever (until recently, UK is slowly developing this) provided open APIs for viewing your bank statement.

                                              This is why businesses LOVE apps and hate web sites, always bothering you to install their app. It’s like being in their office. When I click a link from the reddit app, it opens a temporary view of the link. When I’m done reading, it takes me back to the app. I remain engaged in their experience. On the web your business page is one click away from being forgotten. The desire to couple display mechanism with model is strong.

                                                The UK government is an exception: they don’t gain monetary value from your visits. As a UK citizen and resident, I can say that their web site is a fantastically lucid and refreshing experience. That’s because their goal is, above all, to inform you. They don’t need to “funnel” me to pay my taxes, because I have to do that by law anyway. It’s like reading Wikipedia.

                                              I would love web services to all provide a semantic interface with automatically understandable schemas. (And also terminal applications, for that matter). But I can’t see it happening until a radical new business model is developed.

                                              1. 5

                                                This is why banks have never ever (until recently, UK is slowly developing this) provided open APIs for viewing your bank statement.

                                                This has happened in all EU/EEA countries after the Payment Services Directive was updated in 2016 (PSD2). It went into effect in September 2019, as far as I remember. It’s been great to see how this open banking has made it possible for new companies to create apps that can e.g. gather your account details across different banks instead of having to rely on the banks’ own (often terrible) apps.

                                                1. 6

                                                    The problem with PSD2, to my knowledge, is that it forces banks to create an API and open access for Account Information Service Providers and Payment Initiation Service Providers, but not an API for you, the customer. So this seems to be a regulation that opens up your bank account to other companies (if you want), but not to the one person who should get API access. Registration as such a provider costs quite some money (I think 5 digits of Euros), so it’s not really an option to register yourself as a provider.

                                                  In Germany, we already seem to have lots of Apps for management of multiple bank accounts, because a protocol called HBCI seems to be common for access to your own account. But now people who use this are afraid that banks could stop this service when they implement PSD2 APIs. And then multi-account banking would only become possible through third-party services - who probably live from collecting and selling your data.

                                                  Sorry if something is wrong. I do not use HBCI, but that’s what I heard from other people.

                                                  1. 1

                                                    I work on Open Banking APIs for a UK credit card provider.

                                                      A large reason the data isn’t made directly available to the customer, as I see it, is that if customers were to accidentally leak or lose their own data, the provider (HSBC, Barclays etc) would be liable, not them. That means lots of hefty fines.

                                                    You’d also likely be touching some PCI data, so you’d need to be cleared / set up to handle that safely (or having some way to filter it before you received it).

                                                      Also, it requires a fair bit of extra setup, and the use of certificate-based authentication (MTLS + signing request objects) means that, as it currently sits, you’d need one of those, which aren’t cheap as they’re all EV certs.

                                                      It’s a shame, because the customer should get their data. But you may be able to work with intermediaries that provide an interface for that data, who can do the hard work for you, i.e. https://www.openwrks.com/

                                                    (originally posted at https://www.jvt.me/mf2/2019/12/7o91a/)

                                                2. 4

                                                  Yes, this does seem like a naive view of why the web is what it is. It’s not always about content and data. For a government, this makes sense. They don’t need to track you or view your other browsing habits in order to offer you something else they’re selling. Other entities do not have the incentive to make their data easier to access or more widely available.

                                                  1. 6

                                                      That’s a very business-centric view of the web; there’s a lot more to the internet than businesses peddling things to you. As an example, take a look at the ecosystem around ActivityPub. There are millions of users using services like Mastodon, Pleroma, Pixelfed, PeerTube, and so on. All of them rely on being able to share data with one another to create a federation. All these projects directly benefit from exposing the data because the overall community grows, and it’s a cooperative effort as opposed to a competitive one.

                                                    1. 3

                                                      It’s a realistic view of the web. Sure, people who are generating things like blogs or tweets may want to share their content without monetizing you, but it’s not going to fundamentally change a business like a bank. What incentive is there for a bank to make their APIs open to you? Or an advertiser? Or a magazine? Or literally any business?

                                                      There’s nothing stopping these other avenues (like the peer-based services you are referring to) from trying to be as open as possible, but it doesn’t mean the mainstream businesses are ever going to follow suit.

                                                      I think it’s also noteworthy that there is very little interesting content on any of those distributed systems, which is why so many people end up going back to Twitter, Instagram, etc.

                                                      1. 1

                                                        My point is that I don’t see business as the primary value of the internet. I think there’s far more value in the internet providing a communication platform for regular people to connect, and that doesn’t need to be commercialized in any way. Businesses are just one niche, and it gets disproportionate focus in my opinion.

                                                  2. 3

                                                      Aye, currently there is little motivation for companies to share data outside silos.

                                                    That mind-set isn’t really sustainable in the long term though as it limits opportunity. Data likes to date and there are huge opportunities once that becomes possible.

                                                    The business models to make that worth pursuing are being worked on at high levels.

                                                    1. 1

                                                        Ruben Verborgh, one of the folks behind the Solid initiative, has a pretty good essay that details a world in which storage providers compete to provide storage, and application providers compete on offering different views of data that you already own.

                                                        Without getting into Solid any more in this post, I will say that there are a ton of websites run by governments, non-profits, personal blogs, or other situations where semantically available data would be a huge boon. I was looking through a page of NSF-funded research groups at McMurdo Station the other day, and finding what each professor researched took several mouse clicks per professor. If this data were available semantically, a simple query would be enough to list the areas of research of every group and every professor.

                                                      One can think of a world where brick-and-mortar businesses serve their data semantically on their website, and aggregators (such as Google Maps, Yelp, and TripAdvisor) can aggregate them, and enable others to use the data for these businesses without creating their own scrapers or asking a business to create their own API. Think about a world where government agencies and bureaucracies publish data and documents in an easy to query manner. Yes, the world of web applications is hard to bring the semantic web to due to existing incentives for keeping data siloed, but there are many applications today that could be tagged semantically but aren’t.

                                                      1. 1

                                                          The web has always been used mostly for fluff since day 1, and “web assembly” is going to make it more bloated, like the old browser-side Java.

                                                        The world needs user-centric alternatives once again.

                                                      1. 9

                                                        There are some clarifications and technical details in the comments on this issue: https://github.com/ninenines/cowboy/issues/1410

                                                        1. 2

                                                          Rust and Erlang are not the only programming languages with a good concurrency model. C is not dead or dying. Type systems do not objectively make it easier to program. Dynamically typed languages are not on the way out, obsolete or in any way bad. People do not need parametric polymorphism in their fucking text editor extension language. Emacs does not need a fucking web browser to ‘keep up with the kids’ or whatever ridiculous reasoning is being given. And Visual Studio Code isn’t even remotely close to Emacs in any respect at all, it’s a text editor you can write extensions for, just like virtually every other popular text editor in history. Taking lessons from Visual Studio Code is like taking lessons from Sublime Text or any other momentarily popular editor that will be forgotten about within a couple of years.

                                                          What Emacs should be taking note from is not Visual Studio Code, it’s vim. Vim is everything Emacs is not: fast, ergonomic and with a language that’s PERFECT for writing very small snippets of but an awful pain to write extensions with. Emacs Lisp is what you get if you write commands to your text editor in a proper programming language (100s of characters just to swap two shortcuts) while Vimscript is what you get if you write extensions to your text editor in a command language (sometimes harder to understand than TECO).

                                                          Vim is also evidence that trying to fix extensions with bindings to extra languages is a terrible idea. Vimscript needs to be improved, not replaced. Emacs Lisp is the same: improvements are necessary, not its replacement with something else. It’s not just bad to replace Emacs Lisp entirely, adding a new language beside it and making Emacs Lisp ‘legacy’ also means that in order to access new parts of the Emacs infrastructure, extensions/modes need to be rewritten anyway.

                                                          The contention that C needs to go to make contributing to Emacs more accessible is… frankly insane. C is one of the most widely known languages in the world. It’s very easy and accessible to learn. Rewriting core parts of Emacs in an obscure, esoteric language like Rust is not going to make contributing to Emacs easier, it’s going to make it much harder.

                                                          It should also be made quite clear that Rust is not a ‘guaranteed safe language’ as is claimed here. Not even close. It’s not at all safe. It literally has a keyword called unsafe that escapes all the safety that the language provides. Under certain conditions that are very easy to check Rust is totally safe, which is what it provides over C, and even outside those conditions, any spots that introduce unsafety are enumerated with grep -Re unsafe, but it’s absolutely untrue that Rust is safe, and any gradual move from C to Rust in Emacs would involve HUGE swathes of unsafe Rust, to the point where it’s probably safer to do it in C++ than in Rust, simply because the infrastructure for checking C++ programs for safety is stronger than the infrastructure for checking Rust-programs-that-are-full-of-unsafe for safety.

                                                          A very statically typed language like Rust makes no sense for a text editor like Emacs. The strength of Emacs is that it’s insanely dynamic. It has a relatively small bit written in C, and most of it is actually written in Elisp.

                                                          1. 4

                                                            Yeah, I very much disagree with his desire to get away from a real Lisp. The value proposition of Emacs is very much about having a dynamic, dynamically-typed, dynamically-scoped (yes, I am aware of the optional lexical scoping) Lisp extension language.

                                                            You are right that vim makes simple things simple, but man-oh-man is VimScript a hideous misfeature of an extension language. I do think that Emacs could probably stand to have some more sugared ways to do some basic config out of the box — use-package is an example of improved keybinding.

                                                            I wouldn’t mind a Rust-based runtime, but what I would really love to see is a Lisp-based runtime: a core in Common Lisp, with a full Elisp engine to run all the code from the last forty-three years, and with full programmability in Common Lisp.

                                                            But then, I’d also like to see an updated Common Lisp which fixes things like case, pathnames, argument ordering and a few other small bits. And I want a pony.

                                                            1. 3

                                                              Lisp is probably one of the few things that makes emacs actually interesting, and I’m not even an emacs user or a lisp user…

                                                            2. 3

                                                              It literally has a keyword called unsafe that escapes all the safety that the language provides

                                                            This is not true, and this (very common) misunderstanding highlights the fact that one should think about – for lack of a better term – marketing, even when creating the syntax for a programming language. unsafe sure makes it sound like all the safety features are turned off, when all it does is allow you to dereference raw pointers and read/write to mutable static variables – all under the auspices of the borrow checker, which is still active.

                                                              1. 2

                                                                It is true. As soon as unsafe appears in a program all safety guarantees disappear. That’s literally why it’s called ‘unsafe’. The rust meme that it doesn’t take away all the guarantees ignores that.

                                                                It turns off enough safety guarantees that you no longer can guarantee safety…

                                                            1. 14

                                                              Great talk, just one note:

                                                              Because, barring magic, you cannot send a function.

                                                              This is trivial in Erlang, even between nodes in a cluster, and commonly used, not some obscure language feature. So there we go, I officially declare Erlang to be Dark Magic.

                                                              1. 4

                                                                I suppose it’s not that “you cannot send a function”, but more like “you cannot send a closure, if the language allows references which may not resolve from other machines”. Common examples are mutable data (if we want both parties to see all edits), pointers, file handles, process handles, etc.

                                                                I’m not too familiar with Erlang, but I imagine many of these can be made resolvable if we’re able to forward requests to the originating machine.

                                                                1. 7

                                                                  It’s possible to implement this in Haskell, with precise control over how the closure gets de/serialized and what kinds of things can enter it. See the transient package for an example. This task is a great example of the things you can easily implement in a pure language, but very dangerous in impure ones.

                                                                2. 1

                                                                  I don’t know Erlang, but… I can speculate that it doesn’t actually send the functions. It sends their bytecode representation. Or a pointer to the relevant address, if the two computers are guaranteed to share code. I mean, the function has to be transformed into a piece of data somehow.

                                                                  1. 13

                                                                    it doesn’t actually send the functions. It sends their bytecode representation

                                                                    How is that different from not actually sending an integer, but sending its representation?

                                                                    In Erlang any term can be serialized, including functions, so sending a function to another process/node isn’t different from sending any other term. The nodes don’t need to share code.

                                                                    1> term_to_binary(fun (X) -> X + 1 end).
                                                                    <<131,112,0,0,2,241,1,174,37,189,114,105,121,227,76,88,
                                                                      139,139,101,146,181,186,175,0,0,0,7,0,0,...>>
                                                                    
                                                                    1. 1

                                                                      Would that also work if the function contains free variables?

                                                                    That is, what’s the result of calling this function:

                                                                      fun(Y) -> term_to_binary(fun (X) -> X + Y end) end.
                                                                      

                                                                      (Sorry for the pseudo-Erlang)

                                                                      1. 5

                                                                        Yep!

                                                                        1> F1 = fun(Y) -> term_to_binary(fun (X) -> X + Y end) end.
                                                                        #Fun<erl_eval.7.91303403>
                                                                        2> F2 = binary_to_term(F1(2)).
                                                                        #Fun<erl_eval.7.91303403>
                                                                        3> F2(3).
                                                                        5
                                                                        

                                                                        … or even

                                                                        1> SerializedF1 = term_to_binary(fun(Y) -> term_to_binary(fun (X) -> X + Y end) end).
                                                                        <<131,112,0,0,3,96,1,174,37,189,114,105,121,227,76,88,139,
                                                                          139,101,146,181,186,175,0,0,0,7,0,0,...>>
                                                                      2> F1 = binary_to_term(SerializedF1).
                                                                      #Fun<erl_eval.7.91303403>
                                                                      3> F2 = binary_to_term(F1(2)).
                                                                      #Fun<erl_eval.7.91303403>
                                                                      4> F2(3).
                                                                      5
                                                                        

                                                                        The format is documented here: http://erlang.org/doc/apps/erts/erl_ext_dist.html

                                                                      2. 0

                                                                        How is that different from not actually sending an integer

                                                                        Okay, it’s not. It’s just much more complicated. Sending an integer? Trivial. Sending a plain old data structure? Easy. Sending a whole graph? Doable. Sending code? Scary.

                                                                        Sure, if you have a bytecode compiler and eval, sending functions is a piece of cake. Good luck doing that however without explicit language support. In C for instance.

                                                                        1. 6

                                                                          You can do it, for instance, by sending a DLL file over a TCP connection and linking it into the application receiving it. It’s harder, it’s flakier, it’s platform-dependent, and it’s the sort of thing anyone sane will look at and say “okay but why though”. It’s just that Erlang is designed to be able to do that sort of thing easily, and accepts the tradeoffs necessary, and C/C++ are not.

                                                                          1. 3

                                                                            The Morris Worm of 1988 used a similar method to infect new hosts.

                                                                  1. 4

                                                                    Response from Stack Overflow’s Architecture Lead: https://meta.stackoverflow.com/a/386499/459877

                                                                    We are aware of it. We are not okay with it.

                                                                    1. 1

                                                                      And think of all the sites that are either not aware of it or completely okay with it. StackOverflow serves programmers, a group that is much more likely than the regular population to care or even just know about tracking. I doubt Facebook gives a damn about Google’s ads fingerprinting users–or even a small site like Bulbapedia (to pick a random example, no offense Bulbapedia.)

                                                                    1. 5

                                                                      I don’t think the author has enough experience using git to be dispensing advice on it.

                                                                    You use reset if you want to locally drop one or more commits, moving the changes back into the working tree (plain reset; --soft keeps them staged, and --hard discards them entirely, which is, in my experience, rarely what you want). You use reset to change the local ‘history’, which is not a problem if you do it locally to e.g. create better commits before pushing to remote. You should never [1] use reset to drop a commit that has already been pushed to a remote repo shared with others.

                                                                      You use revert if you need to ‘publish’ a revert of one or more commits that have already been pushed to a remote repo. revert is just a convenience command to append a new commit that exactly undoes one or more previous commits. It does not change the history.
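
                                                                    In command form, the two cases look roughly like this (<sha> standing in for the commit to undo):

                                                                    # local only: drop the last commit, keeping its changes in the working tree
                                                                    git reset HEAD~1
                                                                    # already pushed: append a new commit that exactly undoes <sha>
                                                                    git revert <sha>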

                                                                      It’s recommended to use git revert instead of git reset in enterprise environment.

                                                                      It’s recommended you try to make sure you don’t need to use git revert. Use git reset as much as you want. If you seem to need a push -f, you’re doing it wrong. Don’t ever [1] do that.

                                                                      [1] For certain values of (n)ever. If you are absolutely sure you know what you’re doing, and your colleagues know as well, you can use push -f.

                                                                      1. 5

                                                                        You probably want the (unfortunately less ergonomic) --force-with-lease, not -f/--force, to avoid accidentally overwriting any changes that have happened after your last fetch, though.
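
                                                                      Usage is the same as a plain force push (my-branch being a placeholder):

                                                                      # refuses to update the remote branch if it moved since your last fetch
                                                                      git push --force-with-lease origin my-branch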

                                                                        1. 1

                                                                          I didn’t even know this existed until magit made it visible in its menus. After some research, --force-with-lease is almost always the right answer (though, YMMV)

                                                                        2. 3

                                                                          I’m comfortable resetting and force pushing to a WIP feature branch, but that’s a privilege of our workflow.

                                                                          1. 4

                                                                        Another thing you can do is use git commit --fixup and then git rebase -i --autosquash at the end.
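
                                                                        Roughly, the workflow (abc123 standing in for the commit being fixed, origin/master for the base):

                                                                        # record a fix destined to be squashed into abc123
                                                                        git commit --fixup abc123
                                                                        # the interactive rebase then reorders and squashes it automatically
                                                                        git rebase -i --autosquash origin/master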

                                                                            I wonder if revert would be better understood if it were called “invert” instead.

                                                                            1. 2

                                                                              That looks useful! Thanks!

                                                                          2. 2

                                                                            Reset without --hard is rarely what I want. :)

                                                                            I use it for example when I merged in master and made a few mistakes resolving conflicts. To try a new merge, I reset hard. Well, I usually reset, then notice I forgot --hard, and then checkout.

                                                                            To clean up my local history, I usually do rebase -i instead of reset.

                                                                            I also use reset to undo a rebase. Usually when resolving conflicts failed. That might be my general theme: Use reset to make another try resolving conflicts.
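
                                                                      Handy for exactly that: merge and rebase both set ORIG_HEAD, so getting back to where you started is one step:

                                                                      # abandon the conflicted attempt and return to the pre-merge/pre-rebase tip
                                                                      git reset --hard ORIG_HEAD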

                                                                            1. 1

                                                                        rebase -i is the best way to clean up commits (squash/drop/reorder/change comment) if you know how to use it well. Sometimes, though, a reset --hard is nice to just drop changes that you don’t want while keeping the commits you like from the branch.
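
                                                                        For example:

                                                                        # interactively squash/drop/reorder the last three commits
                                                                        git rebase -i HEAD~3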

                                                                          1. 5

                                                                      We started using Buildkite in 2014, and I have only good things to say about it. I remember we jokingly called it Keith-as-a-service around the office in the beginning, because Keith, their CTO, would always go above and beyond in helping us with any issues we had.

                                                                            1. 9

                                                                              This is exactly on the money. With Marzipan, the message from Apple can easily be interpreted as “we don’t want great Mac apps on the Mac, we want apps on the Mac”. The folks who wrote that Electron was a scourge should be considering this scourge alongside it.

                                                                              1. 14

                                                                                The folks who wrote that Electron was a scourge should be considering this scourge alongside it.

                                                                                Quote from the article that called Electron a scourge:

                                                                                Even Apple, of all companies, is shipping Mac apps with glaring un-Mac-like problems. The “Marzipan” apps on MacOS 10.14 Mojave — News, Home, Stocks, Voice Memos — are dreadfully bad apps. They’re functionally poor, and design-wise foreign-feeling. I honestly don’t understand how Apple decided it was OK to ship these apps.

                                                                              1. 4

                                                                                staged = diff --cached

                                                                                You can actually use --staged instead of --cached, which is much easier to remember. (You might still want to keep the alias, of course.)
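
                                                                          Defining such an alias is a one-liner (the alias name is up to you):

                                                                          # make `git staged` show what is currently staged
                                                                          git config --global alias.staged 'diff --staged'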