Threads for sebastien

  1. 13

    That’s a fairly complete overview, and kudos to the author for taking the time to lay out the historical background. I find that too often in tech we don’t take the time to put things in perspective, and take stock of where we’re coming from.

    1. 2

      The idea of using a compiler to convert declarative UI changes à la React into granular direct node changes is essentially the best of both worlds: high-level code and an efficient implementation. React has clearly redefined the standard for implementing UIs, and this approach removes pretty much all the cons. It sounds almost too good to be true!

      1. 4

        I typically need to manage infrastructure at at least three levels: pre-provisioning to acquire static resources like domain names and static IPs, provisioning to allocate and set up the baseline resources, and then runtime provisioning to create things like log groups or buckets, as well as scaling up and down. Step 1 is typically done manually, step 2 through Pulumi, and step 3 through Boto. To me that is where the gap is, and how I interpret this article: managing infrastructure is in practice a continuous process, even though our attention has been mostly focused on step 2. I do think that using a general-purpose language is best, as you can leverage abstraction and composition primitives, and it is more likely to be a lightweight implementation (as opposed to Pulumi or CDK, for instance). As soon as the infrastructure is dynamic, it needs to be managed actively, and the best way to do that IMO is to use code to define the behaviour and operators to reconcile the target state and the current state. I feel that we’re missing simpler solutions than what is mentioned in the article.
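
        For illustration, “step 3” might look something like this with boto3 (the names are made up, and region/error handling is omitted):

        import boto3

        def ensure_log_group(name: str) -> None:
            # Create the CloudWatch log group only if it does not exist yet.
            logs = boto3.client("logs")
            groups = logs.describe_log_groups(logGroupNamePrefix=name)["logGroups"]
            if not any(g["logGroupName"] == name for g in groups):
                logs.create_log_group(logGroupName=name, tags={"managed-by": "runtime"})

        def ensure_bucket(name: str) -> None:
            # Same idea for an S3 bucket created on demand by the service.
            s3 = boto3.client("s3")
            if name not in [b["Name"] for b in s3.list_buckets()["Buckets"]]:
                s3.create_bucket(Bucket=name)

        ensure_log_group("/myapp/workers")        # hypothetical names
        ensure_bucket("myapp-runtime-artifacts")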

        1. 3

          Why do you do the first step manually rather than through terraform (or cdktf if you’re already using cdk)?

          1. 1

            I guess it’s because we’re using Pulumi rather than Terraform directly, and we don’t want Pulumi to accidentally deprovision EIPs that we’ve whitelisted to access legacy services. It’s a clear gap, and I’m thinking about better ways to manage and automate static and runtime resource provisioning. That will need to be API-driven, as having to call Terraform from within a service would be quite inefficient.

            1. 2

              For the accidental removal protection, you could add some IAM policies which protect the specific resources either by tag or specific id. Then you have one “normal” infra role with protection and one “power user”. We do that for database instances.
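
              As a rough illustration (hypothetical role and tag names, not an actual policy), the guard-rail could be an explicit Deny keyed off a tag, attached only to the “normal” role:

              import json
              import boto3

              # Explicit Deny on releasing any Elastic IP that carries the protection tag.
              deny_protected_eips = {
                  "Version": "2012-10-17",
                  "Statement": [{
                      "Effect": "Deny",
                      "Action": ["ec2:ReleaseAddress", "ec2:DisassociateAddress"],
                      "Resource": "*",
                      "Condition": {"StringEquals": {"aws:ResourceTag/protected": "true"}},
                  }],
              }

              boto3.client("iam").put_role_policy(
                  RoleName="normal-infra",              # hypothetical role name
                  PolicyName="deny-protected-eips",
                  PolicyDocument=json.dumps(deny_protected_eips),
              )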

              1. 1

                This seems like a good idea! I work in finance, though, where the risk appetite is quite low and the pain of an outage very high. Having to reprovision an interface resource because a permission or role was misconfigured is not something I’d like to go through ;)

        1. 4

          At $WORK we use Nix, and nix-shell in particular, to provision reproducible environments for developers and for CI/CD pipelines. This allows anyone to just do a git clone and then make run to see the project running, and we have the assurance that it will run the same in a CI pipeline.

          We also use Nix for building container images, so that we get full reproducibility and granular control over our software supply chain.

          We don’t use Nix as our build system, though; for that we use Make, which I feel takes the core experience of shell scripting and adds declarative rules on top.

          Reading about Bazel, it seems like it would be well suited to replace Make. Maybe there is a way for Nix to grow in this area while streamlining its features and commands?

          1. 3

            So you use Nix to ensure all dependencies are present but then let your project build “naturally”? If so, that sounds like an appealing approach to me. I’ve been put off using Nix because it seemed like I’d end up deep down a rabbit hole reproducing build tooling that was already working.

            1. 3

              I had something similar at the previous two companies, and it was quite a pleasant experience. There were some rough edges when upgrading Haskell dependencies, but in general it worked quite smoothly.

              One of the companies open-sourced the system we used back in the day: https://github.com/digital-asset/dev-env. This is a lazy-loading dev environment, where each tool would be fetched on first use. Quite handy for monorepos, where one half uses Scala exclusively and the other half uses Haskell - neither has to wait for the tools from the other side to be downloaded.

              1. 1

                Yes, I use Nix to provision a reproducible and auditable toolchain, but use Make for building assets and automation. We also use Nix to package assets as a reproducible container, which is quite handy.

                1. 1

                  I use NixOS for most of my current and new infrastructure, but I still maintain a bunch of “legacy” Proxmox machines and Debian VMs via an older Ansible setup.

                  The thing includes stuff like custom plugins for secret handling, CMDB interaction and so on, and uses Ansible together with Mitogen. A relatively simple flake.nix uses nixpkgs, poetry2nix and a devShell with a shellHook to set up the correct Python environment, the password manager, and a bunch of env vars, so that a simple nix develop starts a shell with everything needed to deploy, update and maintain that Ansible setup across machines (works on NixOS and Debian, and worked on macOS, but we don’t use that anymore).

              1. 3

                I like the way you present the features; the step-by-step approach with comparisons to JavaScript gives a glimpse of the design decisions that you made creating Peacock.

                1. 10

                  I’ve found that posting mine publicly (https://www.jvt.me/salary/) has been hugely beneficial, both for me and to make it more visible to others what they should be asking for. No negatives found as part of the recent job hunt, either

                  1. 5

                    I feel like I’d need to check in with my spouse on this because the downside is every marketing and credit firm could find your salary history and target you as such. For example, just working at a big tech company gets me a lot of messages from money managers.

                    I love the idea of it and routinely tell folks in private what I make so they can negotiate better. I might at least create a harder to find page of it (thinking of a static page with appropriate headers and some tighter controls via CDN). I’m just not sure I want the things I literally block with DNS to have that data and want to try to figure out a way to prevent that.

                    1. 1

                      Thanks for sharing!

                      1. 1

                        You’re welcome! I save others’ I come across at https://www.jvt.me/tags/salary/ - got a few folks in the UK who’ve done it off the back of mine which is nice ☺

                      2. 1

                        You were earning more or less minimum wage when you started at Intel?

                        1. 1

                          Yep, as an intern, salaries aren’t great. At least we get paid in the UK, but it ain’t much 😅 ended up being the same amount (minus cost of living) as my student loan

                          1. 2

                            Ah, OK, just skimmed over it and I didn’t see you were an intern then :)

                            1. 1

                              Is that pro-rated? That looks to be in the right ballpark for what we pay for a three-month internship. The equivalent annual salary is four times that. Interns in the US are generally paid more, but when I was at the University of Cambridge it was pretty common that PhD students who went on internships would earn more than their supervisors for the three months of the internship.

                              1. 2

                                Sorry by intern I mean a year long placement student. So yep, that was the salary across the year, not based on just a summer 🙃

                        1. 6

                          Everyone in the comments so far seems to be accepting without question the claim that using tabs for recipes was a mistake. I push back on that opinion pretty hard. I think from a language design perspective, using tabs makes a whole lot of sense.

                          I feel that much of the anger toward tabs is misdirected runoff from a legitimate complaint about make, its very poor error messages:

                          makefile:2: *** missing separator.  Stop.
                          

                          I don’t think there’s any excusing this. Distinct characters for recipe lines should make it easier to print helpful messages. Just tell me that I have a line which begins with a space.
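
                          A friendlier check along those lines only takes a few lines; this is a toy sketch, nothing like make’s actual parser:

                          import sys

                          # Toy check, not make's real grammar: in a makefile, a non-blank line that
                          # starts with spaces is almost always a recipe line that should start with a tab.
                          for lineno, line in enumerate(open(sys.argv[1]), start=1):
                              if line.startswith(" ") and line.strip():
                                  print(f"{sys.argv[1]}:{lineno}: line begins with a space; "
                                        "recipe lines must begin with a tab")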

                          Anyway, as is often the case, modern tooling makes make much easier to work with:

                          • syntax highlighting will show me when I have a problem line before even running it.
                          • editors can automatically handle working with tab characters (even invisibly to the programmer, if you want).
                          • if I really hated invisible characters, I could render tabs in a variety of ways.

                          My expressed opinion here is that the drawbacks of the tab character are exaggerated, and are outweighed by the benefits. Indentation is what the \t character exists for (and nothing else should be indented in a makefile). Because it is easier to parse, it cooperates better with tooling, which is especially important when it comes to accessibility tools like screen-readers.

                          Change my view :^)

                          1. 4

                            I agree the error messages are terrible, but you shouldn’t have to run the program to be able to spot that it has a syntax error in it. If you can have two files that are indistinguishable when viewed with cat but one is valid and the other has errors, you’ve made a terrible mistake as a language designer. Yes, you can teach your editor to be smarter about it, but there’s no reason that should be required, especially when the decision to use tabs doesn’t convey any benefit to the end user.

                            1. 2

                              A lot of the hate for tabs comes from them not being visible in text editors by default. I’ve been forced into using spaces in Python by autoformatting tools, and now I need vertical guides to make sure I’m not one space off in my narrow font.

                            1. 4

                              Made me think of this similar project https://github.com/empirical-soft/empirical-lang

                              1. 3

                                I think the example is not ideal, as it’s pretty much the same code (semantically) with different syntaxes. A better example would be code written with for/while vs code written with map/filter/reduce and recursion. The latter functional style is encouraged on the front-end (React), but will very likely have a significant overhead compared to the less elegant imperative style. That would be something interesting to quantify.
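
                                As a rough sketch of the kind of measurement I mean (in Python rather than JavaScript, so not representative of an actual React render path):

                                from timeit import timeit

                                data = list(range(100_000))

                                def imperative() -> int:
                                    total = 0
                                    for x in data:
                                        if x % 2 == 0:
                                            total += x * x
                                    return total

                                def functional() -> int:
                                    return sum(map(lambda x: x * x, filter(lambda x: x % 2 == 0, data)))

                                # Same result, different style; the gap between the timings is the cost of
                                # the extra function calls and intermediate iterators.
                                assert imperative() == functional()
                                print("imperative:", timeit(imperative, number=100))
                                print("functional:", timeit(functional, number=100))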

                                1. 5

                                  Looks useful, it’s a nice compromise between XML and something more like TOML. It seems very similar to Tree Notation; it could even be an alternate, indentation-insensitive way to write Tree Notation.

                                  I agree that SML is a bad name. Standard ML is a well-known language, and confusion is inevitable. You could keep “Simple Markup Language”, and use a different acronym, maybe “SiML”?

                                  1. 2

                                    I was going to make the same comment about Tree Notation, although I find the unfortunately named SML clearer to understand based on its documentation and easier to read. I’d let go of the End, though.

                                  1. 17

                                      It’s a bit of a shame that Mercurial didn’t prevail, as it does solve most of the usability and model limitations. See in particular Mercurial’s histedit extension to safely rewrite commits. Git seems to have sedimented deeply in our toolchain, and I think it’s here to stay for a while, the hope being that the porcelain (UI) gets completely reworked while keeping compatibility with current workflows and tools (ie GitHub).

                                    1. 8

                                        Have you had a chance to look at the Evolve extension? I actively worked on histedit and found it one of Mercurial’s weakest parts (as is git rebase), as it requires users to retain state in their mind between commands. Evolve goes beyond that by smartly tracking not just the history, but the changes between versions of the history (commit a moved to become b, and so on). As a result, a lot of amending, moving, etc. of commits can be done with one command, as evolve keeps track of what needs to follow.

                                        To illustrate in the simplest way: you can just go back to any non-public commit (another amazing concept in Mercurial), amend it, and then use evolve to automatically rebase the descendant commits on top of it, without the user having to continuously stay in a “rebase” state. Of course you still need that one more command to run at the end, but I found it much easier to reason about.

                                      1. 7

                                        I was a Mercurial hold-out for the longest time, and Evolve was a nice thing to boast about, although I didn’t use it that much.

                                          However, after discovering Magit (Emacs), I finally switched all my remaining personal repos to Git, first because far more people and services know/support Git, and second because with Magit I was getting a much nicer UI than either command-line hg or any other UI I could find for Mercurial.

                                        In particular, while Mercurial’s Evolve is cool, with Magit rebases and “instant fixups”, which I do a lot, are now very fast and pretty painless most of the time.

                                        1. 3

                                          Magit is indeed wonderful. I wish I could move to Emacs but I am stuck in vim and/or helix.

                                            In the end I also switched off Mercurial, mostly due to convenience and just how dominant GitHub & Co got. Still lucky to have it at work (although highly modified).

                                          1. 4

                                            Curious what keeps you stuck.

                                            Also you can use magit without using emacs as your editor.

                                        2. 2

                                          Yes, I did try to use Evolve when I was using Mercurial for my main workflow, but somehow never managed to wrap my head around it and integrate it. I mostly used histedit to squash and cherry-pick commits, which I found pretty good for my needs. What made me quit Mercurial was that I was using hg-git (with GitHub, as Bitbucket was winding down Mercurial support and sr.ht wasn’t there at the time), and after an update I was unable to clone repositories anymore, due to a low-level format change.

                                          1. 1

                                            Are you describing hg up, hg amend, and hg restack? Or is Evolve some whole other thing?

                                            1. 1

                                              I think hg restack is a Meta/FB thing, but I think it’s based on the same tracking of how commits are rewritten as Evolve uses, and I think they provide similar workflows. https://www.mercurial-scm.org/doc/evolution/ has information about the Evolve extension.

                                              1. 1

                                                Just read the guide. Yeah restack and evolve look the same to me. The only difference I noticed is that Meta makes restack an automatic operation after amend.

                                          2. 5

                                            Git seems to have sedimented deeply in our toolchain, and I think it’s here to stay for a while, the hope being that the porcelain (UI) gets completely reworked while keeping compatibility with current workflows and tools (ie GitHub).

                                            I hear this loud and clear. Thank you for your comment, it’s a great point considering the immaturity of, say, Pijul’s Nest software/service.

                                          1. 9

                                            It’s almost as if Vim was taking the Microsoft approach with its competitor, NeoVim, by going the proprietary route. Either the NeoVim community invests in supporting the new VimScript, or the gap between the fork and the original is going to widen to a point of rupture. Maybe that’s for the best? I switched to NeoVim a few years ago, and since then have been quite puzzled by Vim’s strategy.

                                            1. 5

                                              https://vimhelp.org/vim9.txt.html#vim9-rationale

                                              Or maybe it’s the stated rationale. I won’t reiterate every point because you can read it for yourself. However, my feeling is that if Neovim had been more ambitious and found a way to host TypeScript instead of Lua, then maybe Vim would have just adopted it too (I’m assuming V8 would meet the perf goals).

                                              1. 3

                                                V8 is much bigger than LuaJIT, and IIRC for a long time LuaJIT was faster than V8 (maybe it still is), and its FFI worked much better than Node’s.

                                            1. 2

                                              A little more on the methodology

                                              The output we see in the Medium post is, ignoring a few telltales that throw the whole thing into question, pretty darn impressive. But what’s included in this interview is cherry-picked from a number of different interviews, and the actual prompts have been edited.

                                              1. 3

                                                Link to the actual interview document: here

                                                1. 1

                                                  Nice, where did you find that?

                                                  1. 1

                                                    The Twitter link was to a post that provided it. I just wanted people to be able to skip Twitter and go straight to the content.

                                                2. 1

                                                  I did find some of the questions to be formulated in a way that contained the prompt for the expected answer, a little bit like a cop asking “were you at John’s house on November 23rd?”. I also think they should have validated basic reasoning capabilities, such as testing memory (the transcript mentions previous conversations, but that may be purely rhetorical). Teaching the AI a new language that it has no data about, simply by interaction, would be a good test, for example.

                                                1. 3

                                                  The headline doesn’t capture what’s cool about this, which is that mypyc uses type hints to generate and compile C code for big speedups over regular interpreted Python.
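
                                                  Roughly how it’s used: ordinary annotated Python goes in, a C extension comes out (the function below is just a made-up example):

                                                  # fib.py - plain type-annotated Python. Compile with `mypyc fib.py`, then
                                                  # `python3 -c "import fib; print(fib.fib(40))"` picks up the generated C
                                                  # extension instead of the interpreted source.

                                                  def fib(n: int) -> int:
                                                      a: int = 0
                                                      b: int = 1
                                                      for _ in range(n):
                                                          a, b = b, a + b
                                                      return a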

                                                  1. 3

                                                    It does make you wonder how much something like this could be built into future versions of Python as part of the pyc compilation process.

                                                    1. 2

                                                      Guido came out of retirement to make Python faster, and I’m pretty sure this would be in line with those efforts. There’s a bright future ahead!

                                                  1. 3

                                                    Some previous discussion on HN: https://news.ycombinator.com/item?id=30218954

                                                    It’s not clear to me what advantages, if any, this encoding has over the current semi-standard of Protocol Buffers (+ clones such as Thrift).

                                                    1. 1

                                                      I am also unsure where this fits in with other security-focused formats such as Saltpack and more general efforts at future-proofing attempts such as Multiformats.

                                                      1. 1

                                                        Likewise, it was not clear to me why this would be more secure than pretty much any other format. There’s a bit of a disconnect between the goals and the result.

                                                        1. 4

                                                          It’s pretty strict about formats and representable values, and specifies a bunch of types not natively represented in JSON, so it’d be less prone to issues caused by ambiguous parsing/validation, which has been the cause of some major security holes.
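
                                                          A classic example of that kind of ambiguity is duplicate keys in JSON, where parsers quietly disagree about which value wins:

                                                          import json

                                                          # Python's json module keeps the last duplicate key; some other parsers keep
                                                          # the first. A validator and a consumer that disagree on which one wins is a
                                                          # classic source of security bugs.
                                                          print(json.loads('{"role": "user", "role": "admin"}'))   # -> {'role': 'admin'}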

                                                          But while I like the approach, especially having both a binary and text encoding, I think it tries to do too much. My enthusiasm waned the further I read down the spec. I get the feeling it will be quite a bit of work to implement, which weighs against its getting much use. Part of the appeal of JSON, outside JavaScript, is that it’s really easy to write a codec.

                                                          1. 3

                                                            When I got to the part about graphs and trees, that felt like a really bad scope creep.

                                                    1. 6

                                                      The more I do threading… the more I like processes.

                                                      1. 3

                                                        This is one of the many reasons I love Rust: threading is safe, so you get the benefits of a shared address space without the downsides.

                                                        1. 4

                                                          so you get the benefits of shared borrowed address space

                                                          Tiny adjustment to your statement that doesn’t invalidate what you’re saying…

                                                        2. 3

                                                          Likewise, especially when I realised that there’s no way to kill a system thread properly, and only hacks to kill a Python thread, none of which works if your thread is blocking on a system call.
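
                                                        A small sketch of the contrast in Python: a process stuck in a blocking call can be terminated from the outside, while a thread in the same position cannot:

                                                        import multiprocessing
                                                        import threading
                                                        import time

                                                        def stuck() -> None:
                                                            time.sleep(3600)   # stand-in for a long blocking system call

                                                        if __name__ == "__main__":
                                                            p = multiprocessing.Process(target=stuck)
                                                            p.start()
                                                            p.terminate()      # a process can be killed from the outside
                                                            p.join()

                                                            t = threading.Thread(target=stuck, daemon=True)
                                                            t.start()
                                                            # ...but there is no t.terminate(); the interpreter can only exit
                                                            # and abandon the thread (and only because it is a daemon).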

                                                        1. 2

                                                          Deno should help solve a lot of that, its API is close to the Web API, which encourages code sharing and reuse between client and server.

                                                          1. 6

                                                            The degree to which this is appealing to people has always puzzled me a bit. What code are you looking to share across client and server?

                                                            With the caveat that I personally do very little frontend development, nearly all the web-based systems I’ve worked on had only a tiny number of things that I could see would be valuable to share between client and server. Wire formats and validation logic are the two categories that come to mind.

                                                            For wire formats, you can use OpenAPI or gRPC or Thrift or whatever to share across client and server without even requiring the two sides to be written in the same language, and some of those tools will give you API versioning and other features too.

                                                            Validation logic totally makes sense to want to share, but it’s usually not a lot of code and optimizing for sharing it seems like it has limited benefit.

                                                            What else?

                                                            1. 3

                                                              if you have an existing SPA you can render most of it on the server-side then shuffle it back to the browser for display. then the browser loads the rendered state from the server and starts pulling in any other dynamic bits from your API in the background.

                                                              the biggest appeal seems to be the ability to re-use your existing SPA without rearchitecting everything from scratch. that said, I agree with OP in that this makes things pretty complicated.

                                                              1. 1

                                                                My personal use case, for instance, is that I do a lot of prototyping in Observable, a JavaScript Web-based notebook editor. Over the years I’ve accumulated quite a bit of code (see for instance my personal standard library https://observablehq.com/@sebastien/boilerplate), and I would like to run it outside of just the browser. I really like the fact that the code can run from any browser, but with tools like Deno and a bit of glue code I can run most of my notebooks standalone. It gives a lot of freedom and the ability to better reuse code across application domains.

                                                            1. 4

                                                              I think that Bash and Shellcheck may yield a better result than Fish. Fish is an awesome interactive shell, but I wish it had a Bash-compatible syntax. Maybe consider Oil or NuShell as well if you’re looking for an alternative?

                                                              1. 3

                                                                The documentation seems really well structured and written, and there seems to be a focus on software architecture at the language design level, which I think is worth noting. Now what I want to know is how well it performs compared to, say, Python and Go.

                                                                1. 11

                                                                  I think people mistake make for a task runner when it’s actually a (very basic) build system which has some options that make it somewhat viable as a task runner. I don’t have resources at hand for getting people started on how to use make “the right way”, but one neat trick that might get you started is knowing that you can do this without even having a Makefile:

                                                                  $ printf '#include <stdio.h>\nint main(void) { puts("wah!"); return 0; }' > wah.c
                                                                  $ make wah.o wah
                                                                  cc    -c -o wah.o wah.c
                                                                  cc   wah.o   -o wah
                                                                  $ ./wah
                                                                  wah!
                                                                  

                                                                  That said, make has a lot of footguns, and with most programming languages you’re gonna be using a language-specific build system that keeps track of all the complexities you’d run into if you were to write the Makefiles yourself, so using a shell script or an actual task runner will serve you better than make.

                                                                  1. 12

                                                                    +1. Make is not a task runner.

                                                                    Rule of thumb: any make target that doesn’t become a no-op if you run it twice in a row is better expressed as a shell script. make run &c. is a clear red flag.

                                                                    1. 3

                                                                      Yup agreed, another issue is that most such make targets should be .PHONY, but this bug is easy to miss.

                                                                      i.e. if you touch mytarget, the Makefile will no longer “work” as expected. People rely on the file not existing.

                                                                      1. 2

                                                                        I would actually disagree on that. I use Makefiles as the default entry point to do anything with my software (building, packaging, running, provisioning, deploying, testing, releasing). The key problem is that Makefiles do not clearly make the distinction between asset-producing rules and automation rules (.PHONY). I find it makes things easy for others to use, even when they’re not familiar with the underlying stack. make run, make build, make dist, make test, etc. are pretty straightforward to use. I also use make to provision Nix development environments, and have now reached the holy grail of “make run” working on any Linux box after a fresh clone of the git repo with no extra configuration.

                                                                        1. 2

                                                                          The key problem is that Makefiles do not clearly make the distinction between asset-producing rules and automation rules (.PHONY).

                                                                          Well, I guess I would say that make is fundamentally a tool for asset producing rules, and while you can bend a few features (like .PHONY) to make it do other things, that’s not really what it’s been built for.

                                                                      2. 5

                                                                        Yes. Underneath the arcane language lies a delightful paradigm – if there’s any point in using Make, it must be this: to just describe the dependency tree for the machine, let the machine figure out what needs to be rebuilt when inputs change, and profit massively by only rebuilding what needs to be, with implicit parallelism even.