I found this all on the internet, but don’t really remember where.
# git lg
git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
# git ll
git config --global alias.ll "log --pretty=format:'%C(yellow)%h%Cred%d\\ %Creset%s%Cblue\\ [%cn]' --decorate --numstat"
# Better diffs (comes from git contrib scripts)
git config --global pager.log 'diff-highlight | less'
git config --global pager.show 'diff-highlight | less'
git config --global pager.diff 'diff-highlight | less'
git config --global interactive.diffFilter diff-highlight
curl https://raw.githubusercontent.com/git/git/master/contrib/diff-highlight/diff-highlight > ~/bin/diff-highlight && chmod +x ~/bin/diff-highlight
Added images for the git lg and git ll output.
Since the focus through both articles seems to be on REST as a SPA API (not REST as a service-to-service communication API), I find it strange that the author waits until the closing thoughts of the second article to make a sideways glance at GraphQL (by mentioning Relay). Perhaps the timing of the article wasn’t right to address that topic yet, but I feel it would be an important part of the conversation today.
tl;dr: Use Yarn, ESLint, Jest, and Webpack; split your code; and hash static assets for caching.
Doesn’t get into how to do any of that beyond a superficial English description of one CI process.
Relevant to my current work, thanks!
I’d love to see a deeper cut: Is there type inference? Destructuring? Product and sum types? Can we see examples of the same thing using both tools? How about examples of things one can do that the other can’t?
I’d also love to see stronger opinions: Should I decide just on the basis of existing Angular/React? If I don’t use either, which tool should I pick? Does one have a noticeably larger, friendlier community?
I’ve been working a lot with TypeScript for the past year, so I can answer some of those.
Is there type inference?
Yes. Both Flow and TypeScript do type inference.
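As a quick sketch of what that inference looks like in TypeScript (Flow behaves similarly; the names below are just illustrative):

```typescript
// No annotations below: the checker infers every type.
const n = 21;                   // inferred as number
const doubled = n * 2;          // number
const names = ["ada", "grace"]; // inferred as string[]

// The return type (string[]) is inferred from the body.
function shout(words: string[]) {
  return words.map((w) => w.toUpperCase());
}

const shouted = shout(names);
// shout(doubled) would fail to compile: number is not string[]
```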
Destructuring?
TypeScript supports (and compiles) destructuring. Flow doesn’t do the compiling part, since it’s just a type checker, not a compiler, but you’d usually pair Flow with Babel for that.
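For illustration, a small sketch of typed destructuring in TypeScript (the `Point` type is made up):

```typescript
// Typed destructuring: annotate the whole pattern, not each binding.
interface Point { x: number; y: number; }

const { x, y }: Point = { x: 3, y: 4 };

// Also works in parameter position, with defaults:
function magnitude({ x, y }: Point, scale = 1): number {
  return Math.sqrt(x * x + y * y) * scale;
}

const m = magnitude({ x, y });
```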
Product and sum types?
TypeScript has sum types. I’m not familiar with product types.
Should I decide just on the basis of existing Angular/React?
Maybe in the case of Angular, since it’s really a TypeScript-first framework. React is less opinionated, and TypeScript has really good support for React, including compiling JSX if desired.
If I don’t use either, which tool should I pick?
TypeScript is a fair bit older and more mature as far as I can tell.
Does one have a noticeably larger, friendlier community?
TypeScript is much larger.
The really big advantage that TypeScript has is the existing ecosystem of third party type definitions which you can install via npm.
Say you want to use Lodash, which isn’t written in TypeScript, but you want your usage type-checked. Just npm install @types/lodash and you’ll get a community-maintained type definition which the TypeScript compiler will recognize and automatically match up with the existing lodash JS package in your node_modules.
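For illustration, this is roughly the shape of such a declaration file; “untyped-lib” and its functions are made up, not the actual @types/lodash contents:

```typescript
// A type-only declaration file for a hypothetical untyped package.
// @types packages ship files much like this; there is no runtime code.
declare module "untyped-lib" {
  // Split an array into chunks of the given size.
  export function chunk<T>(input: T[], size: number): T[][];
  // Remove duplicate values.
  export function uniq<T>(input: T[]): T[];
}
```

With this in place, `import { chunk } from "untyped-lib"` would be fully type-checked even though the package itself ships plain JS.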
If I understand correctly, what TypeScript has is union types (as in set-theoretic unions), not sum types. The difference shows up when you try to union/sum a type with itself:
-- Haskell
data Sum = A Foo | B Foo
Ignoring bottom, Sum is the disjoint union of two copies of Foo.
// TypeScript
type Union = Foo | Foo
The union of Foo with itself is just Foo again.
TypeScript 2 has sum types: https://blog.mariusschulz.com/2016/11/03/typescript-2-0-tagged-union-types
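A sketch of what those tagged unions look like (the shape types below are illustrative, not taken from the linked post):

```typescript
// A discriminant property ("kind") makes a union behave like an
// ML-style sum type, with narrowing in each switch case.
interface Circle { kind: "circle"; radius: number; }
interface Square { kind: "square"; side: number; }
type Shape = Circle | Square;

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius * s.radius; // s narrowed to Circle
    case "square":
      return s.side * s.side;               // s narrowed to Square
  }
}

const a = area({ kind: "square", side: 3 });
```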
Product types as covered by Wikipedia. tl;dr: Haskell/ML/Rust shit.
I suspect we probably recognize them by another name and concrete example, and I also suspect somebody here might be able to bridge the theory with the practice. (nudge nudge @pushcx)
To give a simple version, sum types are OR while product types are AND. Tuples count as product types, and TypeScript and Flow support them.
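A sketch of that AND reading in TypeScript (the tuple type here is made up for illustration):

```typescript
// A tuple is a product type: [string, number] holds a string AND a
// number, and position matters ([number, string] is a different type).
type NameAge = [string, number];

const entry: NameAge = ["ada", 36];
const [who, years] = entry; // take the product apart again

// const bad: NameAge = [36, "ada"]; // compile-time error: wrong order
```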
So are product types the same as intersection types? They’re not like tuples, but do express an “and” relationship. https://www.typescriptlang.org/docs/handbook/advanced-types.html
As I understand it, product types are different from intersection types in that product types vary depending on the order of the operands, i.e. “A × B” (where × is the product-type operator) differs from “B × A”.
A struct in C is a classic example of a product type; if you include two sub-structs of the same type (A × A), the elements of each differ, whereas with intersection types A & A would be the same as just A itself, and A & B = B & A.
(I think. Correct me if I’m wrong, someone more knowledgeable; this is based on my interactions with product/sum types in ML-y things, and only a little bit of interaction with intersection types while helping a friend debug some TypeScript, but I have no real-world experience with the latter.)
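A hedged TypeScript sketch of that distinction (the type names below are made up):

```typescript
// Intersection types merge members: a value of A & B must satisfy
// both A and B at once. Unlike a product, the operands don't stay
// separate: A & A is just A, and A & B is the same type as B & A.
interface Named { name: string; }
interface Aged { age: number; }

type Person = Named & Aged; // one value with both members

// A product, by contrast, keeps two copies distinct:
type TwoNamed = [Named, Named];

const p: Person = { name: "ada", age: 36 };
const pair: TwoNamed = [{ name: "ada" }, { name: "grace" }];
```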
There is so much confusion in this subthread that I feel compelled to correct some of it:
Direct sums (⊕), tensor products (⊗), direct products (×), unions (|) and intersections (&) are all different from each other.
- (A ⊕ B) ⊕ C and A ⊕ (B ⊕ C) are naturally isomorphic, A ⊕ B and B ⊕ A are naturally isomorphic, etc. None of them is idempotent.
- (A | B) | C and A | (B | C) are the same type, A | B and B | A are the same type, A | A and A are the same type, etc.
- A ⊗ (B ⊕ C) and (A ⊗ B) ⊕ (A ⊗ C) are naturally isomorphic. A & (B | C) and (A & B) | (A & C) are the same type, etc.
- If A and B are isomorphic, and so are C and D, then so are A ⊕ C and B ⊕ D, etc. Category theorists call this “not being evil”.
I think a plain old C struct is a product type, and tagged unions are sum types.
Edit: nevermind, I’m very late to the party. :)
I suggest this link for a deeper and more exemplified comparison: https://djcordhose.github.io/flow-vs-typescript/flow-typescript-2.html#/
I work mostly in Haskell or JavaScript although as I get more into data science I anticipate Python to take up more usage. ELisp because Emacs. With Haskell I start with types.
Emacs as my “IDE” unless I can’t make changes to a file “on save” as some open source projects don’t control for whitespace on the end of lines and such. If it’s not Emacs it’s probably Sublime Text.
ITerm2 (because Mac) and zsh are where I spend my time when I’m not in Emacs.
Pretty much everywhere I do work uses git in some form (GitHub, Phabricator, etc).
If I control a project end to end my stack looks like:
Because I bias towards starting with microservices I’m also pretty set on automating as much as possible. Infrakit is interesting in the “server automation” space since it can use Terraform to set up the servers. CI/CD, lerna-semantic-release for JS-based monorepo stuff, etc.
I feel like my workflow is different for each isolatable portion of the stack, so I won’t list them all since this is getting long :)
From the comments (with no response by the author):
I might be missing something, but couldn’t your issues with Octo on Node.js be solved simply by using the Cluster module native to Node? From what you describe as your immediate advantage for using Golang, that sounds like exactly what Cluster does for Node.
Can anyone comment?
Maybe in the sense that having more processes would’ve reduced load on individual processes but their issues were with inconsistent timeouts under load which wouldn’t have “gone away” using Cluster. I don’t think that their use of Cluster would have been much more effective than just deploying more instances.
No matter how hard I tried, I couldn’t get Octo to timeout properly and I couldn’t reduce the slow down during request spikes.
but if you look at the docs for setTimeout it is clear that there are no guarantees about the timing of callbacks.
The callback will likely not be invoked in precisely delay milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering.
In their case, it seems switching to any language with stricter timeout guarantees would’ve had the same effect. I believe Go’s time.After is like this.
In general, canceling work in Node is really difficult due to asynchronous callbacks - you generally can’t access a function in between when you call it and when a callback gets hit. Go makes this much easier: at least if you are using a context.Context, the downstream work gets canceled as well - all it has to do is listen for an event on the channel returned by ctx.Done(). Note contexts are coming to the database/sql package in Go 1.8.
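On the Node side, the closest analogue I know of to listening on ctx.Done is cooperative checking of an AbortSignal (AbortController has been built into Node since v15). A minimal sketch, with a hypothetical chunked job standing in for real work:

```typescript
// Go-style cooperative cancellation in Node via AbortController.
// The chunked job checks the signal between chunks, much like Go
// code selecting on ctx.Done() between units of work.
function chunkedJob(signal: AbortSignal, chunks: number, onChunk: () => void): number {
  let done = 0;
  for (let i = 0; i < chunks; i++) {
    if (signal.aborted) break; // cooperative cancellation point
    done++;
    onChunk();
  }
  return done;
}

const controller = new AbortController();
let seen = 0;
const completed = chunkedJob(controller.signal, 100, () => {
  seen++;
  if (seen === 3) controller.abort(); // cancel after three chunks
});
// The fourth iteration sees the aborted signal and stops early.
```

The limitation the parent comment describes still applies: this only cancels work at points where the code explicitly checks the signal, not in the middle of an already-queued callback.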
I wrote more about this here: https://kev.inburke.com/kevin/logrole-api-client-speed/
To clarify: once something is on the queue for the Node event loop, it’s going to get processed eventually.
You can cancel outstanding requests before that happens (e.g. by scheduling a ‘cancel’ operation for 200ms in the future), but that pushes the ‘cancel’ operation onto the back of the event queue after 200ms. If your server is heavily loaded, that event might not get to the front of the queue until 400ms has passed.
By default Node runs a single thread in a single process. The cluster module lets you run more processes. It depends on where the bottleneck was in their system and whether each machine had additional capacity that wasn’t being utilized. If a single process is already using all of the RAM, or the machine only has a single CPU, it might not help much.
Writing a parser for the GraphQL simplified schema language for a Haskell GraphQL implementation. Schemas will be written like this:
type Query {
  hero(episode: Episode): Character
  droid(id: ID!): Droid
}

type Character {
  name: String
}

type Droid {
  name: String
}
This is a departure from my previous approach which intended to use native Haskell data structures to construct a schema and deliver introspection information, etc. I believe this new approach to be more easily implemented in conjunction with the query language and resolvers. Then I might be able to layer a more interesting DX later.
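As an illustration of the parsing idea (the real implementation is Haskell; everything below is a made-up TypeScript sketch that only handles plain `field: Type` lines, not arguments):

```typescript
// Toy parser for `type Name { field: Type }` blocks of the
// simplified schema language. Illustrative only.
interface Field { name: string; type: string; }
interface TypeDef { name: string; fields: Field[]; }

function parseSchema(src: string): TypeDef[] {
  const defs: TypeDef[] = [];
  const typeRe = /type\s+(\w+)\s*\{([^}]*)\}/g;
  let m: RegExpExecArray | null;
  while ((m = typeRe.exec(src)) !== null) {
    const fields = m[2]
      .split("\n")
      .map((line) => line.trim())
      .filter((line) => line.includes(":"))
      .map((line) => {
        const [lhs, rhs] = line.split(":").map((s) => s.trim());
        return { name: lhs, type: rhs };
      });
    defs.push({ name: m[1], fields });
  }
  return defs;
}

const parsed = parseSchema("type Droid { name: String }");
```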
I accepted an offer to work at Dropbox, so am taking November off to work on side projects and develop a running habit.
I’ve noticed a couple errors already, such as this line about the kinds of dependencies available. (There are actually dev, peer, bundled and optional in addition to regular dependencies)
There are 2 kinds of package dependencies, “dependencies” and “devDependencies”.
That, combined with the complexity introduced in step 3 by using Gulp instead of package.json scripts, makes me hesitant to recommend this guide to anyone. For example, this would be the package.json equivalent of the Gulp config:
{
  "scripts": {
    "clean": "rm -r lib/*",
    "build": "babel src --out-dir lib",
    "watch": "babel -w src --out-dir lib",
    "start": "node lib"
  }
}
It seems to continue the complexity theme with the webpack config as well.
The author glances off some good ideas here (if you can get past the presentation), but I feel they come to the wrong conclusion. The conclusion seems to be “use Rails like we did in 2006” with some other backing service(s). I really wouldn’t fault anyone for choosing this path, especially if they don’t have a lot of modern UI experience, but personally I’ll be sticking with my Universal React application (and mobile, etc. UIs), which fetches from a GraphQL endpoint.
The way I’m suggesting to use Rails is completely different from what it was in 2006. Back then, Rails was the full stack: the database, logic, templates, etc., all went into the same package. What I’m proposing now is to use Rails just as the replacement for a SPA, while still keeping the back-end and front-end separation in place. The reason for this is that full-stack frameworks can offer a more pleasant developer experience (if you get rid of the cruft) compared to Gulp/Webpack, which is hell.
The point is, there’s an alternative to the current JS-React-Universal-SPA horror show; just because these tools aren’t making headlines doesn’t mean they’re stuck in the stone age. I’ve been developing quite nicely using a Scala backend and a Rails frontend that uses Her to call the REST API and react-rails to render the UI using React and TypeScript. Granted, there are some non-AJAX calls here and there, and it won’t ever be as quick as a full-fledged SPA, but I have functional SEO, routing, asset pipelining, functioning dependency management, deployment, and access to the Ruby ecosystem (not that JS is doing badly in that area).
The point is, there’s an alternative to the current JS-React-Universal-SPA horror show; just because these tools aren’t making headlines doesn’t mean they’re stuck in the stone age.
I respect your opinion of Gulp/etc., but it’s disingenuous to call it a horror show. You really aren’t making any points against Universal JS apps, and while I respect that your experience with Universal JS was poor, that is not everyone’s experience.
What I’m proposing now is to just use Rails as the replacement for a SPA, while still keeping the back-end and front-end separation in place.
I agree that having a dedicated UI server is a good idea in many cases (in fact, it’s the same architecture I take in my Universal JS apps).
…I’ve been developing quite nicely using a Scala backend and a Rails frontend… Granted, there are some non-AJAX calls here and there, and it won’t ever be as quick as a full-fledged SPA, but I have functional SEO, routing, asset pipelining, functioning dependency management, deployment, and access to the Ruby ecosystem (not that JS is doing badly in that area).
I would have liked to read more about your modern Rails/Scala setup in this post. Honestly you could probably have cut out the entire history lesson part and just focused on why Scala/Rails is an awesome combination for modern web dev. I also encourage you to look into GraphQL as a replacement for REST for communication between your UI and backend. There’s a quite nice Scala implementation: https://github.com/sangria-graphql/sangria
I would have liked to read more about your modern Rails/Scala setup in this post.
Yeah, I’m working on an example that does what I mentioned.
Honestly you could probably have cut out the entire history lesson part and just focused on why Scala/Rails is an awesome combination for modern web dev.
Yeah, uh, about that. I needed something to put the solution into context; perhaps the execution wasn’t the best possible.
I take it you are using Sangria? What are your experiences?
I just got my static site generator working with “scaffolding plugins”, which means it’s only a bit of cleanup and some testing away from being able to support anything that compiles to JS and can render “universally”. The first POC is going to be a site using Preact, since it’s so similar to the first React/Relay scaffolding. Then I’m going to turn an eye to something a bit more “difficult” to integrate, like PureScript, and continue on with Vue.js, GHCJS, and other examples.
The best part is that all of the “frontends” will have the same data-layer (GraphQL), so any advancements in data types, image processing, generating search indexes, etc can be shared by every single LEO site.
Yesterday 33 boats sailed into the port and 54 boats left it. Yesterday at noon there were 40 boats in the port. How many boats were still in the port yesterday evening?
I recognize this framing as the kind of “standardized test” question you were expected to “fill in the blanks” on when I went through school. Given such a problem, an answer is expected even when the information contained in the question is too vague or incomplete to produce one. If challenged, you’d most commonly be marked wrong and told your reasoning was wrong, often with an angry teacher on top. Answering this type of question “correctly” often required looking at the multiple-choice options and deducing the intended framing from the answer that seemed most correct.
Compare this to how a similar problem may be handled in a university mathematics education.
I remember on my linear algebra final, which was my last final of the quarter, that I forgot the method to solve one of the questions. After trying for some time to recall the mechanics of finding the solution, I opted to derive a best guess at how to get the answer, along with a short explanation of what I had done and what reasoning I used to derive that method. This answer was accepted for partial credit (I believe I got the answer slightly wrong, but was suitably close and provided sufficient explanation as to convince the professor that the problem was one of forgetfulness rather than lack of facility with the material).
The thing to note is that I understood myself to have the freedom to do that. The class was not such that I felt forced to use the method which had been prescribed, and it was clear the intent of the exam was not simply to get the right answer in the exact mechanical way the professor intended it to be had, but to illustrate that I had actually become literate in the language (the mathematical objects, axioms, theorems, and processes) of linear algebra.
intent of the exam was not simply to get the right answer in the exact mechanical way the professor intended it to be had, but to illustrate that I had actually become literate in the language
That was the core feature of my post-calc math courses in the process of my math minor.
Deconstructing the static site generator I built to enable it to run on multiple “backends” and arbitrary “frontends”. This will mean it is no longer tied to React/Relay and can use anything to render routes (pug, Preact, PureScript, etc). Step 1 is pulling out a package that turns a directory of files into a GraphQL API/Schema.
I use a CODE keyboard currently. My main complaint is that I don’t have rainbow LEDs for it. I also used to use a Kinesis Freestyle, but it just didn’t work out long-term.
Most pressing for me in Haskell is the lack of Hystrix/Zipkin-type libs. Circuit breakers, distributed tracing, etc. As much as the string issues are annoying, they really don’t hold me back. Having best in class “architectural” libs would make my life significantly better I think. That said, it’s the most pressing from my personal perspective, not that of a beginner, etc.
JavaScript… Perhaps Promises. I think Generators need a boost in awareness and redux-saga type libs so that code isn’t littered with Promise/Async-Await (which tend to cause errors to “disappear”).
for $work:
Shake: I’m finishing up the MVP on a Shake-based build system. The purpose is to isolate and run builds for a (multi-language) monorepo. We’re using containers to isolate the builds, but are giving a significant amount of trust to the first users of the system. It also has a responsibility to produce Docker artifacts. I’d appreciate being pointed to any papers in this area to inform future work.
UI: On the UI side, I’m introducing Flow to a Relay-based application, as well as shipping a component development environment for an internal component library. My first thought for the dev env was carte-blanche, but it is using Webpack betas and I don’t have time to dig that deep into the internals to fix it. Thus I landed on react-storybook, which has a slightly less satisfying “API” (sample below) but is much more consistently developed, and I can depend on it to not break as often.
import React from 'react';
import { storiesOf, action } from '@kadira/storybook';

storiesOf('Button', module)
  .add('with text', () => (
    <button onClick={action('clicked')}>Hello Button</button>
  ))
  .add('with some emoji', () => (
    <button onClick={action('clicked')}>😀 😎 👍 💯</button>
  ));
for $not-work:
Superhuman Registry: Building a GADT to define the API to implement multiple backends for my docker-registry v2 implementation, much like the users package has done. The first major backend is going to be Postgres-based and use the large-object support I’ve been prototyping.
Personal site: I’m trying to put some time into designing my personal site now that it’s readily deployable via the static site generator that I wrote. Historically I’ve done design work in Photoshop, but I’m trying to work in Sketch for this project. I also want to get the deployment onto CI, and am toying with the idea of splitting the content out from the UI code. That will give me ~3 branches on GitHub: master for deployment, leo for the UI code, and data for the markdown files.
I’ve been writing in JS for years, and it’s been my main working language for the last… 7 years? The thing I like the least about ES6 is the fat arrow syntax. It makes things hard to scan, and that makes it bad code IMO. Sure, you save some keystrokes when writing it, but it’s easier to miss when you’re looking through someone else’s code. I don’t think the class syntax is that bad; it’s pretty easily readable.
The examples provided in the link are pretty abysmal and contrived. If you look at real world code using the class keyword, it’s much less goofy.
The fat arrow may take some getting used to but wait until you get to savor all those bytes you would’ve lost to function
I’m not sure if they do already, but I’m sure there are minifier flags you will be able to use that will convert them for you. You know, since they make it unreadable anyway :P
Not only that, but “this” sometimes doesn’t change (example: Promises IIRC, or maybe it was sockets). I think they should’ve kept .bind/.call/.apply(this) instead of fat arrows too. They are also thinking of replacing bind/call/apply with ::.
They are thinking of replacing bind/call/apply with :: also
I did not know this, but it’s good. I was actually thinking of doing the same as a babel add-on, glad I had the same syntax in mind, although my semantics are likely different; I want foo::bar === foo::bar, which is exactly why bind is not IMO the correct solution.
Better to have new syntax such as fat-arrows for “sensible this” and “:: for fixed bind” than to change the behaviour of existing code in subtle ways.
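For anyone following along, a sketch of the “sensible this” behavior being discussed (the Counter class is made up for illustration):

```typescript
// Fat arrows capture `this` lexically; plain functions get `this`
// from the call site. That's the semantic (not just cosmetic) difference.
class Counter {
  count = 0;

  // Arrow property: `this` is always this Counter instance.
  incArrow = () => { this.count++; };

  // Plain method: `this` depends on how it's called.
  incMethod() { this.count++; }
}

const c = new Counter();
const detached = c.incArrow;
detached();      // still increments c: `this` was captured at creation
c.incMethod();   // fine when invoked as a method

// const broken = c.incMethod; broken(); // `this` would be undefined here

// Pre-arrows you'd write c.incMethod.bind(c); note that bind returns a
// new function every call, which is the foo::bar === foo::bar concern
// mentioned above.
```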
Building a Docker Registry and blogging about it.
I’m pretty close to having a working image upload. The code handles the GET/POST/PATCH/PUT/HEAD sequence for resumable layer uploads, even if the logic is minimal (read: incomplete). I’m currently spending some time stepping through the requests coming from the Docker client. After layers, I’ll have to deal with manifests.
After having worked in both type systems, I think Flow is the better option. I’ve come to distrust TypeScript when considering whether the type system is covering my code appropriately. That makes it feel like a very bulky documentation system compared to Flow, and leads to a more manual working relationship with the compiler to ensure correctly behaving code.