I’ve been trying to develop a side hobby of building guitars. This weekend I’m working on a reproduction of a 6th century Germanic instrument called a lyre. My biggest challenge over the weekend will be gluing the two halves of the soundboard together, which so far I’ve been failing at. It requires planing two 24” long boards along a 1/8” wide edge to get a perfectly flat seam. It has led me down a pretty deep rabbit hole of learning about hand planes and blade sharpening, but it’s all pretty rewarding.
In 2015 we dropped Grunt and Gulp in two projects I was leading in favour of npm scripts, and that decision has worked out well for us. Both projects are front-end applications with node api servers. We support live reloading in dev, pull down translations from transifex during builds, generate language-specific js bundles, and can build our native (Cordova) apps for iOS and Android, all from our npm scripts. Everything you’d expect to do in any other build tool.
What I prefer about plain npm scripts is that there’s no tool-specific gotchas or knowledge required to make improvements to our build system. Well, that’s not entirely true because we use Webpack for our front-end bundles. Aside from Webpack, then, if you can write node.js, you can add new build commands without having to wrap your head around the syntax for other build systems. This saves us some effort when onboarding new employees, and I like that I don’t have to keep going back to the Grunt or Gulp docs to figure out how to do something new.
Another great plus for me is that we no longer have to install any npm packages globally, so it’s really easy for us to change Cordova versions between projects or test new versions. Just git checkout and yarn install. The npm scripts can run Cordova commands as if it were installed globally, so we don’t have any issues from accidentally building our apps with the wrong version of Cordova anymore.
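For illustration, a package.json along these lines is all it takes (the script names and version here are hypothetical, not our actual config). npm puts node_modules/.bin on the PATH when running scripts, so the locally installed cordova binary gets picked up:

```json
{
  "scripts": {
    "start": "node server.js",
    "build:ios": "cordova build ios",
    "build:android": "cordova build android"
  },
  "devDependencies": {
    "cordova": "^8.1.2"
  }
}
```

Then `yarn install && yarn run build:ios` always uses the Cordova version pinned in that project’s lockfile.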
It depends on what you’re trying to deploy and what constraints you have; there isn’t one magic bullet. One pipeline I was especially proud of for a Python app I wrote at Fog Creek worked like this:
1. Export the code at a given revision, say 1a2b3c4d. We used Mercurial, not Git, so the command was hg archive -t tbz2 -R /path/to/bare/repo -r 1a2b3c4d, but you can do the same in Git.
2. Extract that archive to /srv/apps/myapp/1a2b3c4d.
3. Hash requirements.txt, and make a new virtualenv if necessary in /srv/virtualenvs/<sha1 of requirements.txt> on each server hosting the app.
4. Check out the configuration for that revision to /srv/config/myapp/1a2b3c4d. Configs are generally stored outside the actual app repo for security reasons, even in a company that otherwise uses monolithic repositories, so the SHA here matches the version of the app designed to consume this config info, not the SHA of the config info itself. (Which should also make sense intuitively, since you may need to reconfigure a running app without deploying a new version.)
5. Spin up a server, 1a2b3c4d.myapp.server.internal, that serves myapp at revision 1a2b3c4d, and run tests against it.
6. Flip default.myapp.server.internal to point to 1a2b3c4d.myapp.server.internal and rerun tests.
7. If anything goes wrong, flip default.myapp.server.internal back to the old version.

Now, that’s great for an app that’s theoretically aiming for five-nines uptime and full replaceability. But the deploy process for my blog is ultimately really just rsync -avz --delete. It really just comes down to what you’re trying to do and what your constraints are.
I doubt you’ll find consistent views, which makes that the opposite of a stupid question.
My ideal deployment pipeline looks something like the following:
Note that I’m coming at this from the server-side perspective; “deploy into” means something different for desktop/client software, but I think the overall flow should still work (though I’ve never professionally developed client software, so I don’t know for sure).
We do:
1. hg archive -r 1a2b3c4d build/
2. docker build -t $(JOB_NAME):$(BUILD_NUMBER) . and push it to our internal docker registry.
3. sed -e "s/@@BUILD_NUMBER@@/$(BUILD_NUMBER)/g" $(JOB_NAME).nomad.sed >$(JOB_NAME).nomad

Nomad will handle dumping Vault secrets, config information, etc. from the template directive in the config file. So configuration happens outside of the repo, and lives in Vault and Consul.
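To make the sed step concrete, here’s a runnable stand-in (the template contents below are a toy placeholder, not a full Nomad job file):

```shell
# run in a scratch directory so nothing is left behind
cd "$(mktemp -d)"
BUILD_NUMBER=42
JOB_NAME=myapp

# stand-in for the checked-in $(JOB_NAME).nomad.sed template
printf 'image = "%s:@@BUILD_NUMBER@@"\n' "$JOB_NAME" > "$JOB_NAME.nomad.sed"

# the actual templating step: fill in the placeholder
sed -e "s/@@BUILD_NUMBER@@/${BUILD_NUMBER}/g" "$JOB_NAME.nomad.sed" > "$JOB_NAME.nomad"
cat "$JOB_NAME.nomad"   # image = "myapp:42"
```

The resulting .nomad file is what gets submitted, so every deployed job is tied to an exact build number.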
You can tell by the env variables that we use Jenkins :) Different CI/CD systems will have different variables. If you’re unfamiliar with Jenkins, BUILD_NUMBER is just an integer count of how many builds Jenkins has done for that job, and JOB_NAME is just the name you gave the job inside Jenkins.
This is way off topic, but I’d love to hear why you went with Nomad and how it’s been working for you. It seems to fill the same niche as Kubernetes, but I hear practically nothing about it—even at shops using Packer, Terraform, and other Hashicorp products.
We started with Nomad before Kubernetes was a huge thing, i.e. we heard about Nomad first. But I wouldn’t change that decision now, looking back. Kubernetes is complicated. Operationally it’s a giant pain. I mean it’s awesome, but it’s a maintenance burden. Nomad is operationally simple.
Also, Nomad runs things outside of Docker just fine, so we can effectively replace supervisor, runit, systemd, etc. with Nomad. Not that I’d remotely suggest actually replacing systemd/PID 1 with Nomad, but all the daemons and services you normally run on top of your box can be put under Nomad, so you have one way of deploying, regardless of how things run. E.g. Postgres tends to work better on bare hardware, since it’s very resource intensive, but with the Nomad exec driver it runs on bare hardware under Nomad perfectly fine, and gives us one place to handle logs, service discovery, process management, etc. I think maybe the newer versions of Kubernetes can sort of do that now, but I don’t think it’s remotely easy; then again, I don’t really keep up.
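For anyone who hasn’t seen it, a minimal exec-driver job looks roughly like this (names and paths made up for illustration):

```hcl
# hypothetical job: a plain binary, no Docker involved
job "hello-server" {
  datacenters = ["dc1"]
  group "app" {
    task "hello-server" {
      driver = "exec"
      config {
        command = "/usr/local/bin/hello-server"
      }
    }
  }
}
```

Swap the driver and config block and the same file shape deploys a Docker container instead, which is what I mean by one way of deploying.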
But mostly it’s the maintenance burden. I’ve never heard anyone say Kubernetes is easy to set up or babysit. Nomad is ridiculously easy to babysit. It’s the same reason Go is popular: it’s a fairly boring, simple language, complexity-wise. That is its main feature.
Do the same as the previous step, but for the environment we will be running in (dev, test, prod), if required.
Could you elaborate on this step? This is the one that confuses me the most all the time…
Inside the Jenkins job config we have an env variable called MODE, an enum: one of dev, test, or prod.
Maybe you can derive it from the job name, but the point is you need one place to define whether it will run in dev/test/prod mode.
So if I NEED to build differently for dev, test, or prod (say, for new dependencies coming in or something), I can.
That same MODE env variable is pushed into the Nomad config: env { MODE = "dev" }. It’s put there by sed, identically to how I put in the $(BUILD_NUMBER).
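So the checked-in template might contain a fragment like this (a hypothetical sketch; sed fills in the placeholders before the job is submitted):

```hcl
# hypothetical excerpt of myapp.nomad.sed, before substitution
task "myapp" {
  driver = "docker"
  config {
    image = "registry.internal/myapp:@@BUILD_NUMBER@@"
  }
  env {
    MODE = "@@MODE@@"
  }
}
```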
And also, if there are config changes needed to the Nomad config file based on the environment, say the template needs to change to pull from the ‘dev’ config store instead of the ‘prod’ config store, or it gets a development Vault policy instead of a production one, etc., I also do these with sed. You could use consul-template or some other templating language if you wanted. Why sed? Because it’s always there and very reliable; it’s had 40 years of battle testing.
So when the Nomad job starts, it will be in the process’s environment. The program can then, if needed, act based on the mode in which it’s running, like turning on feature flags under testing or something.
Obviously, all of these mode-specific changes should be done sparingly; you want dev, test, and prod to behave as identically as possible, but there are always gotchas here and there.
Let me know if you have further questions!
What does a great deployment pipeline look like?
I do a “git push” from the development box into a test repo on the server. There, a post-update hook checks out the files and does any other required operation, after which it runs some quick tests. If those tests pass, the hook pushes to the production repo, where another post-update hook does the needful, including a true graceful reload of the application servers.
If those tests fail, I get an email and the buggy code doesn’t get into production. The fact that no other developer can push their code into production while the codebase is buggy is considered a feature.
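The whole two-repo flow can be sketched end-to-end in a throwaway directory (quick_tests.sh and the paths are stand-ins; the real hooks run a proper test suite and a graceful reload rather than forwarding blindly):

```shell
set -eu
TOP=$(mktemp -d)

git init -q --bare "$TOP/test.git"
git init -q --bare "$TOP/prod.git"
mkdir "$TOP/checkout"

# post-update hook on the test repo: check out, run quick tests, forward to prod
cat > "$TOP/test.git/hooks/post-update" <<EOF
#!/bin/sh
set -e
GIT_WORK_TREE="$TOP/checkout" git checkout -f master
cd "$TOP/checkout"
sh ./quick_tests.sh
git --git-dir="$TOP/test.git" push -q "$TOP/prod.git" master
EOF
chmod +x "$TOP/test.git/hooks/post-update"

# a developer commit with a passing quick test, pushed into the test repo
git init -q "$TOP/dev"
cd "$TOP/dev"
echo hello > app.txt
echo 'exit 0' > quick_tests.sh
git add .
git -c user.email=dev@example.com -c user.name=dev commit -qm 'initial'
git push -q "$TOP/test.git" HEAD:master
```

After the push, prod.git has the commit only because quick_tests.sh exited zero; make it `exit 1` and production never sees the change.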
Since I expect continuous integration to look like my setup, I don’t see the point of out-of-band testing that tells you that the code that reached production a few minutes ago is broken.
The setup we use is not even advanced, but simply resilient against all the annoyances we’ve encountered over time running in production.
I don’t really understand the description underneath “the right pattern” in the article. It seems weird to have a deploy tree you reuse every time.
Make a clean checkout every time. You can still use a local git mirror to save on data fetched. Jenkins does this right, as long as you add the cleanup step in checkout behaviour.
From there, build a package, and describe the environment it runs in as best as possible. Or just make fewer assumptions about the environment.
This is where we use a lot of Docker. The learning curve is steep, it’s not always easy, there are trade offs. But it forces you to think about your env, and the versions of your code are already nicely contained in the image.
(Another common path is unpacking in subdirs of a ‘versions’ dir, then having a ‘current’ symlink you can swap. I believe this is what Capistrano does, mentioned in the article. Expect trouble if you’re deploying PHP.)
I’ll also agree with the article that you should be able to identify what you deploy. Stick something produced by git describe in a ‘version’ file at your package root.
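A minimal way to do that (the repo contents and file names here are just for illustration):

```shell
# build a throwaway repo so this runs anywhere
TOP=$(mktemp -d)
git init -q "$TOP"
cd "$TOP"
echo demo > app.txt
git add app.txt
git -c user.email=ci@example.com -c user.name=ci commit -qm 'build'

# stamp the package root; --always falls back to the abbreviated
# commit hash when no tags exist, --dirty flags uncommitted changes
git describe --always --dirty > version
cat version
```

With tags in place you get the friendlier `v1.2.0-3-gabc1234` form, which tells you both the release and the exact commit.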
Maybe I’m missing a lot here, but I consider these project-specific details you just have to wrestle with in order to find what works. I’ve yet to find a reason to look into more fancy stuff like Kubernetes and whatnot.
I think the point kaiju’s thread wants to make is that you shouldn’t be deploying from your local machine, since every developer’s environment will differ slightly, and those artifacts might cause a bad build when sent to production. I believe the normal way is to have the shared repo server build, test, and deploy on a push hook, so that the environment is the same each time.
This article (http://hhvm.com/blog/2017/09/18/the-future-of-hhvm.html) is quite recent, and refers back to the link here, so I’m guessing this is what prompted this post?
I play guitar and bass. Or really, I learn to play guitar and bass. I played saxophone for 10 years previously, and by comparison there seems to be so much to learn about and around guitar that I’ll be a perpetual student. This might be a long shot, but if there are any other musicians here from the Toronto area, I’d be down to meet up and jam.
The other thing I’m interested in is climate change. A couple months ago I decided that understanding and taking some action against climate change is important to me. Right now, I’m doing a lot of research and reading to try and understand what kinds of problems I might be able to help with.
I’d love to hear your conclusions from your climate-change impact investigations whenever you’ve made them :) This is also something I’m interested in.
For sure! One of the things that I’ve noticed from investigations so far is that there are a lot of people who care about this but don’t know what to do. I’m following a 100:10:1 process right now, generating a big list of ideas, then picking a smaller subset to research in depth.
An interesting place to start might be this post from Bret Victor: http://worrydream.com/#!/ClimateChange
We recently had a potential client approach our sales team about purchasing our SaaS product, but the client had a hard requirement that our product work over IPv6 only. We were IPv4-only at the time, so we decided to investigate how easy it would be to add v6 support. Turns out, it’s a mixed bag.
It was easy enough to add v6 support to our application - all we needed to do was add an AAAA record so that our domain properly resolved to the v6 address of our load balancer. But a number of the external services that we depend on haven’t migrated yet, so our application won’t work properly unless we can convince them to migrate as well, or we add a proxy layer to our stack.
And it turned out that testing our changes was a challenge because the ISP for our office doesn’t support IPv6. In fact, my ISP at home doesn’t support v6 either. We also tried from a BrowserStack environment, but their VMs are IPv4-only as well. (One thing we have yet to try is an Amazon Workspace.)
In the end, we were able to test when one of our developers noticed that his phone supported IPv6. We created a wi-fi hotspot on the phone, connected to that, and turned off IPv4 on the test machine. But wow, I’m surprised at how far away the world still seems to be from supporting IPv6.
As another data point, when I was at RIM, we built the first touchscreen BlackBerry - the much maligned Storm - in 9 or 10 months, if I recall correctly. A normal dev cycle for a device at that time was about 12 months.
The reason for the rush on the Storm was because the iPhone was an AT&T exclusive, and Verizon was pressuring us to make an iPhone competitor for them as soon as possible.
My favorite tactic for “killing” these is (to use the example from the post):
# e.g. "hello everyone" => "Hello Everyone"
def upcase_words(sentence)
  sentence.split(' ').map { |x| x[0..0].upcase << x[1..-1] }.join(' ')
end
In an ideal world the name is clear enough that someone reading the code at the call site understands what’s happening, and if they don’t the example alongside the definition hopefully gets them there.
You mean
# e.g. "col1\tcol2\n ^ woah" => "Col1 Col2 ^ Woah"
Naming it hurts in this case, because the function does not do what you named it (e.g. in a string of tab-separated values, or a string where multiple spaces are used for formatting). If you had to name it, it would be better named as split_on_whitespace_then_upcase_first_letter_and_join or leave it unnamed and hope that everyone on your team knows that split in Ruby doesn’t work as expected.
The best solution is one that embodies exactly what you intend for it to do, i.e. substitute the first letter of each word with the upper case version of itself. In Ruby, that would be:
sentence.gsub(/(\b.)/) { |x| x.upcase }
If you had to name it, it would be better named as split_on_whitespace_then_upcase_first_letter_and_join or leave it unnamed and hope that everyone on your team knows that split in Ruby doesn’t work as expected.
I disagree. You should name functions and methods based on what they’re supposed to do. If it does something else, then everyone can see it is a bug.
I don’t agree with your naming system. I think the name of your function should describe what it does instead of how it does it. If your function name describes how it’s implemented, you have a leaky abstraction.
Among other benefits, giving it a name means we can explode the code without worrying about a few extra lines in the middle of the caller.
words = sentence.split ' '
words.each { |w| w[0] = w[0].upcase }
sentence = words.join ' '
Introducing a variable called ‘words’ is a solid hint about the unit we’re working with. We may not want to pollute the caller with a new variable, but in a subroutine that’s not a problem.
Naming it does help in this case, but mostly because the reader no longer has to scrutinize what it’s actually doing. Isn’t this sort of like polishing a turd?
That only masks the issue.
Any maintenance on that line will still have the same problems, whereas refactoring it to split it up into smaller segments AND giving it a name avoids that issue.
It gives the reader a good frame of reference for what the function’s doing. Context helps a lot when trying to read code, and although this isn’t as readable as it could be yet, it’s definitely a lot more readable than it would be without the function signature.
I’m continuing to teach myself about web performance testing. It’s weird that there are tons of blog posts and books about unit testing, integration testing, etc., but almost nothing I can find about performance. Anyway, the big breakthrough last week was a one-line change. We were already using New Relic for monitoring, and just updating from a year-old version of their node library gave me a bunch of more detailed measurements that show more accurately where our app spends most of its time.
Outside work, I’m still prepping for my talk at the local WebGL meetup. Last year I read Robert McKee’s book about screenwriting, Story, so I’m curious to see if applying some of those ideas to my presentation makes a more compelling talk.
Might be worth checking out Gatling for web performance testing. I’ve had good experience with it.
Of these books I’ve only read “Silence on the Wire” by Michal Zalewski. But that one alone I think is worth the $15 minimum of the bundle. It’s all about side-channel attacks and discusses things like timing attacks, monitoring network traffic from the blinking LED on the back of the NIC, or even reconstructing passwords from an audio recording of keyboard strokes.
bunnie’s book “Hacking the Xbox” is a classic, though he’s also made it available for free here: http://bunniefoo.com/nostarch/HackingTheXbox_Free.pdf
I’d also like to finally read “Hacking: The Art of Exploitation” and “The Smart Girl’s Guide to Privacy” - two books I’ve heard good things about, but never did get around to reading.
The rest of the books I haven’t heard of, but just those 4 I mentioned are probably worth your time.
It’s pay what you want for the first set of books, or $15 for all of them. Heck, I feel like I shouldn’t say this, and I don’t know you, but if you really have <$15 and promise to read at least part of each book I’ll buy you the whole bundle if you contact me privately.
Thank you for the kind offer. It really makes me feel like the world is a better place with so many generous people. I was sent a bundle in the mail, and I will definitely read every book. I have been dealing with some serious health issues and it is a struggle financially.
Some of the books are released under the Creative Commons and are available online, released for free legally by their authors/publishers. I know Automate the Boring Stuff (https://automatetheboringstuff.com/) and Hacking the Xbox (https://www.nostarch.com/xboxfree) are among those.
Thanks for the links! I am an advocate of free culture and it is great to know that more people are releasing works as CC.
First time posting in this thread for me!
At work, I’m trying to give myself a crash course in web application performance measurement, focusing mainly on the backend for now. I’m really starting from zero here. I know vaguely that I want to know how long the backend takes to respond to each request, and I’d like to know how long the requests to each of the endpoint’s dependencies take. I’d also like to figure out what specifically causes the exponential increase in response times as the API servers reach maximum capacity, and where the bottleneck comes from. It seems that there are almost no resources around explaining best practices for making these kinds of measurements.
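One zero-setup starting point I know of is curl’s built-in timing variables, which split a single request into DNS, connect, first-byte, and total time. (Here it runs against a throwaway local server so it’s self-contained; in practice you’d point it at a real endpoint.)

```shell
# serve an empty scratch directory on a local port as a stand-in backend
cd "$(mktemp -d)"
python3 -m http.server 8037 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# break one request down into its phases
TIMING=$(curl -s -o /dev/null \
  -w 'dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}' \
  http://127.0.0.1:8037/)
kill $SERVER_PID
echo "$TIMING"
```

A large gap between ttfb and connect usually means the time is going into the backend itself rather than the network, which helps decide where to instrument next.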
Outside of work, I’m busy preparing for a talk I’ll be giving at my city’s WebGL meetup next week. I’m working on documenting and refactoring a project that I wrote last year, which I’ll be using to explain how I created some of the effects I used.
Working on my multiplexing keyboard: it should be able to be plugged into multiple computers and switch between them, but also act as a USB hub, so devices like mice switch along with the keyboard, and data drives act as a net share between all connected computers. The design is split/ortholinear, but each module has a microcontroller sitting in it, so you can use as many as you want. I finished the file for cutting out the faceplate (https://gist.github.com/distransient/cbd4c9366d8084d01b4ca5e24182c00d) and now I’m working on figuring out what material I’m going to use to prototype with and what controllers I want to source.
That sounds really cool! I’d consider buying a product like that for my workbench at home where I find myself regularly switching between my desktop, laptop, RasPi, and Novena.
As a ‘dumb’ guy - at least in math - this is a story I wish I could have shared with my younger self. I think I was a lot like the second person in the story, who silently struggled and was too afraid to ask for help with the basics. I never could end up getting through the material, and never finished my degree. The terrible thing is that I think that anxiety leads to a vicious spiral. The worse you do, the worse you feel, and the harder it is to ask for help.
Things have turned out alright for me in the end, but I’ll always wonder “what if…?”
I composed an almost identical reply. It’s interesting how apparently this is a rather common story. In my case, this experience, oft repeated during my college career, has given me an almost neurotic relationship to the classroom that haunts me to this day.
I use a Leuchtturm notebook: http://www.amazon.com/Leuchtturm-Medium-Notebook-Squared-LBL12/dp/B002CV5H4Y
The pages are numbered, and it comes with a blank index at the beginning. It’s pretty awesome.
As for my “random note during the day”, I use Remember the Milk on my phone https://www.rememberthemilk.com. They have a single icon widget that lets you instantly jot down anything that gets added to a list for future review.
I’m glad you did post about your book! I made a New Year’s Resolution to learn about Haskell this year, so more learning materials are perfect.
Are there any lobste.rs here that have read an earlier release of this book and could compare it to Learn You a Haskell for Great Good - the book I had been planning to learn from?
Yes, it is night-and-day better in my opinion. LYAH is OK, it’s not bad by any means, but honestly I didn’t learn much from it.
I’d put it in as a nice toe-dipping book.
I’m biased, but here’s a post I wrote about learning materials shortly after I started the book: http://bitemyapp.com/posts/2014-12-31-functional-education.html
I’ve given a talk titled “How to learn Haskell in less than five years” as well: https://www.youtube.com/watch?v=Bg9ccYzMbxc
Yeah, the main problem with LYAH is the lack of exercises. I read the first few chapters of it but did not retain much. You can use it as a quick read on the subway, though.
Thanks for the post! You hit a couple of points there that really resonated with me, like the importance of exercises. I’m sold!
I’m looking for software developers on-site in Toronto, Canada.
We’re building a system that enables cryptographically secure chain-of-custody on distributed infrastructure without a ledger. We’re using some ideas from the cryptocurrency world, but we’re a traditionally-funded startup with quality investors and paying customers.
I’d love to hear from developers interested in DevOps work, front-end web and mobile in ClojureScript and React Native, and backend developers who are comfortable with distributed systems. Previous experience with Clojure, cryptography, and security would be an asset.
Let’s talk over e-mail - my address is in my profile.
“without a ledger.”
That part sounds refreshingly different after all the blockchain startups in that space.