I didn’t post this because I agree. It’s very relevant to software engineering. We should understand and be aware of culture, expectations, other opinions on how to get stuff done, what to be cautious of, etc. It’s also important to understand that this does not represent views of the entire community. Would like to see thoughts both from people who agree and those who disagree with this mindset, and why.
Can the downvoters explain how this is off topic to software engineering culture? Want to understand so I don’t post off topic in the future.
An American Sickness: How Healthcare Became Big Business and How You Can Take It Back. I’m only 3 chapters in but it’s very insightful if you want to understand the mess that is American Healthcare and how we got here. A bit depressing + infuriating but strongly recommend.
How would you describe the alternative desired state? That insecure protocols don’t exist? That engineers would have deeper knowledge of cryptography?
Distributions of major server software would ship with good configurations out of the box, relieving individual developers of the responsibility of configuring these things themselves.
https://caddyserver.com/ is a great example of this; you configure it to do what your app needs, all the TLS defaults are well curated.
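For instance (domain and port are placeholders, not anything from this thread), a complete Caddyfile for an app served over TLS can be as small as:

```caddyfile
example.com {
    reverse_proxy localhost:8080
}
```

Caddy obtains and renews the certificate itself and applies its curated TLS defaults; nothing cipher-related needs to be spelled out by the app developer.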
While I agree that a “reasonably secure default” should be standard, mostly you have to find a trade-off between security and compatibility. If you need to support IE8, there’s no way around SHA-1. If you want to support Windows XP or Android 2, there’s no hope at all. If you want it more secure (as of today), you fence out most Androids (except 4.x), Javas, IEs, mobile phones, and non-up-to-date browsers. Unfortunately, there is no one size fits all.
On the other hand, compatibility with older software is very easy to figure out (people see an error message), whereas insecure configuration appears to work perfectly fine. I also believe developers are more likely to know that they need to support some obsolete software (modern web development doesn’t “just work” on IE8 or Android 2) than about the newest TLS configuration options.
I think if you want that, we ought to have APIs that express things in terms of goals, instead of implementation details: ssl_ciphers +modern +ie8 maybe. Then it’s clear what needs to be changed to drop a platform, instead of it being a guessing game.
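Until syntax like that exists, the closest approximation I know of is profile-based generated config: Mozilla’s server-side TLS guidelines define “modern”, “intermediate”, and “old” compatibility targets that generator tools expand into concrete directives. An illustrative (not complete) nginx fragment for an intermediate-style target:

```nginx
# "intermediate"-style target: TLS 1.2+, keeps most older clients;
# a "modern" target would be stricter and fence more of them out.
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
```

Dropping a legacy platform then means regenerating against a stricter profile rather than hand-auditing a cipher string.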
I agree with this post, and yet Slack remains the path of least resistance. I finally broke down and created one for my main open source project. Since creating it a few weeks ago contributors have started trickling in and a community is slowly forming.
We can still export chat logs via bots and create the desired permalinks to discussions. Clojurians slack does this, although I’m not sure what the tool is that they use.
troll
I wasn’t trying to troll. I’m annoyed by the ridiculous 4/1 tradition and how it seriously makes me question anything I read on 4/1.
I signed up to see what it’s like, and I’m greeted with a dialogue that, in order to post, has a button labelled “Toot!”
I can’t shake the feeling that this could be really, really weird.
The explanation I was given is that birds tweet, while mastodons toot (like elephants, and other proboscideans). Apparently “trumpet” was also considered as the verb, but dismissed as unwieldy and possibly too grandiose.
I do note that the contemporary canonical reference on animal onomatopoeia agrees that “the elephant goes toot”.
My exact thought.
I think the web has become so ubiquitous that many FE engineers wouldn’t even consider writing a GUI in anything other than html/js/css. The popularity of tools, frameworks, conventions, experience, docs, community, etc all built around the web far outweighs native as far as I can tell. On the other hand, projects like React Native help to bridge this gap, so I’m not sure what the reasoning is.
Some is just ease of cross platform support I think. If they were writing for one platform it’s not clear they’d pick the web stack just for its tools/community. But Slack supports six client platforms: Windows, Linux, Mac, Android, iOS, and webapp. The solutions for that are either to write six different native apps, or use web tech, more or less. There’s also, conceptually, the category of cross-platform GUI toolkits that compile to code native for each platform, but those projects seem to be far from healthy these days (Tk? wxWindows?) and I don’t believe any has been extended to support all six of those platforms.
I can appreciate the time savings of just doing things once, but for a company like Slack, they should be able to toss a few billion at another dev team or six.
I wonder if Jenkins will ever escape its legacy of being slow, monolithic, heavy, old-school CI. This feels like a step in the right direction, but possibly just lipstick on a pig. Today we have quite a few modern alternatives (both oss and saas) that have been well-designed (both from a systems design and UX perspective) from the ground up without all the baggage. A few that come to mind: Drone, Travis, GitLab CI, CircleCI, Concourse, Shippable.
It already did. Setting up Jenkins with a Jenkinsfile and the pipeline plugin is quickly done, and it’s a nice tradeoff between an easy initial setup and a lot of options in the long run.
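For anyone who hasn’t seen one, a minimal declarative Jenkinsfile looks like this (stage names and build commands are placeholders for whatever your project actually runs):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
    }
}
```

Checked into the repo, it gives you versioned, reviewable CI config instead of click-configured jobs.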
There’s no CI server that handles inter-project dependencies and long build pipelines as well as Jenkins does.
I don’t agree. We’ve been using Jenkins 2.x for the last ~11 months and it’s been a nightmare. Jenkinsfile is a bug-ridden incomplete implementation of the Groovy language that errors on valid code constantly. This is consistent with its legacy and reputation.
Good point on inter-project deps; not something I need very often so I don’t get to take advantage of it.
I’ll have to agree on Jenkinsfiles being both great, and horrible.
I just debugged a problem that turned out to be a \ in an sh directive. Groovy is possibly the worst language I have ever used for string manipulations.
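As a sketch of the kind of trap I mean (file name and pattern invented for illustration): backslashes inside an sh step are unescaped once by Groovy’s string parser and again downstream, so matching one literal \ with grep can take four of them.

```groovy
// Groovy collapses \\\\ to \\; the shell's single quotes pass \\
// through to grep, whose regex engine reads \\ as one literal \.
sh "grep 'foo\\\\bar' file.txt"
```

Get any layer wrong and the step fails with an error pointing at perfectly valid-looking code.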
The fact that there’s a flag called “force” tells me it’s probably not a recommended workflow. In the context of distributed applications, I don’t think rewriting the history of a shared object is desirable.
It’s really useful in situations where you push a commit and then the tests fail, or you notice a typo, and you don’t want to litter the commit history. When you’re trying to figure out why someone did something it’s much more useful to review one commit that’s a larger change than seven small commits, most of which have poor commit messages (“WIP fix test”, “Fix test”, “Typo”, “Initial work on the controller” &c).
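Concretely, that cleanup is an interactive rebase before the PR. Here’s a throwaway demo (repo, file, and messages invented) that squashes the noisy commits into one; the sed trick just stands in for editing the todo list by hand:

```shell
# Build a throwaway repo with three messy commits, then squash them.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
for msg in 'Initial work on the controller' 'WIP fix test' 'Typo'; do
    echo "$msg" >> notes.txt
    git add notes.txt
    git commit -q -m "$msg"
done
# Mark every commit after the first as a fixup; normally you'd edit
# the todo list interactively in your editor instead.
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -q -i --root
git rev-list --count HEAD    # prints 1: the messy commits are now one
```

After rewriting like this, the branch has to be pushed with -f (or better, --force-with-lease).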
seven small commits, most of which have poor commit messages (“WIP fix test”, “Fix test”, “Typo”, “Initial work on the controller” &c)
Ideally the committer would cherry-pick or otherwise replay/rewrite these commits onto a new “this is really it” branch and open the pull request from that instead. I know that isn’t always the case, however, and if it’s a worldwide public repo you could get a really terrible pull request :)
It’s never appropriate for master (all objects are shared). It’s fine on private feature branches (no branch-specific objects are shared - only the parent history is). You never rewrite shared history; only the history that is local to your private branch.
Only if that branch wasn’t pushed before. If it’s out there, it’s out there. I understand why people do it, and I understand in which cases it’s okay to do it (by that logic); I just don’t agree.
Sure, once it’s pushed, it’s public. But that doesn’t mean anyone is basing their commits off it. It’s a process thing. Also, if you pushed to a private fork that no one has access to, then you’re guaranteed to be able to safely force push. --force-with-lease makes any force push safer in all of these cases.
Lots of options to build a team’s process around.
Seems a bit excessive to always force push IMO, even with --force-with-lease. Or did they mean that they always used --force-with-lease only when they otherwise would force push?
only when they otherwise would force push
This is how I read it.
It would make no sense to always force push :)
+1
Rebasing a feature branch on latest master is a common way to keep your branch up-to-date and your history linear. It avoids one or many merges from master into your branch and the eventual merge of that mess back into master. I’ve done this for years.
But when you benevolently “rewrite history” like this, you must use --force to push your branch (such as when you’re ready to open a pull request). I’ve always used git push -f to push my feature branch, quite sure that I’m not overwriting anything because I’m the only one working on it. But there’s no guarantee. I’ll start using --force-with-lease, just to be sure.
This is exactly how I operate; until now I’ve always carefully checked the output from the push to ensure the “from” commit matches my expectation, but it’s awkward and prone to error. This is a very nice addition.
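A throwaway demonstration of exactly what the lease guards against (paths, branch, and names are all invented): two clones of the same origin, where the stale clone’s force push is refused because origin moved after its last fetch.

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git clone -q "$tmp/origin.git" "$tmp/alice" 2>/dev/null
git clone -q "$tmp/origin.git" "$tmp/bob" 2>/dev/null
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
cd "$tmp/alice"
git commit -q --allow-empty -m base
git push -q origin HEAD:refs/heads/feature
cd "$tmp/bob"
git fetch -q origin
git checkout -q -b feature origin/feature
cd "$tmp/alice"
git commit -q --allow-empty -m extra
git push -q origin HEAD:refs/heads/feature    # origin/feature moves on
cd "$tmp/bob"
git commit -q --allow-empty --amend -m rewritten
if git push -q --force-with-lease origin feature 2>/dev/null; then
    outcome=clobbered      # plain --force would do this silently
else
    outcome=rejected       # stale lease: fetch and re-check first
fi
echo "$outcome"
```

Plain -f would have silently thrown away alice’s “extra” commit; the lease turns that into a visible rejection.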
Realised my cycling events this year aren’t as far away as I thought they were, and thus have ordered a few upgraded parts for the bike to get used to them ahead of time. This week will be split between getting back into a training regime and fettling the various upgrades onto the bike as they arrive. (Very interested to see what difference shortening my cranks 2.5mm makes.)
Also just ordered a Raspberry Pi 3, as not having to mess with wifi dongles is a major bonus, and it has Bluetooth LE. First project to be a “how many people in the house right now” API I think.
Also just ordered a Raspberry Pi 3, as not having to mess with wifi dongles is a major bonus, and it has Bluetooth LE
Have you checked out http://getchip.com/? They don’t ship until June 2016, but for $9 it’s pretty amazing. I ordered 5 last November.
Ah! I did see those at the time, but didn’t jump on that bandwagon. I got three Oaks a month or so ago but haven’t had a chance to play with them yet (firmware is still beta), but they should do a similar job I think.
Cool, I hadn’t seen that one. It’s great how ubiquitous micro-controllers with built-in wireless are becoming. I spent hours programming and soldering BT chips for Arduino Minis. Never again!
My CHIP from the crowd funding campaign turned up over a month ago - I need to sit down and play with it - it’s a neat device.
I’ve got less than 14 days to get my son’s BMX race ready (it’s missing wheels and cranks :~/) Looking forward to a busy year of BMX racing. Shorter cranks should give you more power - no excuse for not winning now!
Pardon my ignorance, but how do shorter cranks give more power? The tradeoff with a longer crank is less force required but over a greater distance correct? My recollection of classical mechanics is not great.
The impact of crank length on power is surprisingly complex [pdf]!
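One first-order sanity check (ignoring all the biomechanics the paper covers): pedal power is torque times angular velocity, and since pedal speed is $v = \omega L$, the crank length cancels out:

```latex
P = \tau\,\omega = (F L)\,\omega = F\,(\omega L) = F v
```

So at the same pedal force and the same foot speed, crank length drops out entirely; shorter cranks only “give more power” if they let you spin a higher cadence or push harder through better hip and knee angles, which is why the empirical answer ends up so complicated.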
This is the nature of cloud computing. If you can’t handle downtime you need multiple machines behind a load balancer. If you can’t handle data loss you need backups or data redundancy or both.
It’s not a question of “if” your server will disappear; it’s “when”.
This looks great! What about a feature where you can import your data from LinkedIn? I like keeping my LinkedIn as the “source of truth” for all my work history, skills, projects, etc, then using it to generate a resume. Open to thinking differently about this.
Hey! Thanks for the feedback. Yes, I’m thinking of implementing that particular feature, but I’d need to set up a server for it. Right now, it is hosted on GitHub Pages.