I’ve decided to get back into coding this week. I’ve tried many, many times since the ’80s, but I always seem to derail myself. I suppose I’m a glutton for punishment because I keep coming back to try again. Next year I will be 50 (yikes! I did not think I would be saying that so soon), so I’d like to actually build something of substance that I can be proud of.
I really enjoy talking to other coders, so I’m going to set my microphone back up and start screencasting/podcasting again. If anyone would like to be a guest or would like to suggest someone, just drop me a line at email@example.com. Website(s) will go live at http://coderpath.com and http://milesforrest.com when I’m ready to publish.
Wow. You’re old.
…and I’m not even smart or good looking either. Not like you, of course ;)
Those who do not understand WebApps are condemned to reinvent them, poorly.
I love Linux, but my machine is still a MacBook Pro using VMWare to run Linux. I started using Linux in 1999, but in 2006 I jumped to the Mac simply because I wanted access to better commercial software and because I was tired of the hardware not working 100% of the time.
I want to use Linux exclusively, and have had my eye on the exit door for quite some time, but I’m just not willing to spend my time tinkering on stuff. Linux works awesome on servers, and it is all I will use in addition to other *nix variants, but for now my GUI’s going to be a Mac.
I think this is a rational choice as a developer.
That said, I operate in a reverse fashion, and find no real harm to my day to day. I run Linux on the desktop, and then VMWare + Win10 for the occasional commercial app. The Windows-era holdovers for me include my Fujitsu scanner software, GotoMeeting, Apple iTunes, Adobe Photoshop CS, Adobe Acrobat, and the MS Office suite.
I find I boot my Windows 10 VM about once every few weeks, since 95%+ of the time, I can run on free software or non-free ports to Linux.
I have a dedicated Mac Mini that I run 24x7 because, frustratingly, there is some Apple software that I wish I could access via VM but can’t. This includes XCode, Keynote, and Pages. Lately, I use remote access software to use this stuff from my Linux machine in the odd moments that I need it. But given that Apple makes OSX so frustratingly hard to virtualize, and given that Apple has several important Apple-only commercial apps, I can see why many otherwise-Linux users feel forced into running OSX. I really hope these folks recognize their hand is forced more by Apple’s stance on virtualization and insistence on non-free development tools than by Linux’s historical hardware support.
Of all of these, XCode is the real holdover. Keynote and Pages have reasonable iOS and web (iCloud) editions, so with either a web browser or an iPad, one could bridge those somewhat easily. But XCode still requires OSX, and I imagine this ends up being a pretty serious deal breaker for native iOS developers. I really wish Apple would sponsor development of XCode-only OSX VMs the same way Microsoft does for the historical IE versions needed by web developers.
my machine is still a MacBook Pro using VMWare to run Linux.
I wish there were more programs that used Apple’s native Hypervisor.framework so you could run VMs without 3rd party kernel drivers.
So far only xhyve exists, but it’s full of rough edges. There was another app, Veertu, that was even on the app store, but they renamed it and changed its focus into some vaporware bullshit: https://veertu.com.
Unfortunately I don’t think Apple has a 1st party solution for virtual networking (they don’t even ship a tun/tap driver), but sometimes you can live without it. Apple does have a solution for VPNs, so you can install VPN programs that don’t need tun/tap drivers or root, but I don’t know yet if you can abuse that for virtual networking. I think you can, though.
VMware is still a dream compared to VirtualBox, which is a total shitshow, but even VMware has its warts. There’s always something wrong when you install VMware on a mac. Power consumption used to go through the roof. That’s sort of fixed now, but it has other problems. For example, merely installing VMware increases input lag. You don’t even have to be running any VMs. I am very sensitive to input lag, and it’s immediately obvious to me that there’s something wrong.
I haven’t tried Parallels. Since it’s consumer/Windows focused I doubt it’s what I need. For example vagrant support is simply not there and I doubt it can run OpenBSD/Solaris well, but I should try it one day. All these GUI apps make me nauseous. Xhyve has the best user experience of all (just a CLI program).
My work uses Parallels and I can’t stand using it for Linux. I don’t think it supports port forwarding like VirtualBox (and probably VMware). I couldn’t get it to allow me to SSH into the VM.
Parallels is also bad for power consumption. I would avoid it.
Thanks, awful as expected.
Is it time to move back to jQuery and Prototype.js? If these mysterious patents are about things like “virtual DOM”, comparing trees of state or something derived from FRP, then using Vue, Preact, Angular 2, Cycle, Riot, Elm, reflex-dom will infringe them too.
Then let’s wait 20-30 years until these patents expire and everyone finally can use these nice state-management things.
[Comment removed by author]
social contract of open source
This is something new.
New? Not really. Debian has a well-stated social contract:
Huh. It’s Debian’s “social contract”. Do you see the difference between somebody publishing “a set of commitments that we agree to abide by” and a non-existent, unspecified thing that the poster above requires Facebook to abide by just because they published and maintain some open source code?
So, the issue is that they deliberately added this “patent clause” to induce fear to everyone who thinks about suing Facebook? And not that using React is risky?
Does Facebook actually have patents covering React? I’ve looked around a few times and have never seen a link to an actual patent covering it. I would assume there’s gobs of prior art for anything going on in there.
AFAIK they’ve never stated publicly which ones, if any.
Submarine patents are a thing, sadly.
And yet, there are a number of big companies which undoubtedly have big legal teams, and which seem to be okay with using React somewhere. Just cherry-picking some from the list:
Airbnb, American Express, Chrysler, Atlassian, eBay, Expedia, Microsoft, NHL, Netflix, New York Times, Salesforce, Twitter, Visa, Walmart… At least some of these companies must have had their legal teams look at the license and decide it was okay to use React. Which makes me wonder if the hysteria (this is a bit of hyperbole, but it does seem to have some people really worked up) is justified.
You’re assuming they’re using the “off-the-shelf” license. There’s nothing preventing them from negotiating a different license with Facebook. Now, I haven’t seen anything showing that this has happened, but it’s a fairly common practice to have individualized contracts with traditional commercial software, so it wouldn’t shock me.
It would surprise me, though - why would Facebook enter into an agreement with these big-name companies that altered the React license out of Facebook’s favor? I don’t think all these companies did that (and I didn’t list every large or well-known company that’s on that link, by the way), and unless they’re paying FB to use React I just don’t know why FB legal would bother with all the work. Individual negotiations with legal teams at all these big companies to reach a mutually agreeable license, just so a dev team can use React? It seems really unlikely. Just as unlikely as all these companies paying FB to get some kind of commercial license for React - when there is no suggestion that such a thing exists.
I assume they are paying. Just because things don’t have a price list or an explicit offer of a commercial license doesn’t mean you can’t get one.
Right, I get that. I just don’t think it’s actually happening. Since there’s no evidence either way I guess we won’t be able to figure it out!
I have a grandfathered plan on GMail (used to be called “Google Apps for Your Domain”).
My first thought: “Wait, is this The Onion?” :)
I am convinced we keep making the same mistakes over and over again trying to teach software engineering, but never really get down to the nub of the problem: how to transfer tacit knowledge (https://en.wikipedia.org/wiki/Tacit_knowledge).
I think the answer lies somewhere in his observation that “Process can’t be taught: it has to be mentored”. Maybe we need to learn from the trades, where a student alternates working alongside a journeyman coder and formal book learning.
It doesn’t even have to be tacit, it can just require interactive conversation: http://pages.cs.wisc.edu/~remzi/Naur.pdf
I took a similar road two years ago after swapping HTTP + SSE for WebSockets using Erlang’s cowboy. Being able to publish and subscribe to topics gives you a better starting point than raw WebSockets. My main issue with MQTT was that developers don’t want to deal with another protocol apart from HTTP.
If you have some time I recommend that you check the best Erlang MQTT broker I know: VerneMQ. You can easily create clusters that deal with network partitions.
My main issue with MQTT was that developers don’t want to deal with another protocol apart from HTTP.
I have a new service I will be launching sometime in the new year, and I want to promote IPv6 by offering two ways to get premium access: 1) pay a monthly fee, or 2) connect to the site over IPv6. All anyone would have to do to “pay” for the service is log in via IPv6 at least once a month, and their account would be accessible from IPv4 as well.
Perhaps this strategy or something similar might encourage adoption of MQTT as well?
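As a rough sketch of how that check might work (the function name and the IPv4-mapped handling are my own illustration, not part of the actual service):

```shell
# Hypothetical sketch: classify a client address so an IPv6 login can be
# credited. An address containing ':' is IPv6, but IPv4-mapped addresses
# like ::ffff:203.0.113.5 shouldn't count as "connected over IPv6".
is_native_ipv6() {
    case "$1" in
        ::ffff:*) return 1 ;;  # IPv4-mapped address, doesn't count
        *:*)      return 0 ;;  # real IPv6
        *)        return 1 ;;  # plain IPv4
    esac
}
```

A real deployment would take the address from the web server’s remote-addr variable and record the timestamp of the last qualifying login.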
Thank you for the recommendation. We’re doing something similar too and we’re reaching the limits of Mosquitto as it doesn’t have support for clusters.
The problem, however, is in how those images get produced. Take https://github.com/CentOS/CentOS-Do... for example, from the official CentOS Dockerfile repository. What’s wrong with this? IT’S DOWNLOADING ARBITRARY CODE OVER HTTP!!!
What’s wrong with auditing the Dockerfile? Seems to me Docker is a lot more transparent than other methods. Thoughts?
It’s nice that you can audit them, but they’re all written like this. Docker claims it can be used for reproducible builds, but the first lines in every single Dockerfile are apt-get install a-whole-bunch-of-crap and npm/pip/gem install oh-my-god-thats-a-lot-of-packages. Nobody is actually trying to manage their dependencies or develop self contained codebases, just crossing their fingers and hoping upstream doesn’t break anything.
apt-get install a-whole-bunch-of-crap
npm/pip/gem install oh-my-god-thats-a-lot-of-packages
How is this different from build systems that don’t use Docker? Sure, you might be using Jenkins to build stuff (and have to manage those hosts for the OS-level packages), but for the npm/pip/gem/jar dependencies there’s no difference: you still have to manage them. In my experience, the Docker stuff helps with the OS-level packages (previously we had multiple Jenkins hosts that had the versions of things specific to projects – god help you if you accidentally built your project on the wrong host).
I use maven, where the release plugin enforces that releases only depend on releases, and releases are immutable, which together means that builds are reproducible (unless someone used version ranges, but the culture is to not do that). You can also specify the GPG keys to check signatures against for each dependency. It’s not the default configuration and there’s a bootstrapping problem (you’d better make sure the version of the gpg plugin cached on your Jenkins machine is one that actually checks), but it’s doable.
On personal projects and at work I’ve been putting all the dependencies I use in the source repository. Usually we include the source code, for build tools (premake, clang-format) we add binaries to the repo instead.
There are never any surprises from upstream, and you can build the code on any machine that has git and a C++ compiler.
There’s some friction adding a new library but I don’t think that’s a bad thing. If a dependency is really too difficult to integrate with our build system then the code is probably going to be difficult too. If we need to do something easy people will write it themselves.
At the risk of stating the obvious: if you audit the Dockerfile and it says “hey we downloaded this thing over HTTP and never checked the signature” there’s no way to tell if you got MITMed.
Okay, so then you use another Dockerfile (or write your own). This is a very strange tack to take; you may as well say that Rust is an insecure programming language because with a few lines of code you can create a trivial RCE vulnerability (open listener socket, accept connection, read line, spawn a shell command).
For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked! And when installing via apt isn’t an option, Docker doesn’t keep you from doing the right thing (download over https, check signatures). You’re just running shell commands, after all.
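To make that concrete, here’s a hedged sketch of the HTTPS-plus-checksum pattern as plain shell. The file is simulated locally so the snippet is self-contained; in a real Dockerfile the artifact would come from curl over HTTPS and the hash would be a pinned literal, not computed on the fly:

```shell
set -eu

# Stand-in for a downloaded artifact; in a Dockerfile this line would be
# something like `curl -fsSLo tool.sh https://example.com/tool.sh`.
printf 'echo hello\n' > tool.sh

# Normally this hash is a literal pinned in the Dockerfile; we compute it
# here only so the example runs anywhere.
SHA256="$(sha256sum tool.sh | awk '{print $1}')"

# `sha256sum -c` exits non-zero (failing the build) if the file doesn't
# match the pinned hash, so a MITM'd download stops the image build.
echo "${SHA256}  tool.sh" | sha256sum -c -
```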
My point exactly. There’s nothing wrong with taking an existing Dockerfile that you find to be suspect, beefing it up by correcting some obvious security issues, and resubmitting it as a patch.
I fail to see what the author of the article thinks is a better alternative. I’m open to being convinced otherwise, but saying it’s actively harmful seems overstated.
For what it’s worth, almost every Dockerfile I’ve used installs its dependencies using something like apt or yum/rpm – and signatures are checked!
OK, so the signatures are checked. You still don’t know what version you got.
Then pin the damned versions (apt-get install <pkg>=<version>), point at snapshot repos, and upgrade deliberately. This problem is totally orthogonal to Docker. All typical package repos suffer from it. I only know of Nix that doesn’t.
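A hedged sketch of what pinning looks like in a Dockerfile (the package names and version strings below are illustrative, not real pins – look yours up with `apt-cache policy` or point at a snapshot repo):

```dockerfile
FROM debian:stretch
# Pin exact versions so a rebuild next year installs the same bits.
# Version strings here are made up for illustration.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        curl=7.52.1-5+deb9u9 \
        ca-certificates=20161130+nmu1 && \
    rm -rf /var/lib/apt/lists/*
```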
apt-get install <pkg>=<version>
This is why companies that care host their own registry for Docker images, just like they’ve done for Java, Python, Ruby, etc., for years. It is unfortunate that Docker didn’t design the registry system to be easily proxied, but this is easily worked around with current registry tools (Artifactory, for one).
This was submitted before, but the course just launched again on September 19th, 2016.
It would probably be better for our industry if everyone knew this about programming, not just coders.
And he’s gone! Here’s a collated book made from the spool:
And here is a collated preface from _why’s old spool:
Actually, I don’t think he’s gone anymore. As long as a whois request of whytheluckystiff.net shows the creation date of 2002, we know that _why still has control of that domain and not someone else. In my mind, he’s the performer he always was, and from time to time his site will come to life so he can grace us with another performance.
Yesterday was wonderful. I enjoyed the drama of my printer coming to life as another page of his book was published. I stayed up way too late hanging out on irc.freenode.net trying to make sense of his work. It was fun, it was social, and it was an event I’ll never forget.
I look forward to _why showing up the way my favourite band or comedian will announce a tour in my city.
Jonathan, if you’re reading this, thank you. You’ve inspired me all over again the same way you did back in 2005 for me when I read through the Poignant Guide. I hope you and your family are doing well, and that you keep creating. Keep searching. I know what you’re looking for. I found it myself. It takes time, but you’ll find it. I promise :) You know how to reach me.
Edit: Aaaand I just loaded whytheluckystiff.net. And I laughed, and my day just got brighter.
For posterity’s sake, _why did disappear. His old domain is owned and controlled by someone else now.
So long _why, and thanks for all the bacon :)
The problem is context. To use a downhill skiing analogy: to a double-diamond skier, a green run is simple. But to someone who has never skied before, a green run is terrifying and definitely not simple.
The Dreyfus Model (https://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition) gives a good overview of what each person needs when acquiring a new skill.
I don’t think the word “simply” is either good or bad. It just depends on what skill level the instructions are aimed at.
I’ve already talked about this on Hacker News (https://news.ycombinator.com/item?id=9167452) but I want to reiterate Dan Kubb’s quote “if I’m not embarrassed by code I wrote 6 months ago, I’m not pushing myself hard enough”.
I’m done listening to anyone other than people who still write code, which pretty much eliminates anything TechCrunch says. Uncle Bob said it best recently: programmers rule the world
I’ve been thinking along these lines recently. A lot of ‘tech’ fixates on magical events: releasing popular OSS projects, speaking at important conferences, receiving VC funding, or megacorp job offers. Those are fine and good, but they’re also not terribly common. On sites like HN, you’d think they were, however.
The most disgusting thing about tech: the focus on accrual of status symbols. It’s almost like we’re supposed to be professional Twitterers/speakers/contributors and (oh yeah) sometimes write code for the day job. Quality of code or intensity of problems encountered? Not as important! Brand? Very important!
This is anathema to the pre-Eternal September 2 (er, social) hacker culture, which valued deep knowledge over self-aggrandizement.
“if I’m not embarrassed by code I wrote 6 months ago, I’m not pushing myself hard enough”.
The world is terrible for those who don’t do that.
When I first started on the web, I kept meticulous bookmarks. Then Delicious, followed by Instapaper, and then back to bookmarks, but only haphazardly.
Today I use the web page save feature in Scrivener, filed under different related subjects: coding, finance, statistics, etc. Having an actual copy of the website helps trigger my memory and provides context for what I was thinking at the time.
I wonder what will still exist in 200 years, or even if anything will exist?
What do I have to pay to opt out of governments spying on my family and me?
Historically, the blood of patriots and tyrants.
You make that one out to Smith and Wesson.
One of the most useful starting points for me was Jim Weirich’s “Source Control Made Easy”:
Understanding the underlying concepts first makes it much easier to transition to “the git way” than trying to shoehorn your old version control system’s methods into it.
Full Disclosure: I used to work with the Prags producing podcasts, but had purchased this screencast prior to working with them. I also do not receive any royalties or payment for recommending their content. I really do think it’s great :)
One of the greatest pieces of advice I learned from James was from a podcast interview I did with him back in 2010:
“For the last year or so I’ve been reading a lot fewer blogs and trying to read a lot more papers, academic papers, and papers published by industry”
He talks about this around the 29 minute mark: https://archive.org/details/Coderpath10-JamesGolick
Really going to miss you, James. Thank you for all the work you did and all the wisdom you shared.
Lots of redacted information in those. They’ve complied with the letter of the law but not the spirit of the law.
I for one am happy to see the return of our eccentric friend, even if it’s only for a short while. _why brought so much colour, culture, and fun to the programming world and helped give Rubyists an identity.
We need more bad code. Lots of it. Stuff with gnarls and warts and horrible, disfiguring structures. It doesn’t have to go into customers’ products. It can be like a child’s clay bowl made for their mom in elementary school, beautiful in its own way. Children and non-programmers ought to be able to express themselves creatively, without fear of reprisal or mocking from the so-called elite. It is the messy, organic earth of bad code that allows people to grow into programmers who can make beautiful things. Amazing things. World-changing, dent-in-the-universe things.
But first people need to be free to make a mess.