Warning: long, winding response due to passion for the topic.
Yeah, I definitely agree. I was discussing something similar with a friend of mine, but in terms of art and gaming. Much like mom-and-pop shops, free indie gaming, while centralized on places like Newgrounds and friends, used to be much more accessible, unique, and popular.
In addition to Amazon and eBay, I think an important marketplace, especially for small makers, is Etsy, and MercadoLibre in South America.
It’s odd: despite technology propagating, and development becoming easier and a more common skill, people are opting for marketplace experiences. I guess this is because of the indexing you mention, but culturally, I think it’s come from shortened expectations of request fulfillment (read: instant gratification). We want an answer on the first page, to find the movie we’re looking for in the first few rows, etc. You mention this as well in terms of what producers are optimizing for, and taking that into consideration it seems like a vicious cycle that leads to consolidation of attention. I.e., we want to know a thing and use the internet to know it quickly; people who want to make money off of us wanting an answer will make it so that we use their service to answer our question, and thus make it faster to attract our interest; we use it because it’s faster, but after a while we expect/need/desire it to be answered even faster, ad infinitum.
Well, that whole shtick, founded on the fact that individuality in all things ain’t cheap, is difficult to maintain, and might not actually achieve anything. At least for a mom-and-pop shop. In 2019, how is not making a small retail website any different from not making the hammer to build the chair one sells, or the TVs one sells?
In 2019, how is not making a small retail website any different from not making the hammer to build the chair one sells, or the TVs one sells?
This is how I feel. And there are other examples. Most people don’t write their own blogging software any more, they use an existing piece of software. Most people don’t write their own software to run tabular calculations of various sorts, they use Excel or similar.
This is literally the entire history of our industry: something is novel and people experiment with how best to do it –> best-practices and reusable modules emerge and a novel implementation becomes tedious or even a liability –> something else becomes novel and the cycle repeats.
It’s story time!
Exactly a week ago today, my wife and I took our puppy, a black goldendoodle aptly named Vader, to get neutered. The day before his operation (so Sunday, 02 Jun 2019), I took my dog on his “last adventure” before becoming a eunuch. We usually walk five to ten miles every day. Halfway through, it started to rain. Not much, just a few sprinkles. We continued walking, thinking this was as bad as it was going to get. I couldn’t have been more wrong.
The heavens opened. Things got moist. Drenched from head to toe in less than one second–it looked as if we had dunked ourselves, clothing and all, in a pool of water of the wettest kind. And we still had two miles to go to get back to our car.
I’m happy to report that my dog LOVES the rain! I cannot wait to take him camping and hiking with me. I’m going to train him to run next to me as I bike.
This experience reminded me of when I hiked the Bob Marshall Wilderness as an eleven-year-old pipsqueak who weighed only eighty pounds. We hiked fifty miles in five days; ten miles per day. We had to carry everything in our packs. My pack started out at forty pounds but, with a devious older brother continuously sneaking small rocks into it each day, ended up being around fifty. An eighty-pound weakling with a forty-to-fifty-pound backpack. Four out of the five days, I brought up the rear, sobbing and crying the entire way.
It rained four of the five days straight. We pitched our tents in the rain, slept in the rain, woke in the rain, hiked in the rain. You get the deal. On day five, the rain stopped. What a relief! However, the non-aqueous environment was now filled with mosquitoes as big as your thumb’s last knuckle, bringers of death travelling from miles around. I’m sure that ten boys and three adults not showering for five days brought a stench that would even offend Sam. You’d wipe a death herd of twenty mosquitoes off your arm, and twenty more would immediately replace them.
I broke down. I’d had enough. My entire body ached. The only way backwards was forwards. So I marched on, faster than even the sixteen-year-olds with their long strides. My dad had a hard time keeping up with me. He and I were the first to the cars. But neither of us had car keys. And the last mile stretch was through desert…
I looked back at that experience as my seven-month-old puppy and I walked and had fun in the rain. The winds picked up to around fifty miles per hour as we got closer to our car. I thoroughly enjoyed this experience at thirty-three years old. Why is it that my seven-month-old puppy enjoyed walking two to three miles in this torrential downpour, yet I hated a similar experience at eleven years old?
At just seven months old, my dog is teaching me more about life than I ever thought possible.
So, I tell those two stories to set the stage for this week:
It’s gonna be a crazy week!
This was a great story, paired with an incredible link. Thank you. Makes me jones to camp again, fall asleep to trees rocking with wind and rain, waves crashing, that sorta thing. Eat bean salad.
I’m not sure what @friendlysock sees here as growth hacking. It’s a well-structured community biographeme on a modern technology. Sure, some comments are whatever, but there are others, from capaj and swiftonespeaks, which are rather deep.
As the principal member of a one-man full-stack team, GraphQL has been in my periphery as a tool by which I may be able to DRY up data-access patterns and facilitate frontend development. I was unaware of the different ORM integration tools, so that’s definitely cool. However, the mentioned verbosity of efficient data-query bindings makes me think it might not be best for what I’m looking for (reducing cognitive overhead and overall workload). It could be interesting to try for interactive analytics, however, since that’s a good place to flex query-layer flexibility.
I don’t know; it seems like a lot of news is posted here, to say nothing of the actionability requirements.
I wasn’t the downvoter on either of your stories, for reference. Just familiar with how the community tends to react to things.
I wonder how this is spam… figured sharing new features of a common web dev tool would be useful. :’(
I wouldn’t have called it spam, but I wouldn’t post it here either - there’s a new one every month, and Chrome actively informs you of it when you open the devtools.
That’s fair, but a well written walk-through of new features is a little more helpful to me than the “check out what’s new” notifications when I’m trying to do work.
This should definitely be better-known news, especially given that things like Django build their datetime localization functionality on the back of pytz. Guess I should probably look at migrating to dateutil… As an aside, I didn’t know that dateutil did timezone work; I’ve mostly leveraged it in the past for its flexible date parsing.
I suppose I should apply for a hat for comments like this, but:
Switching Django to another time-zone library is a possibility, though it’d be a significant change and we’d have to handle it carefully. Our time-zone support is built in a way that, for a lot of common cases, avoids the need to work directly with the lower-level pytz code. Specifically:

- True, we tell you to use django.utils.timezone.now() to get the current datetime or as a default for models to store (and what comes out of it – and what we store in your DB – is a UTC datetime).
- If you set TIME_ZONE to tell Django your preferred default time zone, the forms system parses incoming datetime values as if they’re coming from that time zone (and uses pytz correctly when doing so).
- If you set TIME_ZONE, the display helpers (like the localtime template filter, for example) will correctly use pytz to perform conversions for output.
- There’s also the django.utils.timezone.activate() function, which lets you do things like per-user time-zone preferences, and will affect the form and display helpers.
Also, our timezone documentation tells you, if you have to manually convert a single datetime to a specific time zone, how to do it correctly. And the very next section there tells you not to try to work with localized datetime objects in code, but to let Django do conversions for you at input/output boundaries and work only with UTC in your application code.
This is much like Django’s Unicode support, where the framework does the messy work at the boundaries, rather than trying to teach every developer on earth how to do that correctly.
If you’ve got suggestions for how to do that more effectively, patches are welcome :)
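The “UTC internally, convert at the boundaries” pattern described above can be sketched in a few lines. This is not Django code; it’s a hypothetical standalone illustration using the stdlib zoneinfo module (Python 3.9+), with made-up names parse_local and render_local standing in for what the forms system and template filters do:

```python
# Sketch of the "convert at the boundaries, work in UTC inside" pattern.
# DEFAULT_TZ plays the role of Django's TIME_ZONE setting; parse_local
# and render_local are hypothetical stand-ins for the input/output layers.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

DEFAULT_TZ = ZoneInfo("America/Chicago")  # analogous to TIME_ZONE

def parse_local(naive: datetime, tz: ZoneInfo = DEFAULT_TZ) -> datetime:
    """Input boundary: interpret a naive datetime as local time, return UTC."""
    return naive.replace(tzinfo=tz).astimezone(timezone.utc)

def render_local(stored: datetime, tz: ZoneInfo = DEFAULT_TZ) -> datetime:
    """Output boundary: convert the stored UTC value back to local time."""
    return stored.astimezone(tz)

stored = parse_local(datetime(2019, 6, 3, 9, 30))  # "form input"
assert stored.tzinfo is timezone.utc
assert render_local(stored).hour == 9              # round-trips for display
```

Note that with pytz the replace(tzinfo=...) step above would be subtly wrong: pytz requires tz.localize(naive) instead. That footgun is exactly the kind of low-level detail the boundary approach keeps application code away from.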
Thanks for responding so thoroughly! Since I’m using Django REST Framework, I wasn’t aware that Django does that with its forms. DRF seems able to, with its serializers, but requires slight modification. Perhaps that’s the problem with dealing with things at the boundaries: not everyone is going to use the same gatekeepers. I guess that’s the caveat with “common cases.”
For now I’ve been using a signal to ensure that certain models are in the timezone set with TIME_ZONE, using the approach you linked to in the documentation, just to ensure that no matter what (de)serialization mechanism I use, things are in the correct timezone. My case, however, is a little different, because I sometimes have to convert first to a specified time zone and then translate to UTC.
Re: patches, I think in effectiveness Django’s fine, but the problem here is correctness, given that pytz’s basis for timezone offsets can be as trustworthy as a hobbyist’s approximation. I looked at contributing to Django about a month ago, to change pytz to dateutil; wouldn’t some sort of discussion be required before actual development?
The place to discuss such a proposal would be the django-developers mailing list; there’d be a lot to work out as far as how such a transition would happen, whether it’s worth doing, etc., and the mailing list is the place for that.
While probably rather simple from a systems perspective, I thought this was clever! It goes to show how bit operations and macros can really clean up an efficient idea.
The only thing that’s wrong with this is that you should have confirmed with the manager, or whoever approved your contract, that they wanted you to stay in the office while waiting for the assets to materialize.
Sometimes, big companies are happy to pay 16.5k to someone to adapt to their processes and 1.5k for a website.
Sometimes, big companies are happy to pay 16.5k to someone to adapt to their processes and 1.5k for a website.
I’m aware of $300K in annual spend that goes to an unused SaaS tool because two departments can’t agree on whether or not to kill it - but the cost is spread over both budgets, so neither can act on their own. It’s been renewed 3 years in a row.
Welcome to the world of enterprise IT. There’s a fortune to be made if you can do it without losing your mind or your will to live ;)
There’s a fortune to be made if you can do it without losing your mind or your will to live ;)
Joke’s on them, I lost my will to live a long time ago!
Out of pure curiosity, what does this magical $300K SaaS tool do? In my own experience, corporate SaaS is easily dropped when it is not tightly integrated into the business or part of “critical” processes.
Nothing - it’s unused, remember? ;)
Log and metric ingestion. It merrily hums along, collecting data and sending it to a service which receives no logins and sends no alerts.
I’ve no idea what sustains it beyond institutional inertia and the complexity that arises from having two owners on different branches of the org chart.
Well, they did send an email explaining that they hadn’t received the assets, and the following Monday the manager welcomed them with open arms. It’s easy to see that they were wanted there.
I’m definitely in the same sort of situation. I find myself to be an apt web developer, but I strive to become a systems engineer: one with know-how of networking, OSes, databases, and the like. I’m partial to structured learning supplemented with projects, as opposed to more free-form project-based learning, so my resources reflect that:
Teach Yourself Computer Science has really helped me structure a learning path to gain a better understanding of computers and computation. The quality and variety of the resources are fantastic, and the sequencing is also quite nice. It’s not a strict curriculum, but it will definitely get you on the path. Before I found it I had already done many of the things mentioned in the Programming section, but I have since gone through a good part of Computer Architecture and Algorithms and Data Structures (for the practice, I found I learned better through Coursera and edX). I’m going to start the OS section this week (I ordered a physical copy of Operating Systems: Three Easy Pieces).
The Morning Paper: definitely something I don’t read enough of. It’s an incredible blog where Colyer writes a deep summary of a paper from some computer science field every day.
Class Central: a MOOC aggregator that’s served me well in finding good classes.
Learn theory and compare ideas critically. If you follow the hype and what’s popular you’ll find yourself bored again soon.
I think it’s so easy to fall into that trap. And in the front-end world, even if you don’t fall into that trap, it’s the status quo. So you end up dealing with dev culture with the wrong values.
My strategy is to find things that are genuinely interesting, and to also keep in mind that it may not be interesting forever. There will be a time in the future where I will be bored again. But as you’re saying, if I remain critical and focus on fundamentals, that time will be a long ways out.
On the other hand, in this move towards CS fundamentals, it’s all connected. Maybe I’ll just end up floating around, taking periods of time to focus on different areas of computation.
Thanks for all these links! I’ll check out that book. Sounds like a great way to ease myself into the ocean.
AFAICT these are “what if” anecdotes, where the author considers the result of submitting those seminal works in the modern academic climate.
Am I the only one who feels that, principally, AT&T is at fault here? As in, if they hadn’t mistakenly ported the author’s SIM, wouldn’t this all have been avoided? (Or at least the attacker would have had to try a different approach.)
Of course, you can treat someone’s phone number and SIM card like a library card. It has grown to be an identity for many, so you’ve got to handle these kinds of cards very carefully, and it seems that in the US things like this are super easy to scam.
I doubt that AT&T makes any guarantees that messages sent via SMS are secure (they are not encrypted), timely, and sent to the intended recipient. That’s not what SMS was developed for. Using it as a vehicle for 2FA is inherently insecure, but AT&T cannot be responsible for the security decisions of third parties.
This is not to say that having your SIM ported without your knowledge isn’t a huge hassle for the victim, and as a simple customer-satisfaction matter AT&T and others should do better. But legally I believe they are in the clear.
I really like these stories of refactoring; they’re like contextualized programming pearls. I remember, a while ago on here, a similar kind of article on Sublime Text and mmap. It’d be cool to compile them into a sort of “architecture of proprietary software,” haha.
There’s quite a bit of proprietary software I know to have interesting architecture: some documented well, like Windows NT (yes! it’s actually solid under the hood), and some less documented, like Unreal.
I stole the idea from the book The Architecture of Open Source Applications, but it’d definitely be cool to see into those worlds.
Rather, the difference is one of heuristics–thinking about information hiding inspires and promotes design decisions that thinking about objects does not.
It’s near the end of the article, but one of my biggest general takeaways came from the “Heuristic Value” section. Namely, designing in terms of implementation, i.e. class/component structures or whatnot, is limiting in ways we often don’t realize, like a version of the Einstellung effect, and learning to ask new kinds of questions is always a pretty good thing.
The code is cool. The reasoning is… weird:
One big project we undertook was changing how we update the Member List (all those nifty people on the right side of the screen).
It’s a pity that such smart programmers have such big projects.
Something along the lines of what @puhrez replied. However, I don’t think the problem (fast set/list insertion/removal operations on a large data structure) is trivial. What sucks is that people who could implement this work for a company that makes web chat.
If I were a smart programmer, I’d be delighted to implement an efficiency improvement like this at a scale like this, regardless of the domain.
I don’t think that making tools that people enjoy is unfulfilling, especially if you enjoy gaming etc.
It seems that he’s commenting on the perceived triviality of the problem vis-à-vis the expertise assigned to it, i.e. a member list vs. aerospace modelling.
Comicon PR! I’m also gonna try to make adobong baboy sa gata (Filipino-style pork belly stewed in coconut milk) as an experiment for my wedding’s main plates (I’m cooking them).
On Saturday, I’m probably going to do a whole lot of nothing. On Sunday a friend will visit and we’re going to the cinema to watch John Wick 3. Looking forward to it!
If you like over-the-top, violent, grim revenge action flicks that sometimes go to the point of seeming tongue-in-cheek unbelievable, you’ll love it. The fight choreography is absolutely brilliant, and the cinematography and visuals are, too. The second one is even more crazy (in a good way).
What I really like is that the fighting is a tad more realistic than in your average action movie. There are even realistic judo throws at opportune moments.
I loved the first one, which is as close to a video game sensibility as I’ve experienced in a movie theatre. There is something of a diminishing return to the second one; most of the joy of the first was in the lack of exposition, and there is more of that in the second movie.
It’s not a movie that bears thinking about overmuch, but as a visual experience, as a movie – it’s wonderful.
I found that one unwatchable due to its being filmed entirely in first person. Crank reminded me a bit of a video game too, and I enjoyed it thoroughly for what it was (a trashy, over-the-top action comedy). There are bits filmed in first person, but not so much that it becomes annoying.
That one pretty much seems like the rock band (NSFW) behind it got to make a movie featuring stuff from their favorite movies and games, esp Call of Duty. There’s even a CoD character in it haha.
A Makefile or similar is a really good idea: it’s easier and less error-prone than having to remember a whole bunch of steps each time. I think the author would have benefitted from going a bit further: running make via a VCS hook. That consolidates two steps, and ties the necessity of running make to the nice-to-have of VCS (which we might otherwise forget or avoid). Note that this also requires breaking the habit of running make manually; if it’s deeply ingrained in our muscle memory we could try changing the target name, to break our “autopilot”.

For a site that lives in a single directory on a single machine, it’s probably easiest to publish via a post-commit hook. My site has a few remotes which I push to as backups/mirrors, so I have one of those publish the site via a post-receive hook; this lets me commit early and often, without worrying about half-finished things being published (although I also have a separate directory for unfinished work, which I can git mv into place when I’m happy with it).
:O this is wonderful, I totally forgot about commit hooks. Thank you for this tip! I also have a similar setup: frontend for dev and public for build. I’m going to look into this further. https://githooks.com/ seems like a pretty good resource.
Do you have a proprietary server running your site, and hence control of server-side hooks?

I feel that, to prevent publishing unpublishable things, I either gotta come up with some protocol to determine whether a commit contains unpublishable things, so as not to publish if that commit is pushed, or continue doing it manually, especially since my original problem was not committing, not forgetting to publish.

Since my site lives in S3, I could probably leverage GitHub webhooks/Lambda to further automate, hmmmm
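One way to sketch such a protocol: have the hook inspect the paths touched by a push and refuse to publish if any fall under a designated drafts directory. Everything here is hypothetical (the directory names, should_publish, changed_in_push), not part of any actual setup mentioned above:

```python
# Hypothetical "is this push publishable?" check for a git hook.
import subprocess

# Assumed convention: anything under these directories is a draft.
UNPUBLISHABLE = ("unfinished/", "drafts/")

def should_publish(changed_paths):
    """Publish only if no changed file lives under an unpublishable directory."""
    return not any(p.startswith(UNPUBLISHABLE) for p in changed_paths)

def changed_in_push(old_rev, new_rev):
    """List the paths touched between two revisions, as a post-receive hook
    would see them on its stdin (old-sha new-sha refname per line)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{old_rev}..{new_rev}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

assert should_publish(["index.md", "css/site.css"])
assert not should_publish(["unfinished/half-baked-post.md"])
```

With S3 in the picture, the same should_publish logic could just as well live in a Lambda behind a GitHub webhook, gating the sync step instead of a local hook.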
I don’t use “server-side hooks”; I push changes from one place on my laptop to another ;)

I make changes to my site via a working copy at ~/blog, which pushes to a bare clone at ~/Programming/repos/chriswarbo-net.git. That bare clone has a post-receive hook which publishes the site; it also propagates those new commits to a copy on my server and a mirror on github. I actually manage all of my git repos this way, although none of my other projects are Web sites so they don’t do the publishing step.

Regarding unpublishable things: I just stick them in a directory called /unfinished which isn’t linked to from other pages. When something’s finished I’ll move it to a location which is linked to (either
Having a Makefile or similar is a really good idea:
Make is an amazingly powerful tool as long as you don’t stray too far from its core competency of turning some $this_file.a into $this_file.b and building a graph of the dependencies and processes for doing that. When your Makefile has more dummy targets than real ones, that’s a good sign you should have just written a shell script instead. (I’m looking at you, Pelican.)
Oh sure, by “or similar” I just meant a single command, needing no arguments, to build+test+push+etc. which can be easily extended. A script will do, or a complex build system du jour will do; although Make is (probably) fine
I used to use Make for my site (which I render from Markdown using Pandoc), but ran into problems: the Makefile became very complicated, as I tried to avoid repeating myself by calculating filenames, dependencies of index pages, etc. It seemed to work, but I had to learn a lot about (GNU) Make’s special variables, evaluation order, multiple-escaping, recursive invocation, etc. I now use Nix, since I was already using it for per-page dependencies, and its language is saner than Make’s.
Another way to do this would be to just have the Makefile test whether the VCS is in a clean state. If it’s not, you can exit and fail to run with a message like “Hey you! Commit first!”, or just a warning if you want :)

A git example: git status | grep clean will exit 1 if “clean” is not found, and Make will then of course exit as well. I’m sure there is a better way, but the above works quite well in practice. (Obviously there are edge cases, like having a new, not-yet-committed file named “clean”.)
Downloading the live site works, but you really want a regular backup of your entire workstation (not just hopefully-versioned directories). Tarsnap works well for me, but there are a lot of other options (many open-source.)
“Regular” being the key word here. Tarsnap looks cool; perhaps, though, I could just have an encrypt, compress, and push-to-S3 script on cron to save me a few picodollars. But then again, maybe being a cheapskate about secure redundancy isn’t such a good move.
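The cron-able script mused about above could start as something like the following. This is a rough sketch under stated assumptions: the bucket name is made up, the upload shells out to the AWS CLI (boto3 would work just as well), and the encryption step is only indicated as a comment:

```python
# Minimal compress-and-upload backup sketch, suitable for a cron entry.
# "my-backup-bucket" and the aws CLI invocation are assumptions.
import subprocess
import tarfile
from datetime import date
from pathlib import Path

def make_archive(src: Path, dest_dir: Path) -> Path:
    """Create a gzip-compressed tarball of src inside dest_dir, dated."""
    out = dest_dir / f"{src.name}-{date.today():%Y%m%d}.tar.gz"
    with tarfile.open(out, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    return out

def upload(archive: Path, bucket: str = "my-backup-bucket") -> None:
    """Push the archive to S3 via the aws CLI (assumed installed/configured)."""
    # For the "encrypt" part, e.g. run gpg --symmetric on the archive first.
    subprocess.run(["aws", "s3", "cp", str(archive), f"s3://{bucket}/"], check=True)
```

The picodollar caveat stands, though: a homegrown script has no deduplication, no retention policy, and no restore testing unless you build them, which is a lot of what Tarsnap-style tools are actually charging for.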
I’ve actually thought about this occasionally, and generally chalked it up to Direct* and fossilization, but this was a great deep dive. Thank you for sharing.
I love that this article is a good balance of reflective thought and quick shade.
I actually updated my post a bit; maybe I was a bit too harsh. It’s just that APIs shutting down is bad for startups in this space. If you buy something from AWS, you know it will be around for a while. Even with Parse’s graceful shutdown, I spent months convincing customers that Stream would be OK and continue to operate. Anyhow, I was a bit too grumpy about this.