Over time I have found fakes to be significantly better than mocks for most testing. They allow refactoring internal code without failing all the tests, and you can always simulate an error path by overriding a specific method in the fake.
A fake is a working implementation that takes shortcuts. For example, a fake database could store the data in memory and not guarantee transaction isolation, but it would accept inserts, updates and deletes like a real database.
A mock is a barebones implementation that confirms the methods were called in a specific order and with specific values, but without understanding those values.
For example, I used to have a MockLogger that implemented .info, .warning and .error as jest.fn() functions, and then I had to check that the functions were called with the appropriate values. But now I have a FakeLogger that implements the methods by saving the logged messages to an array, and then I check that the array contains the messages I want.
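The same idea in a minimal Python sketch (the jest example above is JavaScript; the names FakeLogger and process_order here are illustrative, not from any real library):

```python
class FakeLogger:
    """A fake: a real, working logger that takes a shortcut --
    it stores messages in memory instead of writing them anywhere."""

    def __init__(self):
        self.messages = []  # list of (level, message) tuples

    def info(self, msg):
        self.messages.append(("info", msg))

    def warning(self, msg):
        self.messages.append(("warning", msg))

    def error(self, msg):
        self.messages.append(("error", msg))


def process_order(order_id, logger):
    # Code under test; it only needs *a* logger, not a specific one.
    logger.info(f"processing order {order_id}")
    return order_id * 2


def test_process_order_logs():
    logger = FakeLogger()
    process_order(21, logger)
    # Assert on observable behaviour, not on call order or arity.
    assert ("info", "processing order 21") in logger.messages
```

Because the test only inspects the array of recorded messages, refactoring how the code under test calls the logger internally does not break it, which is the point being made above.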
https://martinfowler.com/bliki/TestDouble.html though it is fairly common to talk of Mocks when a different kind of Test Double is meant
Yes, indeed. I ended up there as well. Nevertheless, it was fun to see this little tidbit and see why Perl is, for lack of a better word, strange.
Now we just need someone to package Python + pip into a zip on GitHub and set up a bash script to install that, just to install Tailwind. (Kidding; I do get that the goal is to get everything implemented in Python too.)
I’m not really sure I understand the desire to port projects that were relying on Node into Python, Ruby, etc. Node “won” the competition around how to compile front-end assets years ago. If Node requires too much fiddling around, why not improve the Node ecosystem, instead of porting a bunch of the logic to another platform?
It is open source code though, so people should work on whatever makes them happy.
https://deno.land/ is a great alternative to node
Nothing.
But in most cases, $foo and "$foo" produce the same result, except the latter is slightly wasteful and slightly uglier. And in the fewer cases where they do differ, "$foo" produces the wrong result 99.9% of the time. So if it's what you want then there's nothing wrong with it, it's just exceedingly rare that you do want it.

And because of that, I find that writing it as "$foo" doesn't sufficiently emphasise that I specifically wanted that, so I write those cases as something like '' . $foo so that it will be absolutely clear that the code was written that way deliberately. So actually, I do in fact never write "$foo".
So that’s what is wrong with it after all: nothing from the computer’s perspective, but it communicates intent poorly to the next programmer.
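The discussion above is about Perl, but the underlying distinction, passing a value as-is versus forcing it through string interpolation, exists in other languages too. A Python analog (illustrative only):

```python
from pathlib import Path

p = Path("/tmp/demo")

# Passing the object keeps its type and behaviour:
same = p            # still a Path; p.exists(), p.name, etc. all work

# Wrapping it in interpolation forces stringification:
s = f"{p}"          # now a plain str; the Path behaviour is gone

assert isinstance(same, Path)
assert isinstance(s, str)

# Writing the coercion explicitly communicates the intent better,
# much like the '' . $foo trick above:
explicit = str(p)
assert explicit == s
```

As in the Perl case, `str(p)` says "I deliberately want the string form" in a way that an interpolation wrapper does not.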
also see : https://convos.chat/
You got to play NCSNIPES, right?
That was the best feature of Novell Netware, I used to play ncsnipes for hours with friends.
The best SRE recommendation around Memcached is not to use it at all:
Don’t use memcached, use redis instead.
(I do SRE and systems architecture)
… there was literally a release yesterday, and the project is currently sponsored by a little company called …[checks notes]…. Netflix.
Does it do everything Redis does? No. Sometimes having simpler services is a good thing.
SRE here. Memcached is great. Redis is great too.
HA has a price (Leader election, tested failover, etc). It’s an antipattern to use HA for your cache.
Memcached is definitely not abandonware. It’s a mature project with a narrow scope. It excels at what it does. It’s just not as feature rich as something like Redis. The HA story is usually provided by smart proxies (twemcache and others).
It’s designed to be a cache, it doesn’t need an HA story. You run many many nodes of it and rely on consistent hashing to scale the cluster. For this, it’s unbelievably good and just works.
Seems like Hazelcast is pitching itself as the successor to memcached: https://hazelcast.com/use-cases/memcached-upgrade/
I would put it with a little more nuance: if you already have Redis in production (which is quite common), there is little reason to also add memcached and take on extra complexity and new software you may not have as much experience with.
it’s pretty much abandonware at this point
I was under the impression that Facebook uses it extensively. I guess Redis it is.
Many large tech companies, including Facebook, use Memcached. Some even use both Memcached and Redis: Memcached as a cache, and Redis for its complex data structures and persistence.
Memcached is faster than Redis on a per-node basis, because Redis is single-threaded and Memcached isn’t. You also don’t need “built-in clustering” for Memcached; most languages have a consistent hashing library that makes running a cluster of Memcacheds relatively simple.
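The consistent hashing mentioned above can be sketched in a few lines. This is a minimal hash ring in Python, the general technique memcached client libraries use to spread keys across nodes; real libraries add weights, per-node failure handling, and tuned hash functions, and the node names here are made up:

```python
import bisect
import hashlib


class HashRing:
    """Minimal consistent-hash ring. Each node is hashed onto a circle
    many times ("virtual nodes"); a key maps to the first node found
    clockwise from the key's own hash."""

    def __init__(self, nodes, replicas=100):
        self.ring = []  # list of (hash, node) tuples
        for node in nodes:
            for i in range(replicas):  # virtual nodes smooth the distribution
                self.ring.append((self._hash(f"{node}:{i}"), node))
        self.ring.sort()
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def get_node(self, key):
        # First ring position clockwise from the key's hash (wraps around).
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]


ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
node = ring.get_node("user:42")  # same key always maps to the same node
```

The property that makes this attractive for caches: adding or removing one node only remaps the keys that hashed near it, so most of the cluster's cached data stays warm.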
If you want a simple-to-operate, in-memory LRU cache, Memcached is the best there is. It has very few features, but the ones it does have it does better than the competition.
Just as an FYI, most folks run multiple Redis processes per node (CPU count minus one is pretty common), so the "single process" thing is probably moot.
N-1 processes is better than nothing but it doesn’t usually compete with multithreading within a single process, since there can be overhead costs. I don’t have public benchmarks for Memcached vs Redis specifically, but at a previous employer we did internally benchmark the two (since we used both, and it would be in some senses simpler to just use Redis) and Redis had higher latency and lower throughput.
Yup. Totally. I just didn’t want people to think that there’s all of these idle CPUs sitting out there. Super easy to multiplex across em.
Once you start wanting to do more complex things (structures, caching policies), it may make sense to switch to Redis.
Yeah agreed, and I don’t mean to hate on Redis — if you want to do operations on distributed data structures, Redis is quite good; it also has some degree of persistence, and so cache warming stops being as much of a problem. And it’s still very fast compared to most things, it’s just hard to beat Memcached at the (comparatively few) operations it supports since it’s so simple.
There is one aspect where Ed maybe has an actual advantage: It keeps you in write mode and discourages editing. I will consider using Ed for journalling where I currently use vim.
There is one aspect where Ed maybe has an actual advantage: It keeps you in write mode and discourages editing.
cat > $filename will do that, too, but with ed I can switch back to command mode, save what I’ve done so far, and then continue by returning to append mode. Though I could probably do the same with cat >> $filename, but I’m afraid I’d forget that I need to type > twice to append and end up overwriting the file. :)
This is why I prefer writing drafts in a chat with myself. Also because of the enforced pacing: the rhythm of hitting Enter when a line is done, and then leaving it as it is.
Have you heard of vi? It’s a “visual” mode for ed. A truly amazing innovation. It lets you see the file while entering ed commands, and changes get reflected immediately.
Isn’t that, mostly, sam?
I frown at putting all of Perl into a chroot, but there isn’t really a good alternative. You could use FastCGI, run the Perl process outside of the chroot and leave its socket in /var/www, so that httpd/nginx only has access to the socket and there are no Perl guts inside the chroot to use, but the Perl script should really then be chrooted separately.
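For the socket variant, an illustrative httpd.conf sketch (OpenBSD httpd; the server name and socket path are made up for the example):

```
# httpd chroots to /var/www by default, so "/run/app.sock" below is
# really /var/www/run/app.sock on the filesystem. The Perl FastCGI
# process, started outside the chroot, creates its listening socket
# there; httpd only ever sees the socket, never the Perl installation.
server "example.com" {
	listen on * port 80
	location "/app/*" {
		fastcgi socket "/run/app.sock"
	}
}
```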
This gets much uglier with big things like Ruby on Rails.
The purpose of the perl-in-chroot portion of the article was more like ‘hey, this is how you would do it if you wanted to’. As I mentioned in the article itself, I only host static content.
Hopefully it'll eventually support something like proxying to a second HTTP daemon, so for Perl you could chroot perl + Starman and have httpd proxy to it.
cron is one of those necessary tools that could really use an evolutionary functionality step to clear cruft like this out of the way. Another example is error/output handling, an issue that makes a tool like cronic necessary: http://habilis.net/cronic/
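The cronic idea (stay silent on success so cron only emails on failure) is small enough to sketch. This is an illustrative Python version, not the real cronic, which is a shell script with more polish:

```python
#!/usr/bin/env python3
"""Cronic-style wrapper sketch: run a command, print nothing on
success, and dump everything (so cron emails it) only on failure."""
import subprocess
import sys


def run_quietly(argv):
    result = subprocess.run(argv, capture_output=True, text=True)
    if result.returncode != 0:
        # Only speak up on failure; cron mails any output it sees.
        print(f"command failed with exit code {result.returncode}")
        print("--- stdout ---")
        print(result.stdout, end="")
        print("--- stderr ---")
        print(result.stderr, end="")
    return result.returncode


if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(run_quietly(sys.argv[1:]))
```

Used as a crontab prefix (e.g. `0 3 * * * wrapper.py /usr/local/bin/backup.sh`), a clean run produces no output and therefore no email.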
https://wiki.archlinux.org/index.php/Systemd/Timers#As_a_cron_replacement if you’re ok with the creeping horror that is systemd
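For reference, a minimal systemd timer pair (unit names and the script path are illustrative); the scheduling lives in the timer unit, and failures land in the journal instead of being emailed:

```ini
# backup.timer -- enable with: systemctl enable --now backup.timer
[Unit]
Description=Daily backup

[Timer]
OnCalendar=daily
Persistent=true    # run at next boot if the scheduled time was missed

[Install]
WantedBy=timers.target
```

```ini
# backup.service -- the job the timer starts (matched by base name)
[Unit]
Description=Backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh
```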
XUL was great and really hasn’t been met or passed by any UI tech since. It certainly could have been improved, but sadly it was abandoned instead.
was abandoned instead (c) Mozilla
Wait, I thought the article said exactly the opposite:

So, aside from native OS panels and menus, HTML is the UI tech that met or passed XUL, no?
One could even say (or read, in the article) that XUL’s -moz-box layout actually bootstrapped CSS flexbox.

The key part there is the ‘nowadays’ bit. When XUL was introduced, it provided wrappers for native controls and also had deep integration with XPCOM, so you could build your own components wrapping per-platform widgets and expose them easily through XUL. It was closer to a cross-platform XAML than HTML.
HTML is a very different beast than XUL. It isn’t really designed for building desktop UIs and lacks both widgets and obvious layouts. Yes, you can always fake them with enough CSS (and sometimes JS), but it’s not the nice simplicity of XUL. Android’s XML layout format is probably the closest.