Seems fair I guess. They probably made thousands of easy ad dollars off Nintendo’s property, so it’s normal they have a problem with this.
However, is Nintendo actually making a profit off the original Zelda, for example? I mean, is there a way for me as a player to play the original Zelda without having to search for a second-hand NES and fish for the original cartridge in flea markets? I get that it's their intellectual property, but still, it's not like they still sell those games.
The current philosophy of the law is that Nintendo has an eternal right to tax Zelda. It was never meant to go into the public domain, will never go into the public domain, and if legislators have funny ideas about this stuff then they’ll use their billions of previous culture tax revenue to bribe (er… “lobby”) them to have the right ideas again.
Anyone who gripes about this state of affairs is obviously a commie trying to steal from them.
In my understanding, in France and probably other countries, works (not sure exactly which, but writings and music are included for example, and probably programs/video games) enter the public domain 70 years after the creator's death.
How can this apply to a living company?
The original author(s) license rights to the work, either indirectly through an employment contract or directly via a specific agreement. The 'death' clause becomes really gnarly when the actual work of art is an aggregate of many copyright holders.
This becomes more complicated as the licensing gets split up into infinitely small pieces, like "time-limited distribution within country XYZ on the medium of floppy discs". Such time-limit clauses are a probable cause when content suddenly disappears from whole games, typically sublicensed content like music.
This, in turn, gets even more complicated by the notion of 'derivative' work: fan art or those "HD remakes", where even abstract nuances have to be considered. The stories about Sherlock Holmes are in the public domain, but certain aesthetics, like the deerstalker/pipe/… figure, are still(?) copyrighted. Defining 'derivative' work is complex in and of itself. For instance, Blizzard successfully defended copyright over the linked and loaded process of the World of Warcraft client as such, in the case against certain cheat bots, and pulled similar shenanigans to take down open-source, reverse-engineered StarCraft servers.
Then a few years pass, nobody knows who owns what or when or where, and copyright trolls dive in and threaten extortion fees based on rights they don't have. Copyright in its current form has nothing to do with the 'artist' and is complete, depressing, utter bullshit. It has turned into this bizarre form of mass hypnosis where everyone gets completely and thoroughly screwed.
These aspects, when combined, are part of the reason why the "sanctioned ROM stores" like the Virtual Console have such limited catalogs: the rightsholders are nowhere to be found, and the content can't be safely licensed.
Yep, Nintendo do still sell these games, and it is possible for you to buy them. I bought one of these last week.
I just got a NES Classic and SNES Classic. They are pretty dope! I think that they are starting to care a lot more now that these are a thing :)
This does, however, have the unfortunate side effect of players not being able to play their favorites unless they are one of the ~60 games on these two classic editions. So, that’s sad. :(
Having emojis, which are more and more often used by various tools (often developed in macOS), would be nice. Even better is proper unicode support for non-English languages. Chinese characters in particular are hard to work with as they can’t be aligned properly.
What’s the problem with Chinese characters? They align correctly (that is, fullwidth) for me on Windows console.
I've made a CLI GUI application, and while Chinese characters displayed fine in Linux and macOS, they had alignment problems in Windows. Specifically, the characters would overlap their container and be drawn where they shouldn't. Chinese characters seem to work fine in regular CLI apps, but in more complex, full-screen ones they don't seem as reliable.
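The misalignment usually comes down to display width: CJK characters are fullwidth and occupy two terminal columns, so layout code that uses the string length to size a container draws them past its edge. A rough sketch of the distinction — note the code-point ranges below are a simplification of the Unicode East Asian Width table, not the full thing:

```javascript
// Compute the number of terminal columns a string occupies.
// CJK (and other "wide") characters take two columns each, so
// str.length underestimates the display width of Chinese text.
function displayWidth(str) {
  let width = 0;
  for (const ch of str) {
    const cp = ch.codePointAt(0);
    // Simplified check: a few common wide/fullwidth blocks only.
    const isWide =
      (cp >= 0x1100 && cp <= 0x115f) || // Hangul Jamo
      (cp >= 0x2e80 && cp <= 0xa4cf) || // CJK radicals through Yi
      (cp >= 0xac00 && cp <= 0xd7a3) || // Hangul syllables
      (cp >= 0xf900 && cp <= 0xfaff) || // CJK compatibility ideographs
      (cp >= 0xff00 && cp <= 0xff60);   // Fullwidth forms
    width += isWide ? 2 : 1;
  }
  return width;
}
```

A full-screen TUI that pads cells with `str.length` instead of a function like this will misalign exactly as described.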
The worst open-plan office I've been in was one where developers shared the room with marketing. There were times when it was literally impossible to work or concentrate. On busy days, the marketing guys would spend all day talking loudly on the phone. On quiet days, they'd spend most of the time chatting loudly with each other. Even when I had urgent work, I had no choice but to give up and browse the web or go out, since I couldn't get anything done. Talk about productivity.
It’s not even a criticism of the marketing department - they enjoy their job and good for them, but it was absurd to put us all in the same room.
Having an open office is basically telling your employees that you see them as nothing more than cattle.
Basically what the web was before the recent trend of "minimalism", where links and buttons look like text, all text is light gray on a light gray background, nothing works without JavaScript enabled, etc.
If they have to call this “new” trend “brutalism”, why not. I’d call that common sense.
The first computer programs with mainstream success were word processors.
The early web was filled with "huge images to do designs that HTML doesn't support", and Flash existed for a reason. People have always tried to lay stuff out in different ways.
The brutalist web might have always existed in a certain subset of the web, but it stopped being the web ever since image tags and tables were a thing.
The problem is that people will create the same design but build it with 10 layers of CSS and JavaScript that slow it down.
The three things that always bothered me about my 17-inch MacBook Pro and why I switched back to Windows are:
[]{} and I needed to press and memorise 3-key shortcuts like Alt+Ctrl+5 for each of these characters). Other than that I loved the POSIX environment and terminal emulator, but these issues bothered me too much.
Installation is a bit too complex for a minimal blog generator. Maybe create a Homebrew formula for it?
While I’d argue it isn’t that complex, you’re generally right – I plan to automate the installation and customization process.
“That’s right. A web server. Your CPU has a secret web server that you are not allowed to access, and, apparently, Intel does not want you to know about.” Rejoice!
The letter from Andrew S. Tanenbaum is interesting too:
Apparently an older version of MINIX was used. Older versions were primarily for education and newer ones were for high availability. Military-grade security was never a goal.
The bug report (which will probably have the patch, too) is still locked, and the details of the exploit aren’t public yet. My guess is that it’s related to this:
Devices with the Play Store, as well as AOpen Chromebase Commercial and AOpen Chromebox Commercial will be rolling out over the next few days.
They probably don’t want to release details until everything is patched.
I use Firefox on desktop and mobile, but I found that it’s often necessary to switch back to Chrome for some websites, which are either way too slow or plain broken.
This is a pity: more and more developers target Chrome only, do all their optimisations for it, and kind of assume it will work on other browsers. Chrome is basically the new IE, and Google knows that well. Now they can ship whatever change optimises google.com and YouTube, and too bad if the rest of the web and other browsers are broken as a result.
It seems to be aimed at a specific project, but which one?
I don't think what he describes is a general attitude in open source. In my experience, the biggest issue in small projects is that many, even very popular ones, are pretty much unmaintained or abandoned, with nobody having permission to merge PRs.
I've never seen a project that refuses PRs due to an LTS policy. Some, however, don't want to merge because it breaks backward compatibility, which may be what this article is talking about. But it's generally a good thing to have someone say "no" to a change if it's going to break the build for dozens of users.
In my spare time I continue working on my note-taking app Joplin. I'm having a go at an Electron client for Windows at the moment. This framework is new to me, but I'm impressed by how easy it is to get things running on it.
This looks pretty neat. Interesting that they chose node.js to write it in. I can see the sync with Onedrive and various other services being super useful for some people.
I’ve used Notational Velocity a lot in the past. These days I just use Markdown and The Silver Searcher :)
Thanks, actually I’ve started with the Android app in React Native, and then I figured I could re-use most of this code to create a desktop client. There are some drawbacks working with JavaScript but it definitely makes it easier to write cross-platform code. Silver Searcher with markdown files seems like an interesting custom solution too!
I was pretty uninterested in this (after all, vim and grep is my text-based TODO management system), but when I kept reading and discovered you had a console-based app syncing with a mobile client, it really got my interest. Nice work! Frankly, I still see a lot of jank in the stack you've chosen, but I won't bother with criticism. It's just an awesome app. :)
However, I must now attest to wondering what better technology to accomplish a ncurses->objcMsgSend nirvana?
Would you do another app with this, now that you've done one?
Thanks, I realise it’s not the most popular stack though in this case it got the job done :)
I'm not familiar with ncurses->objcMsgSend. Is that a macOS thing?
ncurses on the console (terminal), objcMsgSend on the iOS side of things. It's just a euphemism for what you've done .. albeit not a very accurate one. :)
Pity this doesn’t come with an emacs mode. It could leverage a lot of org-mode to do the actual note-side magic …
Have you tried deft? It is one of my favorite Emacs packages, and I use every opportunity I find to shill it.
I have, but I’ve not gotten a chance to dive very deeply into it. You’ve inspired me to take another look.
I’m curious, what emacs features would be useful in this app? It’s possible to change the text editor in which the notes are opened, so it can be set to emacs too, but I guess it’s not what you mean?
Org-mode has todo functionality, outlines/hierarchical lists &c. — you could use what it offers rather than having to reimplement for yourself. It’s basically a Markdown alternative with intelligence added.
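To illustrate, a hypothetical org-mode outline showing the kind of built-in structure (TODO states, deadlines, tags, hierarchy) a note-taking app would otherwise have to reimplement:

```
* TODO Write release notes                                    :joplin:
  DEADLINE: <2017-11-20 Mon>
** DONE Sync the backend with OneDrive
** TODO Test the Electron client on Windows
```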
Writing Joplin as an application within emacs (leveraging org-mode) would also free you from a lot of the nitty-gritty details of dealing with terminals, redisplay, panes/windows, writing a command mode &c. It’s pretty cool (and the reason that I’m more than a bit of an emacs fanatic). Basically, you could stand on the shoulders of others, which is always awesome.
Certainly, what you’ve already built is pretty cool. One advantage of writing it in JavaScript — as you have — is that you can easily share the backend between your mobile app and your CLI program. There’s no good story for doing that with elisp right now (or, probably, ever).
On the other hand, you have to embrace Emacs, and for many that's a hard sell. Elisp hasn't been very good performance-wise for years, the UI can be problematic even if it's completely reconfigurable, and Emacs is basically a silo, as you've implied. I myself have tried to make peace with Emacs, but it never clicks for me.
Technically you don’t have to write too much elisp. Emacs now supports shared module libraries. I’ve written emacs stuff in go.
As a follow-up, here’s an example of a something cool org-mode can do: https://blog.lizzie.io/linux-containers-in-500-loc.html & https://blog.lizzie.io/linux-containers-in-500-loc/contained.c are both generated from https://blog.lizzie.io/linux-containers-in-500-loc.org.
This looks pretty nice. Node.js has surprisingly good tooling for command line tools.
There is a similar set of projects for Ruby called TTY, which I’ve used a fair bit in the past.
Yes, there are quite a few good ones. However, I've been going through most of the Node.js terminal libs recently, and one issue I've noticed is that they are often one-man projects and end up unmaintained. For instance, ncurses, blessed and vorpal are all pretty much discontinued despite being large and having a lot of open issues and pull requests.
Terminal Kit is still active after 8 years and the author is responsive, which is great, and it also has very good documentation and tutorials.
This is a nice summary for someone who might have written JS a decade ago. It hits the main points in the field of ‘weird things you have to do to write JavaScript these days’. But there’s one major omission - package managers. I still have a hard time explaining to myself why we have bower, npm, and yarn all solving the same problem with varying degrees of success.
Bower is officially deprecated. Yarn was created to solve problems that existed in npm < 5 [1]. With the release of npm 5+, you can ignore package managers other than npm at this point.
[1] Exactly reproducible dependency trees & performance being the main ones.
[1] Exactly reproducible dependency trees
Did they remove the ability to re-publish a given version of a package? Hopefully - if not, this feature is just a cake-lie.
From the npm 5 release announcement:
npm will --save by default now. Additionally, package-lock.json will be automatically created unless an npm-shrinkwrap.json exists. (#15666)
This package-lock.json is what provides reproducible dependency trees. It’s the equivalent of Yarn’s yarn.lock.
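For illustration, a minimal sketch of a package-lock.json in the npm 5 (lockfileVersion 1) shape — the package name, version and URL here are hypothetical, and the integrity hash is elided; the point is that each dependency is pinned to an exact resolved artifact:

```json
{
  "name": "example-app",
  "version": "1.0.0",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "left-pad": {
      "version": "1.1.3",
      "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.1.3.tgz",
      "integrity": "sha512-…"
    }
  }
}
```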
Well, I guess I have a different meaning of reproducible. Previous versions of npm would have variance in things like the Makefiles that gyp puked out; this meant that something that was npm shrinkwrap'd and then tar'd up would be different every time. This is the reproducibility I was hoping for. Guessing the lock file doesn't give me that, but I will play around and see.
Not sure if it’s possible to enforce that. You would need to ensure that any postinstall hooks are deterministic, so no calls to Math.random() or new Date().
You also would have to forbid making network requests, accessing the filesystem and really anything that leaves pure javascript land.
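To make the point concrete: a lock file pins which code runs, not what that code produces. A postinstall hook like the first (hypothetical) function below defeats byte-for-byte reproducibility even with a fully locked tree, while the second stays reproducible because its output depends only on its inputs:

```javascript
// Hypothetical postinstall-hook helpers, for illustration only.

// Not reproducible: embeds a timestamp and a random value, so two
// installs of the same locked dependency tree still differ on disk.
function nondeterministicBanner() {
  return `// built at ${Date.now()} (seed ${Math.random()})`;
}

// Reproducible: output is a pure function of its inputs.
function deterministicBanner(version) {
  return `// built for version ${version}`;
}
```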
It still seems like yarn is faster, according to a few blog posts. The margin is much closer, however.
If nothing else, it’s much more consistent. At my company I’ve been migrating our apps from using bower and npm packages to exclusively npm packages, and setting our build servers to use yarn instead of npm. It solved our issue with npm intermittently crashing our builds.
I use yarn because it’s way faster than npm, doesn’t even compare. I develop under WSL though so maybe the difference is more noticeable there.
I use yarn because it’s way faster than npm
This has not been my personal experience. npm 5 has been consistently as fast or faster than yarn in my testing on Windows 10. YMMV.
I've just tried deleting the node_modules directory in my current project with 780 packages and ran npm install - it took 53s. Then I did this again and ran yarn install, and it finished in 27s. It's possible it's something specific to my project or to the fact that I'm running under WSL (maybe npm hits some under-optimised part of it), but anyway, in my particular case it's indeed faster with yarn.
exa is written in Rust, so it’s small, fast, and portable.
-rwxr-xr-x 1 root wheel 38K 28 Apr 20:31 /bin/ls
-rwxr-xr-x@ 1 curtis staff 1.3M 7 Jul 12:25 exa-macos-x86_64
?
Stripping it helps a bit… but not much though.
$ du -hs exa-macos-x86_64
1.3M exa-macos-x86_64
$ strip exa-macos-x86_64
$ du -hs exa-macos-x86_64
956K exa-macos-x86_64
More fun is what it links to:
$ otool -L /bin/ls
/bin/ls:
/usr/lib/libutil.dylib (compatibility version 1.0.0, current version 1.0.0)
/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
$ du -hs /usr/lib/libutil.dylib /usr/lib/libncurses.5.4.dylib /usr/lib/libSystem.B.dylib
28K /usr/lib/libutil.dylib
284K /usr/lib/libncurses.5.4.dylib
12K /usr/lib/libSystem.B.dylib
$ otool -L /tmp/exa-macos-x86_64
/tmp/exa-macos-x86_64:
/usr/lib/libiconv.2.dylib (compatibility version 7.0.0, current version 7.0.0)
/System/Library/Frameworks/Security.framework/Versions/A/Security (compatibility version 1.0.0, current version 57740.60.18)
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1349.8.0)
/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.8)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
$ du -hs /usr/lib/libiconv.2.dylib /System/Library/Frameworks/Security.framework/Versions/A/Security /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation /usr/lib/libz.1.dylib /usr/lib/libSystem.B.dylib
1.6M /usr/lib/libiconv.2.dylib
9.3M /System/Library/Frameworks/Security.framework/Versions/A/Security
9.7M /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation
96K /usr/lib/libz.1.dylib
12K /usr/lib/libSystem.B.dylib
To be fair, exa is a self-contained executable, while ls probably has a dependency on libc, which it loads dynamically. If Rust ever becomes very popular and its runtime is installed by default everywhere, its executables will also be only a few KB.
FWIW, linking ls from GNU coreutils statically with musl-libc on x86_64 gave me a 147K ELF with no shared object dependencies.
For that to be true, Rust would have to have a well-defined and stable ABI, which it doesn't have right now.
Rust binaries actually do dynamically link to libc. Its standard library, which calls libc, is statically compiled into binaries.
Recaptcha is the most insidious of those since it’s unavoidable if you want to use whatever site is using it. uBlock can disable webfonts and block adsense and analytics, Privacy Badger replaces share buttons with local copies, and Decentraleyes does the same for javascript libraries.
On a tangential remark, did anyone notice how Google took Recaptcha and re-purposed the idea by transitioning from deciphering words to helping Google Street View with corrections? I remember I used to have to decipher some obscure text back in '09, before Google bought Recaptcha. Now I have to choose "store fronts" from a bunch of photos, or identify which fixed width x height segments of a larger photo contain a street sign. We've become free workers for their product, and that really rubs me the wrong way.
I do most of this. My point is that the author seems to be implying that using Firefox will help stop Google's hold over the entire web, when it really will not. You need to be careful, avoid certain services, and be willing to live with the inconvenience. :-(
reCAPTCHA is driving me nuts though. Does anyone know of an alternative I could deploy which doesn’t depend on old php code?
The latest version of reCaptcha is mostly JavaScript code with a thin server-side script for validation (which can easily be implemented in any language).
You’d still have to connect to Google to receive the challenge (and you’d still be helping to build their AI)
0.16 -> 0.17 was a big change but I’m puzzled by the implied criticism. Does version 0.16/0.17 indicate “ready for production”? I always inferred from the versions and the sparse documentation & libraries that Elm is experimental, so I’m very cautious about using it for any production stuff (and so I wasn’t bothered by the 0.16 -> 0.17 transition). Projects in experimental phase can and should change, wouldn’t you say?
Unfortunately, these days nobody seems to indicate clearly (or even to know?) whether the project is stable and suitable for production use, and that’s a problem, but it’s definitely not unique to Elm.
React was at 0.14 but in production on things like Instagram for a while. I think Elm is decently ready for production, but there is the whole Clojure-style “at the whims of the BDFL” vibe going on, esp. with the subscriptions change.
That being said, I think it’s mostly fine. Types make this a lot less painful than (say) Py2 -> Py3. If you really like Elm, this is a small cost to pay compared to the pain of using a pure-JS (or even Typescript) environment. In the worst case, it’s a vibrant experiment in building front-end apps. The tools and practices that are showing up are more important than the language IMO.
There is also the whole thing where you can stick on older versions for a while if you don’t want to do the change just yet. It’s not like older stuff disappeared. Py2/Py3 sucks but if you were on Python 2, you were mostly fine.
I think the more frustrating thing is that there wasn't a clear deprecation path. Ideally, you have one or two versions where both mechanisms are in place, so you can gradually port things.
Though I say this as a Purescript user who has all dependencies break at every minor version release ;)
When you do that much triangulation and marketing about being the "practical" alternative, some expectations are implicit in that.
Fair enough. I guess I just don’t believe any tech marketing after having a sufficient number of painful lessons. My own experiments showed that Elm is still quite a long way off from being production ready, and I treat it accordingly.
shrugs We use GHCJS in prod. It may be even less mature than Elm or PureScript, but it doesn’t dictate to us how to build our app, we share datatypes with the backend, and the compiler is very hackable. The GHCJS creator is very easy-going and modest as well.
Yep and I wouldn’t want to be the guy who has to debug this once the lib is deprecated (code generated from the simple “form” example):
var _user$project$Temp1472650027426490$viewValidation = function (model) {
var _p0 = _elm_lang$core$Native_Utils.eq(model.password, model.passwordAgain) ? {ctor: '_Tuple2', _0: 'green', _1: 'OK'} : {ctor: '_Tuple2', _0: 'red', _1: 'Passwords do not match!'};
var color = _p0._0;
var message = _p0._1;
return A2(
_elm_lang$html$Html$div,
_elm_lang$core$Native_List.fromArray(
[
_elm_lang$html$Html_Attributes$style(
_elm_lang$core$Native_List.fromArray(
[
{ctor: '_Tuple2', _0: 'color', _1: color}
]))
]),
_elm_lang$core$Native_List.fromArray(
[
_elm_lang$html$Html$text(message)
]));
};
var _user$project$Temp1472650027426490$update = F2(
function (msg, model) {
var _p1 = msg;
switch (_p1.ctor) {
case 'Name':
return _elm_lang$core$Native_Utils.update(
model,
{name: _p1._0});
case 'Password':
return _elm_lang$core$Native_Utils.update(
model,
{password: _p1._0});
default:
return _elm_lang$core$Native_Utils.update(
model,
{passwordAgain: _p1._0});
}
});
Antivirus processes are often unkillable, even as admin, so there must be a way although I guess it’s not pretty.
You either use an existing framework, or you end up re-inventing one… poorly.
I’ve yet to see a web framework I truly enjoy using. Most of them don’t even try to tame the incidental complexity of the web, preferring to heap on even more. I think this is because the type of people that make web frameworks often love the web to the point where they’re blind to the incidental complexity.
These frameworks seem to take special delight in taking over every aspect of your application, because 'convention.' Apparently, one of the greatest evils of software development is that there is no standard directory for models unless we institute The Right Way. Meanwhile, massive coupling makes testing difficult, causing years of "Fast tests using this one weird trick" presentations to continue to be given.

The best libraries are the ones you can lock away somewhere and forget about.
Enjoyability seems to me to be a bad criterion to judge a tool on. I may enjoy one hammer more than another, but I still need to use specific ones for specific tasks and the ones I enjoy less are no less functional and suitable.
I don’t think the remainder of your post depends on your remark about ‘enjoying’ a web framework, which I feel is extra evidence for that not mattering.
Also, "enjoyability" depends on the timeline you measure it on: day 1, day 100, or day 1000. Stuff like unit testing and fuzzing might not be very enjoyable on day 1, might be far more enjoyable on day 100, and the same testing and fuzzing might put you in a state of absolute bliss on day 1000.
Of course web frameworks are not optimal. However, I'd take the leaky abstraction here and there any time over the mess I have seen with non-framework code. I did Python starting with Python 2.3, which was released in the early 2000s. Back then, I didn't do much Python web development, yet every now and then I wrote something or looked at options for how to do things. This was the time of mod_python and still CGI. Nowadays we have Django, Pyramid and, if you feel like having a bit more freedom, Flask. I must say I wouldn't want to go back.

Potentially, if you have very special requirements that actively go against the typical patterns, no framework is an option, but otherwise it isn't; at least I wouldn't like to take over maintenance of such a codebase.
That’s actually quite a nice indicator: “If someone would use that advice, would I like to take over maintenance of that code base?”.