What does it address though? I mean seriously, do you really think you can resist in really dangerous situations?
It’s protecting you against the police, who operate under a legal framework which prevents them from beating you with a rubber hose but not from obtaining your fingerprints.
I left the caveat out for the sake of brevity, but like I said the threat model this functionality is addressing is not one where the attacker can utilize any means necessary. Is there any practical system which can address that scenario?
This brings up a good point though, is it true that, let’s say TSA agents, can force you to unlock your phone with your fingerprint but not with a passcode? Honest question.
I don’t know about TSA, but it’s true that cops can.
As far as I know, fingerprints aren’t protected under the fifth amendment but passwords are: http://mashable.com/2014/10/30/cops-can-force-you-to-unlock-phone-with-fingerprint-ruling/#g3MF5oyDTOqN
I randomly generate text in the same way I do for passwords when I need to fill in a security question
Pretty much this; I haven’t put a real answer into a security question in years. I use random actual words though, from a passphrase generator, because I’ve seen companies that want you to verbally verify your security question answers before letting you do certain things. Yeah, I know, pretty WTF, but what can you do?
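For anyone who wants to try this, here’s a minimal sketch of a random-word answer generator. The word list here is a tiny made-up stand-in; a real generator would draw from a large curated list like the EFF diceware lists:

```python
import secrets

# Tiny illustrative word list; a real passphrase generator draws from a
# large curated list (e.g. the EFF diceware word lists).
WORDS = ["maple", "otter", "granite", "velvet", "comet",
         "harbor", "tundra", "saffron", "walnut", "ember"]

def security_answer(n_words=3):
    """Return a random but speakable answer, e.g. 'otter granite comet'."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(security_answer())
```

Plain dictionary words are easy to read over the phone, which is the whole point of the exercise.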
Ugh. I’m pretty happy sticking with Python 2, but this post is so bad I’m tempted to switch to Python 3. Even as a joke the Turing complete section is just stupid.
I couldn’t tell whether the Turing Complete section was a joke or just profoundly confused, to be honest. Conflating the language with the bytecode, claiming that one VM can’t run the bytecode generated by another language (or maybe complaining that the bytecode changed? or that there’s no py2 compiler targeting py3 bytecode?), and trying to tie that to a fundamental property that even “languages” like SQL and Excel have ….
It was all very muddle-headed.
Don’t get me wrong, I know he’ll claim that it was a joke, later. That’s not in question. I’m just not sure if it actually is a joke.
I don’t think this is meant as a joke. The “post” is a chapter in his book Learn Python the Hard Way.
Difficult To Use Strings
Wait.. what? Strings are a mess in Python 2. They have been cleaned up in Py3.
Python 3 Is Not Turing Complete
No comment.
Purposefully Crippled 2to3 Translator
It took me about one day of work to port a 70k-line Django application to Py3 with the 2to3 tool. That was one year ago. Since then I’ve only found two bugs caused by the conversion. Doesn’t seem that bad to me.
Too Many Formatting Options
Yes, I can agree with that. This is the only valid criticism in that chapter.
I agree, but just as a data point, it’s taken about three person-weeks for us to port a large (~200 kloc) Django application (over the course of several months).
The points that made this take a while:
We tried to maximize the amount of changes that were Py2 + Py3 compatible. This meant pulling things in over time, heavy usage of six, and catching a lot of bugs early on. Highly recommended for easy reviewing by third parties. For example: a couple changesets that were just “use six for more imports”.
We deal with many different encodings, and a lot of old text-to-bytes conversion code was pretty brittle. Py3 forced us to fix a lot of this.
Imports! 2to3 generated way too many false positives for our taste (making it hard to review the real changes), so it took a while to find the right solution (for us: a monkey-patched six that would minimize changes).
Changes of standard APIs from lists to iterators generated a decent amount of busy work. Changes for the better, though.
Handling binary files. Here, too, we were often doing the wrong thing in Py2, but it would “just work” before.
Lots of dependencies that we needed to upgrade to get the Python 3 support.
Pretty minor things around bytes’ default __str__ method. For example, when checking the output of a process call we would do `if output == '0'`, and that would fail because `b'0' != '0'` on Py3; that turned out to cause more issues down the road.
Issues around pickling + Celery. Our solution mostly centered on reducing our usage of pickle even further (it’s dangerous anyway).
Deployment issues. Juggling Python 2 tooling and Python 3 tooling in the CI pipeline would sometimes mess things up.
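The bytes-vs-str pitfall above is easy to reproduce on Python 3; a minimal sketch (the literal value is a stand-in for real subprocess output):

```python
# On Python 3, subprocess and socket APIs return bytes, and comparing
# bytes against a str silently evaluates to False instead of raising.
output = b"0"  # stand-in for e.g. subprocess.check_output(...).strip()

assert output != "0"            # bytes never compare equal to str
assert output.decode() == "0"   # an explicit decode fixes the comparison
```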
I can only recommend https://pypi.python.org/pypi/modernize
Instead of translating x.iteritems() to x.items(), it translates it to six.iteritems(x), and adds the six import. Fixes 80% of the boring stuff and you only need to focus on unicode/bytes, etc.
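For anyone who hasn’t used six: it’s a thin compatibility layer, and the call above boils down to something like this simplified sketch (not six’s actual source):

```python
import sys

def iteritems(d):
    """Simplified stand-in for six.iteritems: pick the right method per
    Python version so callers don't have to branch themselves."""
    if sys.version_info[0] >= 3:
        return iter(d.items())
    return d.iteritems()

counts = {"a": 1, "b": 2}
for key, value in iteritems(counts):
    print(key, value)
```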
The idea of Python 3 was to make iteration cleaner and easier to read and understand. Now we have to insert calls to six in every loop and for every string. The result is hideous code that’s harder to read, not easier.
I was writing this because
We tried to maximize the amount of changes that were Py2 + Py3 compatible
If that is not your objective, you can just use 2to3 and be ready.
By the way, I really do not understand why the CPython devs didn’t keep iteritems() as an alias for items() in Python 3, with a deprecation warning and removal slated for Python 4. I cannot imagine that it would have been a massive maintenance effort. On the other hand, making a function call instead of a method call does not render code unreadable; I have never heard a Python dev complain about calling str(foo).
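For what it’s worth, such an alias would only have been a few lines. A hypothetical sketch (CompatDict is made up for illustration, not an actual CPython proposal):

```python
import warnings

class CompatDict(dict):
    """Hypothetical Python 3 dict keeping iteritems() as a
    deprecated alias for items()."""

    def iteritems(self):
        warnings.warn("iteritems() is deprecated; use items() instead",
                      DeprecationWarning, stacklevel=2)
        return iter(self.items())

d = CompatDict(a=1, b=2)
assert sorted(d.iteritems()) == [("a", 1), ("b", 2)]
```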
Essentially, Python 3 adoption has not been slow because of “readability”, but because Python 3 bundles fairly boring changes (iteritems → items) with a massive unicode change (requiring quite a few changes to code bases) and few breathtaking new features that weren’t available in 2.7. This is changing at the moment with async, matrix multiplication operators, etc.
Joke or not, the fact that we even have to ask that question significantly harms the credibility of the article.
Just out of curiosity, why are you sticking with Python2 for now? At this point all of the major libraries have been ported to be 2/3 compatible. In addition, while there aren’t any huge new features in Python3 (besides the byte/Unicode separation) there has been an accumulation of small quality of life improvements that from my point of view make it very much worth it to switch.
Not sure about the comment author, but I’ve personally moved over some of my open source libraries to support both python 2 and 3. Moving larger project is tricky though as it involves updating the language, all dependencies and forking the ones that don’t support python 3.
FWIW I’m super happy this project exists, and that they’re exploring something with the potential of being so much better
Can’t help but notice that most of those success stories are 10-15 years old.
I feel like what I want to know most is not being addressed: why, in 2016, would I choose Common Lisp over anything else? In contrast to 2000, most or all of what made Lisp special is available in many other languages today – homoiconicity (Clojure, LFE, Elixir), a strong object system (Scala, Ruby, Perl), native compilation (Go, Rust), etc.
You listed 8 different languages. Part of the appeal of Common Lisp is that it has the advantages of all of those languages in one.
Can’t help but notice that most of those success stories are 10-15 years old.
Last time I checked, ITA (bought by google), Grammarly and SISCOG are in business today and use CL today.
[Comment removed by author]
If “what technology startups are picking” is our barometer, then we’re never going to escape the pop culture. The options afforded to greenfield development aside, startups are not exactly incentivized to make good engineering decisions.
[Comment removed by author]
Can the lisp community mold itself into something that appeals to outsiders in today’s world? I think it can, but it hasn’t yet.
Well put!
Right, “most” is a word that leaves room for a few counterexamples.
Even so, two of the three companies you mention wrote the bulk of their Lisp code prior to ten years ago. I also hear from people inside Google that most of the ITA Lisp code has been rewritten at this point.
My question remains: why use Lisp now?
The examples I picked are the ones highlighted on the main page, not ones I cherry-picked. I’ve taken the time to count them all: of the 15 examples, 7 are current and 8 are old. The author has gone out of their way not to list only use cases from the ‘glory days’ of CL. Given that people mostly write in what they are familiar with (the reason why C#’s GC was written in CL and then mechanically transformed is that the author already knew CL) and that the Lisp community is small, it seems to me that the author has done a fair job of listing current success stories of Lisp.
I don’t know people inside Google, but given that people at Google are still contributing features from their own fork to SBCL, like the fast interpreter ~5 months ago, and build their Lisp code with Bazel, there is still Lisp code there; I just don’t know how much.
Common Lisp provides all of the above, for one thing. :-) So you don’t have to sacrifice X for Y and struggle with the tradeoffs.
What Common Lisp provides, specifically, that the above struggle to, is a fully integrated system of interactive development in the tradition of Smalltalk. I’m tempted to believe that Erlang might provide such a thing as well, but I regret to say that I don’t understand it well enough.
Common Lisp provides, further, a strong cross-platform story; if you’re interested in old-school enterprise development, commercial offerings like Franz’s Allegro CL provide enterprise integrations. CL also has a reasonably functional JVM port (ABCL) as well as a subset (mocl) compiling for iOS and Android. Parenscript is also on offer to compile to JavaScript.
Simply put: if you want to invest in Common Lisp for your company and deploy it everywhere, it will take you through the entire modern stack of development without having to port functionality into new languages. You can build a company on it and deliver products with it, for platform after platform, product after product. I think out of all the other existing languages, only C provides the same level of cross-platform capability, and at a profoundly lower level of capability.
What Common Lisp does not provide, however, is strong compile-time type checks: for that, Rust and Scala (from the above list) stand out. Libraries can be an issue, if you have some genuinely complicated problems that you don’t care to address in-house.
What Common Lisp does not offer, further, is the “whipitupitude” that Perl & Perl’s children so prize. It and its community have valued studied and thought out solutions to problems over quick hacks. This has made it less than perfectly popular in the 2008+ Zeitgeist.
If I wanted to get back into writing CL, what compiler would one use (OS X)? I had a license ages ago for Allegro, but am more interested in SLIME + ??? these days.
Either CCL or SBCL. CCL will have more OSX-y integrations; SBCL is tuned finely for Linux and works well on OSX; SBCL is, AFAIK, what 80% of people in the open source community use, however.
I’d suggest seeing what your Allegro license will get you today. It’s the case that SLIME pretty much integrates with everything out there AFAIK.
The purpose of this page is to sell you on CL, and to provide clear, easy-to-follow instructions for setting up your own CL development environment. Doing an honest comparison with other languages would take a considerable amount of effort, outside the scope of this project; that is why most of the ‘comparisons’ are mostly fluff, touching on syntax and using weasel words/marketing slogans like ‘pythonic’.
Some further comments:
native compilation (Go, Rust), etc.
that is a property of an implementation not of a language
a strong object system (Scala, Ruby, Perl)
What does strong mean in this context? Seems like a weasel word to me. I know Perl has a MOP but not much about it. Ruby does have hooks like respond_to_missing?, but afaik (I may be wrong) slots/attributes are not themselves instances of classes, so the programmer is not able to extend and modify their behaviour.
Clojure
I don’t know much about Clojure, but someone who does and thinks CL is better designed
More importantly, a PL is more than a collection of features. It is also important how the features play off each other. Now, by no means is CL perfect. I think the MOP could be further improved; for example, Pascal Costanza has written about the woes of make-method-lambda. But it does have a lot to offer even 20 years into the future.
Everything you can refer to in ruby is an instance of class (including Class, Object etc). Methods are definitely instances.
Oh, I do so hope Apple counter-whacks them with a DMCA suit for reverse engineering a protection mechanism…
…. my sense of Schadenfreude would know no bounds.
Logically, how is a hardware patent more patentable than a software one? Difficult to invent / investment needed to create / etc.
What’s an argument for the discrepancy?
I don’t think anyone is under the impression the discrepancy is logical; I’ll go out on a limb and say that nobody involved with the law in a professional capacity expects that. There has been a history of court cases (at least in the US), which have moved the line of what is patentable back and forth. Distinctions have been drawn both in statute and in court rulings, and I don’t feel competent to summarize which of these are presently “in use”, since even that is a complicated question.
Early rulings, prior to widespread public understanding of what software was, concluded that software was no more than a concrete expression of an algorithm, and that although its concrete expression is copyrightable, algorithms are simply expressions of mathematical truth (this was the court’s position; of course many programmers will disagree), and as such are not invented by humans.
So, you asked for “an argument” and I can give that, but I’m afraid I can’t speak as to the logic of it. :)
Hardware often also needs the means of manufacturing. (e.g. being able to create chips of a smaller size)
The general idea of patents was to encourage people to open up their inventions and means to the general public in exchange for protection. In this context, that still makes sense. Much effort is involved in finding those methods.
Serious question, does anyone know how enforceable this really is? Maybe I’m optimistic, but if this ludicrous statement actually protects anyone I’d be a little surprised, and sad.
At the moment, I’m not aware of a single case where a company has been held liable for a security breach of end-user data, even in the absence of any disclaimer. My vague understanding of the law is that users would have to not only have experienced some specific financial loss resulting from identity theft, but be able to prove the entire chain of criminals who had resold and eventually exploited their data, showing that the original harm resulted from this particular security breach. In a world where Target only learned they were hacked because security researchers saw credit cards for sale and did statistical analysis to figure out what those cards had in common, that is an impossible thing to prove.
So, someday this disclaimer might actually meet an enforceability test, but not any time soon.
Only if whoever does it is prepared for the fact that almost all businesses will fail, and won’t give in to pressure to say most companies are more or less okay.
I feel like advocacy organizations need to be run by someone who feels some anger, if they’re going to be effective. :)
We know that there are many reasons for running an ad blocker, from simply wanting a faster, cleaner browsing experience to concerns about security and tracking software.
I have to keep repeating this over and over again, because people don’t seem to get it.
I don’t want ads because I don’t want to be manipulated into buying things I don’t need. I especially don’t want to allow this manipulation while I’m in the middle of something else.
I’m ok with Wired deciding that either I accept manipulation or I pay. It’s their terms and their website, but I want them to be honest about what ads are supposed to do: convince people to buy things they don’t need. That is their primary purpose. Not some goodwill support of web publishing.
The use of “you” wasn’t meant to imply this was a personal message. There are also readers who are concerned about speed and security.
Judging by the number of upvotes I get whenever I bring this up, a lot of people feel like me, but somehow this point of view seldom gets articulated. Everyone always talks about the privacy, security, and speed costs of online ads, but few people seem to talk about what ads are really for and whether we, as a society, should be accepting what ads are doing. The proponents of ads usually say something like “I can put up with them”.
But their ultimate purpose, is it good? Are ads an indispensable part of our society? Must we inevitably put up with them? Does anyone really enjoy watching as many ads as possible? Would we have some kind of societal collapse if we just banned ads, like the Cidade Limpa initiative did? That’s the conversation I want to be having.
Upvotes don’t necessarily mean people agree with you; it can equally be that your comment is well-written or the like.
I think your tendentious description of the purpose of advertising is wrong. It would be more accurate to say that ads are supposed to convince people to buy things and are indifferent towards whether they need them. But even that’s not really true; a repeat customer is far more valuable than a one-off customer. The purpose of ads is to make as much money as possible. To the extent that customer behaviour rewards actual value, customers' and advertisers' interests will be aligned.
I think “manipulate” is also misleading here. There is rarely a hidden agenda in advertising; everyone knows what it’s about.
We can have that conversation. I think there are good and bad ads. I will sometimes go out of my way to watch ads (in moderation, as with most other things I enjoy) - particularly if you’re including stuff that blurs the line between advertising and content (e.g. native advertising, product placement). My understanding was that that initiative in Sao Paulo had had mixed results and been scaled back?
For what it’s worth, I upvoted because I agree with the statement. My brain is a precious thing, and even if it’s just a small inclination that one product/company is better than another because I’ve seen a funny ad, it’s not worth it to me. It feels gross; my brain real estate is not for sale.
I think “manipulate” is also misleading here. There is rarely a hidden agenda in advertising; everyone knows what it’s about.
That’s the thing, everyone thinks that they are too smart to be fooled by ads. Clearly, everyone is wrong, because if nobody were being fooled by ads, then advertisers would be giving up.
The thing with ads is, kind of like placebos, they work even if you’re aware that it is an ad. A lot of the time all that ads want to do is make sure that you’re keenly aware of a brand. It doesn’t matter how you feel about Apple or Nike; all that matters is that by watching ads you’re aware that they are an option at purchase time.
This is a strange argument, that I am somehow better off being ignorant of my options at purchase time. I disagree.
It’s not a choice between ignorance and knowledge. It’s a choice about where that knowledge comes from, and whether your awareness comes from organic factors or is awarded to the highest bidder.
This is why having both ads and organic factors is required, making ads not inherently evil.
Ads tell you what’s available on the market. This is why targeted ads are such a big thing and why people opt out of privacy in exchange for learning about products they need. Or, cynically, products they care about but do not need yet, which market capitalism tricks them into wanting.
Organic factors like reviews (both by professionals and normal users) and your friends’ experience with the brand or product are important for making the price/quality or value/utility calculations between the competing available options.
We might get by with only word-of-mouth but that would certainly put a high barrier for entry for new companies and products, only strengthening the existing monopoly-like situation.
Highest bidder ought to be delivering value - after all, they’re making enough money to make high bids. There’s nothing to guarantee that any “organic factor” will have your best interests at heart.
Again, “fooled” is a tendentious characterization. Ads evidently make money (otherwise advertisers wouldn’t bother with them). That doesn’t require there to be any deception going on.
I find it fascinating and scary that GitHub has managed to attract a number of large projects to move over. In Python’s case, though, I wonder: if there were a GitHub equivalent for hg, would they have moved to that instead of GitHub?
I don’t think so.
There were some discussions a while back, posted here or HN, about moving Python away from Mercurial to Git. I don’t recall GitHub being mentioned at the time, but there seemed to be a strong desire to move to Git. The biggest issue was that most contributors said they were more comfortable with Git, but I think there may have been other issues with Mercurial.
I’m also hesitant about everything moving to GitHub, but have to admit it’s a pretty great tool.
I don’t recall any technical objections to hg itself. Well, there was this, which is a response to a bunch of uninformed opinions about what hg can or cannot do.
It’s the great tragedy of hg: since it’s not popular, its features are not well-known, and its critics argue strongly from a position of unfamiliarity with it. Thus, it keeps viciously cycling deeper into obscurity. ;_;
Probably not. They are moving to git+github because their contributors are already more familiar with these tools+platforms than hg.
Not so much git as github. They just want whatever is popular because they want more contributions. They also don’t want to sysadmin themselves. That’s why gitlab and bitbucket were discarded.
Curious, why does this scare you? Generally I agree, less centralization. But in this specific case, it’s sort of easy enough to move away if something were to happen, GitHub has showed little signs of being evil (at least that I’m aware of), and has overall been a great thing for open source.
And as a plus, like others have mentioned, many are familiar with GitHub, so maybe their contributions will increase.
I think it is not always quite so easy to move away. Lots of links and with no way to redirect them after a move.
Yes: it’s not just the code, but the trail of issues, comments, and pull requests alongside that give it meaning. When the Cooper Hewitt Smithsonian Design Museum acquired the application Planetary in 2013, they got the whole repo transferred to them:
Although Bloom folded in 2012, its three principals have not only gifted the code for Planetary to Cooper-Hewitt they have also given us explicit permission to publicly release the source code under an open source (BSD) license, and its graphical assets under a Creative Commons (non-commercial) license.
…
As a research institution we are also interested in reaching new understandings of the ways designers use code that can be gleaned from the code itself.
As we are acquiring a source code from the version control system that it was managed in (also GitHub), we have been able to preserve all the documentation of bugs, feature additions, and code changes throughout Planetary’s life. This offers many new interpretive opportunities and reveals many of the decisions made by the designers in creating the application.
I’ve personally found that once a repo is on GitHub and using their pull request system to manage changes, it becomes incomprehensible outside of GitHub. The history and discussion end up trapped within their interface. I’m also concerned that GitHub is replacing (or has replaced, depending on your view) SourceForge as “the” place to host code without any meaningful competition. Looking back at what happened with SourceForge, I’m slightly concerned we may be heading down the same path.
I suppose it’s a sad commentary on modern society that this actually somewhat reassures me, but it does.
Reassures you in what situation?