More of a question about the commons clause than its application to these modules in particular, but the link says this:
if your product is an application that uses such a module to perform select functions, you can use it freely and there are no restrictions on selling your product
But the language of the clause prohibits selling of:
a product or service whose value derives, entirely or substantially, from the functionality of the Software.
Why does an application that uses redis as its storage or caching layer not “substantially” derive from the functionality of the software? What does “substantial” mean here? If I write a HTTP wrapper around redis + the redis labs modules can I sell that as a hosted service?
Note that the “idempotence” definition that people use to describe side effectful actions is not the mathematical definition.
In math a function is idempotent if:
f(f(x)) = f(x)
This becomes very relevant in functional programming, as in FP the definition of idempotence is the math definition. Mentioning this in order to avoid confusion.
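To make the math definition concrete, here’s a tiny Python check (using `abs`, which is my example, not one from the article):

```python
# Mathematical idempotence: f(f(x)) == f(x).
# abs() is a classic example: applying it twice is the same as once.
def f(x):
    return abs(x)

assert f(f(-5)) == f(-5) == 5

# A side-effecting "do this at most once" action is a different notion,
# even though developers often call that idempotent too.
```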
As always developers are overloading math terms in a way as to confuse everyone :-)
I don’t find the resemblance very intuitive. I was familiar with the meaning in TFA and got very confused when acquainted with the math definition.
Also in FP and in computer science in general the more correct term for executing effectful actions only once and then caching the result for subsequent calls is “memoization”.
correct term for executing effectful actions only once and then caching the result for subsequent calls is “memoization”
Memoizing actions which have observable side effects seems more like the sort of situation where you shouldn’t be using memoization. As I understand it the term usually implies an optimization that doesn’t change a program’s semantics, and that’s only the case when the function being memoized is referentially transparent.
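A minimal stdlib memoization sketch illustrates both points at once; the `calls` list is only there to make the caching visible, and it also shows why memoizing an effectful function changes program behavior (the cached calls skip the effect):

```python
import functools

calls = []

@functools.lru_cache(maxsize=None)  # stdlib memoization
def square(x):
    calls.append(x)  # observable side effect, only here to expose the caching
    return x * x

square(4); square(4); square(4)
assert calls == [4]  # the body ran once; the other calls hit the cache
```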
Apologies for commenting on the form rather than content here, but is wikia.com really the place this community decided to organize around? Seeing TV shows and cosmetic products for teenagers advertised next to an article on formal methods is… a strange experience.
That’s strange, but the submission has many links, so it’s mostly a positive. The only negative I had, which depends on my goal of industrial adoption, is that Z notation proved too hard for a lot of programmers to understand in a lot of projects. Alternatives, many coming later, also had a better story in automatic verification of code against the specs, generation of code from specs, prover integration, and so on. It’s still interesting for people studying various kinds of logic or the historical use of formal methods. I keep Z papers in my collection just in case their work ever has ideas for solving a new problem.
I wasn’t exactly sure of the best link to post here. There’s a whole book that’s available online, but I thought a portal-type link might be a better entry point.
The Way of Z: https://staff.washington.edu/jon/z-book/
@nickpsecurity, what of Z’s successors would you say did a better job with being understandable to humans?
The main one these days is Alloy. Jackson designed it specifically to address two problems he had with Z, which were that it was too intimidating to beginners and that a lot of valid Z specs couldn’t be model-checked.
Survey is here.
It largely failed due to its learning curve. The B method did, too. Both did improve software quality, though. I had a resource diving into the various methods in a detailed comparison, whose criteria I want everyone doing model-checking or formal verification to consider and weigh in on. Turns out I didn’t submit it even though I thought I did. Oops…
I’ve been referencing the results in comments here: Abstract State Machines and TLA+/PlusCal came out easiest to use with a high benefit-to-cost ratio. ASMs and PlusCal can look pretty similar to each other and to FSMs. TLA+ even has foundations similar to Z’s, minus the complexity. If you like Z, you might also like the concept of TLZ, where Lamport combined Z with temporal logic. That link has him saying the Z and CSP communities ignored his TLZ work twice. So, he ditched Z and created something better: TLA+. It’s going mainstream, with non-mathematicians picking it up thanks to the work of folks like hwayne.
Since I didn’t submit that survey and it’s late, I’ll submit it in the morning around 10-11am as usual. That way people can check it out at lunch time. Stay tuned. :)
I’m more concerned about blanket bans of entire countries people choose to deploy more and more often (not with CloudFlare specifically). After all, Tor is something people choose to use and can just stop any time. Having to circumvent bans just because you happen to live in a “bad country” (as if botnets know about national borders) is another story.
At some point it becomes a fairly straightforward cost/benefit analysis. There are situations where a country-wide ban means the difference between your service being up or down. My organization blacklisted all of China for a while a few years ago because our choice was between our customers in China not being able to access our site and no one in any country being able to access it.
In our case we had the resources to reverse that ban pretty quickly, but if your service is being DDoS’d from a fairly small geographic region where you have few, if any, legitimate users I can at least sympathize with the decision to forgo traffic from that region entirely rather than make what might be a substantial investment for little return.
A lot of the time I’ve seen people do it just because it feels safer that way, even though they’ve never been DDoS’d from anywhere yet. That’s what I’m advocating against.
If you can’t provide an alternative then I can’t take this seriously, and - until then - I hope that nobody else can either.
What do you need Cloudflare for? I’ve never seen a single use case for it (other than their DNS service, which is well done compared to many other providers). People who claim “DDoS protection” seem to have either picked a crappy web host, don’t know how to use caching, or are running Apache.
An alternative to surrendering your visitors to surveillance capitalism and forcing them to train Google’s AI that will enslave them?
I guess there are some use cases for services like CF, but most of the time it is just incompetency, forced on developers by their managers, or a fascination with bloat. A page without a spinner is just not the modern web!
Does configuring rate limiting and doing load testing before production deployment not count as an alternative? It’s not like we weren’t running websites and dealing with the problems cloudflare tries to address before that service existed.
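For what it’s worth, the kind of application-level rate limiting I mean is often just a token bucket; here’s a toy sketch (numbers arbitrary, not anyone’s production config):

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: allow bursts up to `capacity`,
    refilling at `rate` tokens per second. Illustrative sketch only."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(8)]
assert results == [True] * 5 + [False] * 3  # burst allowed, excess rejected
```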
No. Cloudflare isn’t a “rate limiting” service. Your load testing isn’t going to compare to real traffic. It’s a nice thing to do, but should never be considered representative of real traffic.
A lot of the problems that Cloudflare addresses have become worse due to multiple reasons.
Firstly, time has passed and technology has improved, which means attacks have become stronger.
Secondly, services like Cloudflare didn’t exist then, and attackers have since had to find ways to get past them. This means that doing it yourself is substantially harder now, since you probably can’t compete with Cloudflare on DDoS protection. I doubt you ever saw anyone performing the largest DDoS in the world by hacking into people’s IoT cameras back then, either, but comparing reality 10 years ago to now isn’t the best approach to solving these problems.
How are you going to implement DDoS protection? Rate limiting isn’t doing that for you, it’s just rejecting requests that are excessive. That’s what Cloudflare is trying to do here.
It’s not trying to rate limit, that makes little-to-no sense.
EDIT: Also, if you’re the one that marked my response as “incorrect” then I don’t think that you know what “incorrect” means. It is absolutely correct to say not to consider a non-alternative as an alternative. Downvotes shouldn’t be an “I don’t agree” button.
I don’t claim to understand the nitty-gritty of HTTPS, but I feel like it should be possible to cache HTTPS requests by having clients recognize the caching server as a certificate authority. My employer has managed to snoop on my HTTPS traffic somehow, and I assume it has to do with the root CA installed on my workstation that no one outside the organization has ever heard of. I’m guessing the proxy generates a phony certificate for my target domain, signs it itself, and uses publicly acknowledged CAs on the other side. I can’t say I enjoy being MitM’d by my employer, but if there’s a use case that justifies it, the situation described in the article seems to be it.
Is this not actually what’s happening? Is it impossible to do given the resource constraints of rural Ugandan computing?
What you’ve described is right - it’s not uncommon for companies to snoop on HTTPS traffic by installing their own CA on employees’ machines. I don’t think it’s a good practice, but it does work in cases where companies insist on doing so, and there’s no reason it shouldn’t work for a caching server.
This is really a non-issue as far as I’m concerned.
Browsers (either standalone or with plugins) let users turn off images, turn off Javascript, override or ignore stylesheets, block web fonts, block video/flash, and block advertisements and tracking. Users can opt-out of almost any part of the web if it bothers them.
On top of that, nobody’s twisting anybody’s arm to visit “heavy” sites like CNN. If CNN loads too much crap, visit a lighter site. They probably won’t be as biased as CNN, either.
Nobody pays attention to these rants because at the end of the day they’re just some random people stating their arbitrary opinions. Rewind 10 or 15 or 20 years and Flash was killing the web, or Javascript, or CSS, or the img tag, or table based layouts, or whatever.
Rewind 10 or 15 or 20 years and Flash was killing the web, or Javascript, or CSS, or the img tag, or table based layouts, or whatever
Flash and table-based layouts really were, and to the extent that you still see them, are either hostile or opaque to people who require something like a screen reader to use a website. Abuse of Javascript or images excludes people with low-end hardware. Sure, you can disable these things, but it’s all too common that there is no functional fallback (apparently I can’t even vote or reply here without Javascript being on).
Are these things “killing the web” in the sense that the web is going to stop existing as a result? Of course not, but the fact that they don’t render the web totally unusable is not a valid defense of abuses of these practices.
I wouldn’t call any of those things “abuses”, though.
Maybe it all boils down to where the line is drawn between supported hardware and hardware too old to use on the modern web, and everybody will have different opinions. Should I still be able to browse the web on my old 100 MHz Pentium with 8 MB of RAM? I could in 1996…
Should I still be able to browse the web on my old 100 MHz Pentium with 8 MB of RAM?
To view similar information? Absolutely. If what I learn after viewing a web page hasn’t changed, then neither should the requirements to view it. If a 3D visualization helps me learn fluid dynamics, ok, bring it on, but if it’s page of Cicero quotes, let’s stick with the text, shall we?
I wouldn’t call any of those things “abuses”, though.
I think table based layouts are really pretty uncontroversially an abuse. The spec explicitly forbids it.
The rest are tradeoffs; they’re not wrong 100% of the time. If you wanted to make YouTube in 2005, presumably you had to use Flash, and people didn’t criticize that; it was the corporate website that required Flash for no apparent reason that drew fire. The question that needs to be asked is whether the cost is worth the benefit. The reason people like to call out news sites is that they haven’t really seen meaningfully new features in two decades (they’re still primarily textual content, presented with pretty similar style, maybe with images and hyperlinks: all things that 90s hardware could handle just fine), but somehow the basic experience requires 10? 20? 100 times the resources? What did we buy with all that bandwidth and CPU time? Nothing except user-hostile advertising, as far as I can tell.
If you wanted to make YouTube in 2005, presumably you had to use Flash, and people didn’t criticize that
At the time (ok, 2007, same era) I had a browser extension that let people view YouTube without Flash by swapping the Flash embed for a direct video embed. It was faster and cleaner than the Flash-based UI.
Maybe you would like this one https://github.com/thisdotvoid/youtube-classic-extension
I’d say text-as-images and text-as-Flash from the pre-webfont era are abuses too.
On top of that, nobody’s twisting anybody’s arm to visit “heavy” sites like CNN. If CNN loads too much crap, visit a lighter site.
Or just use http://lite.cnn.io
nobody’s twisting anybody’s arm to visit “heavy” sites like CNN
Exactly. It’s not a “web developers are making the web bloated” problem, it’s a “news organizations are desperate to make money and are convinced that personalized advertising and tons of statistics (Big Data!!) will help them” problem.
Lobsters is light, HN, MetaFilter, Reddit, GitHub, GitLab, personal sites/blogs, various wikis, forums, issue trackers, control panels… Most of the stuff I use is really not bloated.
If you’re reading general world news all day… stop :)
I’ve been reading and upvoting rants about website bloat for several years now (based on scores, so do many others), and yet every year it gets worse. It seems the opinion of users on this kind of site is simply insufficient to change how websites are built, which is a bit surprising since people who build websites are presumably part of the core audience.
I’m resigned at this point. The call to action tacked on to the end of the long list of complaints about modern web development practices feels profoundly empty. Even if every developer at The Hill, Politico, and CNN read and wholeheartedly agreed with the sentiment here, the people who actually make the relevant decisions won’t, and even if they did I expect they wouldn’t care. The politics of large organizations make it a lot easier to sell the idea of more advertising or some slick animated thing you can show off in a meeting than a shorter waterfall chart or “authenticity” (presumably the opposite of “bullshit” per the definition offered in this article). The CNNs of the world are going to continue to get worse; there’s no stopping it. You can opt out, or try to buy sufficiently good connectivity/hardware to mitigate it, but you’re not going to write blog posts that win over the hearts and minds of those who can actually do something about it.
What has failed is individuals resisting in isolation. What we haven’t tried is a unified movement.
Your second paragraph nails it. I bet most people here agree with these rants; but most of the people paying their salaries don’t care.
“We understand that working in our field is a privilege, not a right.” <- This is bullshit. Anyone at all has the right to program, and attempt to get other people to pay them for programming.
I suspect that by “working” they mean “being paid to work …”, not “attempting to get paid to work …”, and that is certainly not a right.
Careful, you are replacing one badly worded statement with another.
It is a right to attempt to get paid.
But unless you have satisfied a mutually agreed contract, it is not a right that your attempts will succeed.
I think it’s ironic that participation in the field is described as a privilege rather than a right but later we get
We must make room for people who are not like us to enter our field and succeed there
How on earth can everyone be obligated to be inclusive if no one has a right to be included?
Counterpoint: many people in the industry are “have been programming since elementary school” types. Having the resources for that (usually your own computer to play with, + sometimes access to classes and stuff) wasn’t a given.
Even those who go out of their way to do stuff like coding camps get shunned by the “truly passionate”.
It’s not exactly the original point but accepting people from different backgrounds is important I think.
(As an aside: this is also present in the “show us your GitHub profile” talk. Maybe you have kids and simply don’t have that sort of bandwidth. Or don’t care about making your own web framework?)
Is the pace of software development really that breakneck? My day job is front end web dev, which seems to be what people complain about the most, and in the last 5 years the only real piece of learning I had to do was setting aside a few days to pick up React. Sure, testing frameworks, build tools, and loading/bundling/compression strategies have come and gone, but it’s not like the changes have really brought anything new at the API level that I needed to worry about as a working programmer.
And despite popular opinion, web development isn’t the world. I can think of at least two programming languages I use that my parents would have used at the start of their careers. The average computer science or software development book on my bookshelf is older than me, as is much of the software I use on a daily basis (counting from initial release).
I think a lot of people work themselves into a frenzy over the continuous stream of new software tools/practices/whatever they see on HN or lobsters or wherever when the reality is the amount of “keeping up to date” you really have to do is pretty minimal.
Compare programming to, say, woodworking (another interest of mine, although I do very little woodworking). Comparing the “state of the art” woodworking tools from 50 years ago to today, there’s not much that is new. About the only thing I can think of that a woodworker from the late 60s wouldn’t believe is the CNC machines readily available. Even a woodworker from the 1700s would find the motorized tools familiar (if not outright wondrous, but there are hand versions of nearly everything one can find in a modern woodworking shop).
Meanwhile, the “state of the art” computers from the late 60s and today are incomparable. We went from single-digit MHz clock rates to single-digit GHz clock rates (a thousand-fold increase), from memory counted in kilobytes to tens of gigabytes (roughly a million-fold), and disk storage from perhaps hundreds of kilobytes to terabytes (several million-fold). The approach you take to a program when you have a 4 MHz CPU, 64K of RAM, and a 160K disk is vastly different from a 4 GHz CPU, 64 GB of RAM, and a terabyte of disk space (and in some cases, there are programs you can write now that would have been impossible back in the day).
I’m looking over the books that are nearby, and let’s see … I see a few CP/M books (good for historical context but little else these days), The Programmer’s Sourcebook for the PC which was maybe even relevant 15 years ago (and today about as useful as the CP/M books). Oh, there are a few books on the Amiga (brings back fond memories of 25 years ago), OS/2 1.0 (again, historically interesting for a potential rival to Unix back in the day) and a book on the VAX architecture (nice but no longer mainstream as it once was).
To some degree, yes, there is very little that is actually new in the computer industry. Over the past 34 years I’ve seen fads come and go and in one way, it’s the same thing all over again but in another way, it is (annoyingly) different. There are things we can do now that we couldn’t do then and in some ways, things have finally slowed down over the past decade (at least hardware wise—computers haven’t gotten significantly faster or even more capable really) but not everything. The only language I learned in college that I still use is C and even that is becoming more and more unpopular as time goes on (the other languages I learned in college were Fortran, Pascal, Lisp, Prolog and Ada; I can count on one finger the number of times I used those languages and still have a finger left over, but I’m not upset at learning them).
I think it comes down to this: if someone’s reading your code, they’re trying to fix a bug or otherwise trying to understand what it’s doing. Oddly, a single large file of spaghetti code, the antithesis of everything we as developers strive for, can often be easier to understand than a finely crafted object-oriented system. I find I would much rather trace through a single source file than sift through files and directories of the interfaces, abstract classes, and factories of the sort many architect nowadays. Maybe I have been in Java land for too long?
This is exactly the sentiment behind schlub. :)
Anyways, I think you nail it on the head: if I’m reading somebody’s code, I’m probably trying to fix something.
Leaving all of the guts out semi-neatly arranged and with obvious toolmarks (say, copy and pasted blocks, little comments saying what is up if nonobvious, straightforward language constructs instead of clever library usage) makes life a lot easier.
It’s kind of like working on old cars or industrial equipment: things are larger and messier, but they’re also built with humans in mind. A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch. This is similar to how new cars are all built with the heavy expectation that either robots assemble them or that parts will be thrown out as a unit instead of being repaired in situ.
You two must be incredibly skilled if you can wade through spaghetti code (at least the kind I have encountered in my admittedly meager experience) and prefer it to helper function calls. I very much prefer being able to consider a single small issue in isolation, which is what I tend to use helper functions for.
However, a middle ground does exist, namely using scoping blocks to separate out code that does a single step in a longer algorithm. It has some great advantages: it doesn’t pollute the available names in the surrounding function as badly, and if turned into an inline function can be invoked at different stages in the larger function if need be.
The best example of this I can think of is Jonathan Blow’s Jai language. It allows many incremental differences between “scope delimited block” and “full function”, including a block with arguments that can’t implicitly access variables outside of the block. It sounds like a great solution to both the difficulty of finding where a function is declared and the difficulty in thinking about an isolated task at a time.
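Most languages can at least approximate that middle ground with a local function: a named step whose inputs are passed explicitly. A quick Python sketch (names invented for illustration; unlike Jai’s stricter block form, Python won’t actually enforce the isolation):

```python
def normalize(scores):
    # A named step standing in for a "block with arguments". Python
    # would let clamp() close over normalize's locals implicitly, so
    # here the isolation is by convention: everything is passed in.
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    return [clamp(s, 0, 100) for s in scores]

assert normalize([-5, 50, 120]) == [0, 50, 100]
```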
It’s a skill that becomes easier as you do it, admittedly. When dealing with spaghetti, you only have to be as smart as the person who wrote it, which is usually not very smart :D.
As others have noted, where many fail is too much abstraction, too many layers of indirection. My all-time worst experience was 20 method calls deep to find where the code actually did something, and this was not including many meaningless branches that did nothing. I actually wrote them all down on that occasion as proof of the absurdity.
The other thing that kills you when working with others’ code is the functions/methods that don’t do what they’re named. I’ve personally wasted many hours debugging because I skipped over the function that mutated data it shouldn’t have, judging from its name. Pro tip: check everything.
Or you can record what lines of code are actually executed. I’ve done that for Lua to see what the code was doing (and using the results to guide some optimizations).
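In Python the same trick takes only a few lines with `sys.settrace` (a sketch of the idea, not the Lua tooling mentioned above):

```python
import sys

executed = set()

def tracer(frame, event, arg):
    # Record the line number of every "line" event in traced frames.
    if event == "line":
        executed.add(frame.f_lineno)
    return tracer

def classify(n):
    if n > 10:
        return "big"    # never reached for n=3, so its line is not recorded
    return "small"

sys.settrace(tracer)
classify(3)
sys.settrace(None)

# `executed` now holds exactly the line numbers that ran -- the kind of
# record described above, usable to guide optimization.
```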
Well, I wouldn’t say “incredibly skilled” so much as “stubborn and simple-minded”, at least in my case.
When doing debugging, it’s easiest to step through iterative changes in program state, right? Like, at the end of the day, there is no substitute for single-stepping through program logic and watching the state of memory. That will always get you the ground truth, regardless of assumptions (barring certain weird caching bugs, other weird stuff…).
Helper functions tend to obscure overall code flow since their point is abstraction. For organizing code, for extending things, abstraction is great. But the computer is just advancing a program counter, fiddling with memory or stack, and comparing and branching. When debugging (instead of developing), you need to mimic the computer and step through exactly what it’s doing, and so abstraction is actually a hindrance.
Additionally, people tend to do things like reuse abstractions across unrelated modules (say, for formatting a price or something), and while that is very handy, it does mean that a “fix” in one place can suddenly start breaking things elsewhere, or that instrumentation (ye olde printf debugging) can end up with a bunch of extra noise. One of the first things you see people do for fixes in the wild is duplicate the shared utility function, append a hack or a 2 or Fixed or Ex to the function name, and patch and use the new version in the code they’re fixing!
I do agree with you generally, and I don’t mean to imply we should compile everything into one gigantic source file (screw you, JS concatenators!).
I find debugging much easier with short functions than stepping through imperative code. If each function is just 3 lines that make sense in the domain, I can step through those and see which is returning the wrong value, and then I can drop frame and step into that function and repeat, and find the problem really quickly - the function decomposition I already have in my program is effectively doing my bisection for me. Longer functions make that workflow slower, and programming styles that break “drop frame” by modifying some hidden state mean I have to fall back to something much slower.
I absolutely agree with you that when debugging, it boils down to looking and seeing, step by step, what the problem is. I also wasn’t under the impression that you think that helper functions are unnecessary in every case, don’t worry.
However, when debugging, I still prefer helper functions. I think it’s that the name of the function will help me figure out what that code block is supposed to be doing, and then a fix should be more obvious because of that. It also allows narrowing down of an error into a smaller space; if your call to this helper doesn’t give you the right return, then the problem is in the helper, and you just reduced the possible amount of code that could be interacting to create the error; rinse and repeat until you get to the level that the actual problematic code is at.
Sure, a layer of indirection may kick you out of the current context of that function call and perhaps out of the relevant interacting section of the code, but being able to narrow down a problem into “this section of code that is pretty much isolated and is supposed to be performing something, but it’s not” helps me enormously to figure out issues. Of course, this only works if the helper functions are extremely granular, focused, and well named, all of which is infamously difficult to get right. C’est la vie.
Anyways, you can do that with a comment and a block to limit scope, which is why I think that Blow’s idea about adding more scoping features is a brilliant one.
On an unrelated note, the bug fixes where a particular entity is just copied and then a version number or what have you is appended hits way too close to home. I have to deal with that constantly. However, I am struggling to think of a situation where just patching the helper isn’t the correct thing to do. If a function is supposed to do something, and it’s not, why make a copy and fix it there? That makes no sense to me.
It’s a balance. At work, there’s a codebase where the main loop is already five function calls deep, and the actual guts, the code that does the actual work, is another ten function calls deep (and this isn’t Java! It’s C!). I’m serious. The developer loves to hide the implementation of the program from itself (“I’m not distracted by extraneous detail! My code is crystal clear!”). It makes it so much fun to figure out what happens exactly where.
A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch
I do quick patches in Haskell all the time.
I’ll add that one of the motivations for improved structure (e.g. functional programming) is to make it easier to do those patches. Especially anything bringing extra modularity or isolation of side effects.
I think it’s a case of OO in theory and OO as dogma. I’ve worked in fairly object oriented codebases where the class structure really was useful in understanding the code, classes had the responsibilities their names implied and those responsibilities pertained to the problem the total system was trying to solve (i.e. no abstract bean factories, no business or OSS effort has ever had a fundamental need for bean factories).
But of course the opposite scenario has been far more common in my experience, endless hierarchies of helpers, factories, delegates, and strategies, pretty much anything and everything to sweep the actual business logic of the program into some remote corner of the code base, wholly detached from its actual application in the system.
I’ve seen bad code with too many small functions and bad code with god functions. I agree that conventional wisdom (especially in the Java community) pushes people towards too many small functions at this point. By the way, John Carmack discusses this in an old email about functional programming stuff.
Another thought: tooling can affect style preferences. When I was doing a lot of Python, I noticed that I could sometimes tell whether someone used IntelliJ (an IDE) or a bare-bones text editor based on how they structured their code. IDE people tended (not an iron law by any means) towards more, smaller files, which I hypothesized was a result of being able to go to a definition more easily. Vim/Emacs people tended instead to lump things into a single file, probably because both editors make scrolling to lines so easy. Relating this back to Java: with everyone (a few exceptions aside) in Java land using heavyweight IDEs, and with Java requiring one class per file, it’s possible there’s a bias towards smaller files.
Yes, vim also makes it easy to look at different parts of the same buffer at the same time, which makes big files comfortable to use. And vice versa, many small files are manageable, but more cumbersome in vim.
I miss the functionality of looking at different parts of the same file in many IDEs.
Sometimes we break things apart to make them interchangeable, which can make the parts easier to reason about, but can make their role in the whole harder to grok, depending on what methods are used to wire them back together. The more magic in the re-assembly, the harder it will be to understand by looking at application source alone. Tooling can help make up for disconnects foisted on us in the name of flexibility or unit testing.
Sometimes we break things apart simply to name / document individual chunks of code, either because of their position in a longer ordered sequence of steps, or because they deal with a specific sub-set of domain or platform concerns. These breaks are really in response to the limitations of storing source in 1-dimensional strings with (at best) a single hierarchy of files as the organising principle. Ideally we would be able to view units of code in a collection either by their area-of-interest in the business domain (say, customer orders) or platform domain (database serialisation). But with a single hierarchy, and no first-class implementation of tagging or the like, we’re forced to choose one.
Storing our code in files is a vestige of the 20th century. There’s no good reason that code needs to be organized into text files in directories. What we need is a uniform API for exploring the code. Files in a directory hierarchy is merely one possible way to do this. It happens to be a very familiar and widespread one but by no means the only viable one. Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway. We could just store that on disk as a single structured binary file with a library for reading and modifying it.
Yes! There are so many more ways of analysis and presentation possible without the shackles of text files. To give a very simple example, I’d love to be able to substitute function calls with their bodies when looking at a given function - then repeat for the next level if it wasn’t enough etc. Or see the bodies of all the functions which call a given function in a single view, on demand, without jumping between files. Or even just reorder the set of functions I’m looking at. I haven’t encountered any tools that would let me do it.
Some things are possible to implement on top of text files, but I’m pretty sure it’s only a subset, and the implementation is needlessly complicated.
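To make the idea concrete, here’s a tiny sketch using Python’s stdlib `ast` module (the source string and names are just illustrative): the text file is parsed into a tree, and the familiar text form can be regenerated from the tree on demand, so the tree could just as well be the stored artifact.

```python
import ast

# a text "file" -- just one possible serialization of the program
src = "def area(r):\n    return 3.14159 * r * r\n"

# text -> tree: what every compiler does anyway
tree = ast.parse(src)
print(ast.dump(tree, indent=2))  # the structured form we could store directly

# tree -> text: the familiar view, rendered on demand (Python 3.9+)
print(ast.unparse(tree))
```

With the tree as the primary artifact, views like “all functions touching customer orders” become queries over structure rather than greps over strings.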
IIRC, the s-expr style that Lisp is written in was originally meant to be the AST-like form used internally. The original plan was to build a more sugared syntax (M-expressions) over it. But people got used to writing the s-exprs directly.
Exactly this: any binary representation would presumably be the AST in some form, which Lisp s-expressions are, serialized/deserialized to text.
It happens to be a very familiar and widespread one but by no means the only viable one.
Xml editors come to mind that provide a tree view of the data, as one possible alternative editor. I personally would not call this viable, certainly not desirable. Perhaps you have in mind other graphical programming environments; I haven’t found any (that I’ve tried) to be usable for real work. Maybe you have something specific in mind? Excel?
Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway
The resulting parse can depend on the environment in many languages. For example, the C preprocessor can generate vastly different code depending on how system-specific macros are defined. This is desirable behavior for os/system level programs. The point here is that in at least this case the source actually encodes several different programs, or versions of programs, not just one.
My experience with this notion that text is somehow not desirable for programs is colored by using visual environments like Alice, or trying to coerce gui builders to get the layout I want. Text really is easier than fighting arbitrary tools. Plus, any non-text representation would have to solve diffing and merging for version control. Tree diffing is a much harder problem than diffing text.
People who decry text would have much more credibility with me, if they addressed these types of issues.
That’s literally true! I work with some of the old code and things are really easy. There are lots of files, but they’re all divided up in such an easy way.
On the other hand, on a new project that is divided into lots of tiers with strict guidelines, it becomes hard for me to just find the line where a bug occurs.
I think “lisper” on the HN version of this article had a great summary:
“What’s really going on is that, in Lisp, code is a particular kind of data, specifically, it’s a tree rather than a string. Therefore, some (but not all) of the program’s structure is represented directly in the data structure in which it is represented, and that makes certain kinds of manipulations on code easier, and it makes other kinds of manipulations harder or even impossible. But (and this is the key point) the kinds of manipulations that are easier are the kind you actually want to do in general, and the kind that are harder or impossible are less useful. The reason for this is that the tree organizes the program into pieces that are (mostly) semantically meaningful, whereas representing the program as a string doesn’t. It’s the exact same phenomenon that makes it easier to manipulate HTML correctly using a DOM rather than with regular expressions.”
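The closing DOM-vs-regex analogy is easy to demonstrate (a sketch with Python’s stdlib; the snippet of markup is made up): the tree view answers a semantic question correctly, while the string view stumbles as soon as elements nest.

```python
import re
import xml.etree.ElementTree as ET

doc = "<ul><li><b>one</b></li><li>two</li></ul>"

# tree view: ask for the text of each <li>; nested markup is handled correctly
root = ET.fromstring(doc)
items = ["".join(li.itertext()) for li in root.iter("li")]
print(items)   # ['one', 'two']

# string view: a naive regex hands back raw markup, nested tags and all
naive = re.findall(r"<li>(.*?)</li>", doc)
print(naive)   # ['<b>one</b>', 'two']
```

The tree organizes the document into semantically meaningful pieces, so the query you actually want is the easy one; the string forces you to re-derive that structure ad hoc.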
I don’t think tree vs string is the difference. For all languages, it is a string of bytes on disk and a tree once parsed in memory. Lisp just has a closer correspondence between tree and string, which makes it cognitively easier. I don’t know where somebody would draw the line at which they would be considered “homoiconic”, equal.
I think the point at which the tree-literal syntax is the same as the language-proper syntax is a pretty good point to consider it equal. You can’t express arbitrary JavaScript programs in JSON, or any other C-family language’s code in its data-literal syntax. The Lisp family, however, uses the same syntax for representing data as it does for representing its programs.
Lisp just has a closer correspondence between tree and string, which makes it cognitively easier
Maybe not just cognitively but also in terms of programming. Many languages have complicated parsing and contexts where transformations aren’t as simple as an operation on a tree with regularity in its syntax and semantics.
Right. Von Neumann machine code is homoiconic, but I don’t think it exhibits many of the purported advantages of Lisp?
The ones I’ve seen are definitely harder than Lisp’s to work with. Now, it might be different if we’re talking about a Scheme CPU, esp. if microcoded or PALcoded. With SHard, it can even be built in Scheme. :)
I think there are plenty of fair criticisms of “tailored” search strategies but I’m not convinced they’re underperforming with respect to where older strategies would be today. I use duckduckgo about as often as google, it’s acceptable and in tandem with not being google that’s enough to make me use it, but it’s really only once in a blue moon where it manages to outperform google at anything.
This. I would gladly pay a one time fee to license software. If my organization can stand up a server for our product we should be able to put up another one for logging/analytics/whatever else gets shoehorned into being a third party service. It shouldn’t be rocket science to sell a piece of software with an install script, or heroku procfile, or whatever config looks like for AWS.
Hmm, I dunno. I mean, sure having a way to buy the software for those who self-host is great. But I would not always prefer it for a lot of non-core stuff.
I get this argument for something like New Relic, which is really actually expensive (like $100 per machine). But operations work is pretty annoying, and having to not manage it is pretty great.
We used to run a self-hosted Sentry instance. And it was basically fine! It worked as intended. But because it got real use, we needed to maintain it. So we moved over to their hosting and pay the $30/month to not deal with this. And get updates and all the goodness.
I have to admit, I was very much in agreement with the author while reading the article. I thought the quoted email about the examples being so distracting that nothing could be learned from the book was simply melodramatic. But on finishing the article I realized my politics align with the author’s on every point.
Based on the up votes it seems this article was well received. A challenge to readers who responded positively to this article: imagine Feuerstein had picked a different set of targets, say he derided victims of police brutality as criminals deserving what they got or accused unions of massive corruption and suggested they ought to be illegal. Would you still support his choice of examples under the same principle? I don’t find the examples nearly as cute when they don’t affirm my opinions, and would be tempted to say they don’t belong in a technical book.
Indeed. I also found the OP’s take on “Americans not wanting to talk politics” to be quite a bit off, at least for me. It’s not really that I don’t want to talk politics. I used to do it a lot. But I got sick of the consequences of voicing my opinions. My own particular views tend to disagree with just about everyone else’s, so it just winds up creating a ton of friction and I end up losing a ton of time to it. One day, I realized that I didn’t want to spend my life that way (I’m certainly not going to change anyone’s mind) so I just mostly shut up about it. Things have been much better.
If I had read the OP’s book, I don’t think I would have been so distracted as to be incapable of learning from them. But I certainly would be skeptical of any future work put out by the OP. The views themselves wouldn’t really bother me, even though I probably disagree with many of them in various ways. (If they did, I’d have a really hard time living in Massachusetts and working in Cambridge.) What would bother me is the fact that the author was so confident in their own opinions, that they would use them in a descriptive context as if they were facts.
N.B. I use the word “politics” here to loosely mean “the discussion of current events in the context of government behavior” or “a controversial topic that divides political party lines” or something like that. I don’t use in the sense of “literally any action, including inaction.” (The latter interpretation exposes a trivial contradiction in the claim that I “mostly shut up about politics.”)
Note that this was written in 2000, before a lot of the trends that have led people in Anglophone countries to be more vocally political started happening.
I don’t like unions and I do like guns and I’m very uninterested in being preached to about these things by tech people via a medium that doesn’t allow discussion. I don’t think I’d literally be unable to learn from reading the examples, I just think I would get a different book after a few of them or avoid this book if I knew about them going into things.
I’m inclined to upvote things like this that I think are likely to provoke interesting conversations though for the record.
via a medium that doesn’t allow discussion
Well put. I don’t object to writing that mixes up politics and technology, as such. But smuggling in factoids without bothering to set up an argument is just poor form. It cheapens or even precludes the debate. I don’t care what the issue is or what side you’re on: if you care enough to write about it, you should care enough to write directly, show evidence, and build an argument. Little potshots crammed into code examples don’t help make a case; it merely shows a casual disrespect for the reader.
My politics, too, align with those of the author.
imagine Feuerstein had picked a different set of targets, say he derided victims of police brutality as criminals deserving what they got […] Would you still support his choice of examples under the same principle?
Same as with when encountering any political speech: support everyone’s freedom to choose their political viewpoints and promote them [1], but only actually support politics that make the world better. Argue and rebut where you think it’s important to and/or effective. A.k.a. don’t ban the book, but that doesn’t mean you have to buy it or like it.
AFAICT that’s what most people who disagreed with the actual book did, too.
[1] With the usual dishonourable exception for politics intolerant of other viewpoints, or even entire peoples’ rights. I don’t advocate repressing their speech at every turn, simply because that is not the most effective way to suppress the school of thought, but a tolerant society must reserve that right to prevent the intolerant from abusing it and taking over. See the Paradox of tolerance.
support for Python 2.7 is removed
That’s a shame. I do definitely appreciate Django’s relatively long LTS support timeframes and the work that has to go into that but it’s unfortunate I’ll have to make a decision between the py2 line and Django in 2020.
I’m sorry but the only shame is that it took so long. Supporting obsolete versions is expensive, and the decision to move off Python 2 has been made, what, 10 years ago? I don’t know why people insisting on supporting Python 2 expect the entire community to pay the price. Still.
I don’t know why people insisting on supporting Python 2
At least in my case it’s because I don’t view Python 3 as a net gain with respect to Py2. People are still starting projects in Py2 despite library support for Py3 having been solid for years, and migration costs are pretty low (you can migrate a fairly large codebase in a weekend). It seems like there’s pretty good evidence that the reason Py3 adoption is slow is that people don’t particularly want to work in Py3, so it’s natural that major libraries dropping support for Py2 isn’t happy news: it forces us to use something we otherwise wouldn’t.
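For what it’s worth, the bulk of a typical 2-to-3 migration really is mechanical; a few of the usual changes look like this (illustrative, running under Python 3):

```python
# py2: print "total:", 42        -- print was a statement
print("total:", 42)              # py3: print is a function

# py2: 7 / 2 == 3                -- integer division by default
assert 7 / 2 == 3.5              # py3: / is true division
assert 7 // 2 == 3               # floor division is explicit

# py2: d.iteritems()             -- gone in py3
d = {"a": 1}
for key, value in d.items():     # items() returns a view
    assert key == "a" and value == 1
```

Tools like 2to3 automate most of these rewrites; the genuinely hard part of a migration is usually bytes-vs-text handling, which is not shown here.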
I disagree with the idea that there’s anything technological that distinguishes BBSs from the modern social media landscape. BBSs run on fundamentally the same model as web forums, which are, collectively, faring poorly in the Facebook era. Forums and BBSs both are fundamentally centralized (there’s the possibility for federation, but in practice you don’t really see it), pseudonymous, and cheap to start or access. The last two attributes, which aren’t shared with Facebook, as the last ~10 years have shown, aren’t compelling enough to draw a significant user base away from FB. While there are a lot of things that go into a successful BBS, users are the table stakes.
A successful modern BBS doesn’t need more software, it needs users that see value in it.
I think they have other things in common as well. They are functional, rather than fancy. Facebook got its hype without ever looking modern. Some forums had the same thing happen to them. Facebook alternatives largely focused on areas other than being practical.
Similar things could be said about Gmail, Google, eBay, Craigslist and Amazon as well as many chat platforms.
I don’t think you are wrong at all; in fact I agree. I have my doubts that the reason for Facebook being successful is real names. However, I do agree that the lack of some form of centralization, or the lack of really simple federation, is a reason.
Another thing is that BBSs and forums are rather different from Facebook when it comes to the intent and structure of users sending content. Facebook’s only structure is comments and time. There is nothing like “off topic”, really. There is also no moderator watching over things outside of their Groups feature. Also, forums and BBSs, while in general allowing for such things, don’t really make it a normal action to decide on what you read (follow) or to plan events. Facebook and many other social networking sites focus on human interaction/communication over everything, while BBSs and forums focus on topics in a strong hierarchy. You have a forum/BBS about the BSDs, then you have subsections about the individual BSDs, each having something about the base system, applications, hardware, programming, news; you have an Off Topic general talk section, something about meetups, something about jobs; and in each of those you have threads on more specific topics/problems. You have search functionality to navigate this kind of hierarchy, and so on.
I think Facebook benefits from it basically being a completely open way to distract yourself. People write about random happenings, and people expect to read rather random parts.
It will always show you more, if you want, because that’s what those networks aim for (keeping users busy on their website). This is not what your average forum aims for. You usually arrive looking for information and write to ask or inform.
Of course that’s not true for every forum out there, but even though it might not really be a big technical difference, it certainly is a difference in use case. It is probably also why many alternatives didn’t get there. They often were topic-heavy, sometimes even by accident. For example, alternatives that came up for privacy reasons had privacy as a very dominant topic. There was a big fraction of Linux/BSD/… users, or at least very tech savvy people. So random strolling there never was really random.
At least in the Python case (and I suspect, but don’t know, in the Apache case) there’s a “philosophical” reason for the design. Sure it would take a lot of elbow grease to get rid of the GIL in CPython but as far as I know, no one has ever argued it’s impossible, the standard line is that there’s no particularly good reason to. Python’s answer to parallelism is that it’s an OS level problem, which is a much more reasonable answer than people seem to give it credit for. OS designers have been grappling with the problem for longer than most (any?) language designers. It’s not “rot” to draw different system boundaries than competitors.
I can’t think of any mainstream programming language that attempts to wrest disk IO away from the OS, so I don’t get why it’s seen as a mortal flaw when Python does the same thing with multi-core utilization.
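A minimal sketch of what “parallelism is an OS-level problem” looks like in practice: the stdlib `multiprocessing` module sidesteps the GIL by giving each worker its own process, and thus its own interpreter (and its own GIL). The chunk boundaries here are arbitrary.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # CPU-bound work runs in a separate process, outside this interpreter's GIL
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    assert total == sum(range(1_000_000))
    print("total:", total)
```

The OS scheduler, not the language runtime, spreads the four workers across cores, which is exactly the boundary the comment above describes Python drawing.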
the standard line is that there’s no particularly good reason to
Without being rude, this is false. The problems with the GIL are well known, and they’re not what you say. In fact there’s yet another attempt by python-dev@ to get rid of the GIL as of the last PyCon (the Gilectomy).
It’s not possible to get rid of the GIL while preserving single-threaded performance and compatibility with existing C extensions. If you relax one of those assumptions, you can get rid of the GIL, but then there would be no point.
The original post is correct in the sense that the GIL is an architectural decision, made at the beginning of Python’s life, that is hard to change. There have been multiple, serious attempts to get rid of the GIL over the last 25 years.
Cool visualization. If like me you don’t want to dig through a huge image to find your favorite distro you can use ctrl/cmd+F since it’s an SVG, which I greatly appreciated.