I’ve been looking for a visual programming language for a while. There have been several attempts, but like most languages, they never went mainstream. The ones that are very effective and widely used are Simulink and LabVIEW, both of which have their core applications in signal processing and control systems.
What drew me to Luna was the promise of going smoothly (and reversibly) between textual and visual representations. So I went and gave Luna a spin.
In summary, after my 30-minute experiment with it, I see this as an alpha release with the interesting concept that you can go back and forth between visual and text-based coding, but the language itself is kind of underwhelming.
I would have been very excited if this kind of effort had been put into a similar visual paradigm for Python.
As much as your overall impression of the alpha quality of the whole “development environment” is correct, I wouldn’t be so quick to assume that this means the language itself is underwhelming. And I don’t see any concrete arguments or critique of Luna as a language in your post!
Personally, I see Luna currently as an “early adopters”-stage technology, a.k.a. “bleeding edge”. The presence of the “bleeding” adjective in this phrase is telling. I totally admit it’s not “production ready”, but I strongly believe it’s noteworthy and has a future, and I’m very excited to already have access to it at this early phase of development. (More specifically, I personally believe it will be revolutionary.)
Hi, that’s because nothing in the language concepts stood out. Could you note what the distinguishing characteristics are? Thanks
Ok, I think I better understand your message now.
As far as I understand, the duality is the main “distinguishing characteristic”. Other than that, I believe the language (in its textual form) is indeed not claimed to be innovative; but I don’t really know a lot about its design, and haven’t seen any articles, so I’m not sure, and I’m not a PL scientist/theorist. I only know that it’s statically typed. And AFAIU the authors aim for readability.
However, as for “underwhelming”, were you expecting something special that you then missed?
Hi,
I generally don’t see value in developing new languages for their own sake. This is of course separate from the pleasure it gives the developer.
From a glance at the language I wondered if it would have been more impactful to take say Haskell or a constrained subset of Racket or Python and build a tool around it that allowed reversible visual/textual programming.
My understanding is that the authors believed that the two representations must be developed “in concert”, so that each of them would make sense when viewed separately, and for the “mirror editing” (graphical vs. textual) to be feasible. Also, from what I heard from them, I believe they’re big fans of Haskell personally (that’s what they used as implementation language), but wanted the Luna textual language to be more approachable — aiming for something resembling Python on surface and in ergonomics, but much more Haskell-y (or at least FP) in spirit and semantics. They seemed especially fond of the semantics and appearance of the “dot operator”.
So, I believe in their view creating a new language (on the textual front) was not “for its own sake”, but rather the only feasible way, when the goal was to have the feature of “duality”. So for me, looking at the (textual) language purely separately, ignoring the feature of duality, doesn’t make much sense. Though OTOH, now I’m starting to understand such a view is at all possible, especially if someone feels not interested in the visual part.
Hi, Thank you for the detailed responses. I would have thought Haskell’s purity and strong type system would be especially suited to the visual paradigm being pursued here.
Research gate and academia.edu have similar aims and are closed source, I think. Both have had trouble keeping the lights on.
Academia and research gate are more like Facebook for academics where you post your papers instead of cute pictures of your cat. I don’t think there is much discussion about the papers there.
Source: I have an account on both websites.
I would agree on this. I’ve been using these services for a long time and I don’t remember any case in RG that I had a discussion about a paper. In my field, it’s always experimental questions. Not the papers.
I am a maths researcher at the University of Cologne and addressed this in a thesis I wrote in 2016. See chapter 3, especially the first part of section 3.1.
Dividing by zero is totally well defined for the projectively extended real numbers (only one unsigned infinity, inf), but the argument for it not working in the usual extended real numbers (+-inf) is not based on field theory; it is of an infinitesimal nature, given that you can approach a zero division both from below and from above and get either +inf or -inf equally likely.
Defining 1/0=0 not only breaks this infinitesimal form, it’s also radically counterintuitive given how the values behave when you approach the division from small numbers, e.g. 1/10, 1/1, 1/0.1, 1/0.001…
lim x->0 1/x = 0 makes no sense and is wrong in terms of limits.
See the thesis where I proved a/0=inf to be well-defined for a!=0.
tl;dr: There’s more to this than satisfying the field conditions. If you redefine division, this has consequences on higher levels, in this case most prominently in infinitesimal analysis.
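The one-sided limits spell out why no single signed value works; in the affine (+-inf) extension:

```latex
\lim_{x \to 0^{+}} \frac{1}{x} = +\infty,
\qquad
\lim_{x \to 0^{-}} \frac{1}{x} = -\infty .
```

In the +-inf extension the two-sided limit therefore does not exist, while on the projective line both one-sided limits land on the single unsigned inf, which is what makes a/0 = inf (for a != 0) consistent there. The convention 1/0 = 0 matches neither picture.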
I used to be a maths researcher, and would just like to point out that some of the people who define division by zero to mean infinity do it because they’re more interested in the geometric properties of the spaces that functions are defined on than the functions themselves. This is the reason for the Riemann sphere in complex analysis, where geometers really like compact spaces more than noncompact ones, so they’re fine with throwing away the field property of the complex numbers. The moment any of them need to compute things, however, they pick local coordinates where division by zero doesn’t happen and use the normal tools of analysis.
Thanks for laying this out and pointing out the issue with +/- Inf
Could you summarize here why +Inf is a good choice? As a practical man I approach this from the limit standpoint - usually when I end up with a situation like this it’s because the correct answer is +/- Inf, and it depends on the context which one it should be. Here context means on which side of zero the history of my denominator was.
The issue is that the function 1/x has a discontinuity at 0. I was taught that this means 1/0 is “undefined”. IMO in code this means throw an exception.
In practical terms I end up adding a tiny number to the denominator (e.g. 1e-10) and continuing, but that implicitly means I’m biased to the positive side of line.
I think Pony’s approach is flat out wrong.
It is not +inf, but inf. For the projectively extended real numbers, we only extend the set with one infinite element which has no sign. Take a look at page 18 of the thesis which includes an illustration of this. Rather than having a number line we have a number circle.
Dividing by zero, the direction from which we approach the denominator does not matter, even if we oscillate around zero, given that it all ends up in one single point of infinity. We really don’t limit ourselves here with that, as we can express a limit to +inf or -inf in the traditional real number extension by the direction from which we approach inf in the projectively extended real numbers (see remark 3.5 on page 19).
1/x is discontinuous at 0, this is true, but we can always look at limits. :) I am also a practical man and hope this relatively formal way I used to describe it did not distract from the relatively simple idea behind this.
Pony’s approach is reasonable within field theory, but it’s not really useful when almost the entire analytical edifice on top of it collapses on your head. NaN was invented for a reason, and given that IEEE floating-point numbers use the traditional +-inf extension, Pony should just return the indeterminate form on division by zero.
NaN only exists for floating point, not integers. If you want to use NaN or something like it for integers, you will need to box all integer numbers and take a large performance hit.
Just curious, but why isn’t 1/0=1? Would 1/0=Inf not require that infinity exists between 0 and 1?
One day we will type in a script and the computer will create a movie for us. That day is not today.
I…that day might be sooner than we think! In fact, it could already be possible if we use this as a primitive.
Each description is a frame, or maybe better, a “section” of a scene. Then these are interpolated to create scenes transforming from one to another.
I’m trying to convince my workplace to get rid of whiteboarding interviews, does anyone know if there are resources for ideas of alternatives? Anyone have a creative non-whiteboarding interview they’d like to share?
The best that I’ve found is to just ask them to explain some tech that’s listed on their resume. You’ll really quickly be able to tell if it’s something they understand or not.
My team does basic networking related stuff and my first question for anyone that lists experience with network protocols is to ask them to explain the difference between TCP and UDP. A surprising number of people really flounder on that despite listing 5+ years of implementing network protocols.
This is what I’ve done too. Every developer I’ve ever interviewed, we kept the conversation to 30min-1hr and very conversational. A few questions about, say, Angular if it was listed on their resume, but not questions without any context. It would usually be like: “so what projects are you working on right now? Oh, interesting, how are you solving state management?” etc. Then I could relate that to a project we currently had at work so they could get a sense of what the work would be like. The rapid-fire technical questions I’ve found are quite off-putting to candidates (and off-putting to me when I’ve been asked them like that).
As a side note, any company that interviews me in this conversational style (a conversation like a real human being) automatically gets pushed to the top of my list.
Seconded. Soft interviewing can go a long way. “You put Ada and Assembler on your CV? Oh, you just read about Ada once and you can’t remember which architecture you wrote your assembly for?”
I often flunk questions like that on things I know. This is because a question like that comes without context. If such a problem comes up when I’m building something, I have the context and then I remember.
I don’t think any networking specialist would not know the difference between TCP and UDP, though. That sounds like a pretty clear case of someone embellishing their CV.
So if you can’t whiteboard and you can’t talk about your experience, what options are left? Crystal ball?
I like work examples and open-ended coding challenges: here’s a problem, work on it when you like, how you like, come back in a week and let’s discuss the solution. We’ve crafted the problem to match our domain of work.
In an interview I also look out for signs of hostility on the part of the interviewer, suggesting that may not be a good place for me to work.
A sample of actual work expected of the prospective employee is fair. There are pros and cons to whether it should be given ahead of time or only shown there, but I lean towards giving it out in advance of the interview and having the candidate talk it through.
Note that this can be a hard sell, as it requires humility on the part of the individual and the institution. If your organization supports an e-commerce platform, you probably don’t get to quiz people on quicksort’s worst-case algorithmic complexity.
I certainly don’t have code just sitting around I could call a sample of actual work. The software I write for myself isn’t written in the way I’d write software for someone else. I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun. The code I’ve written for work is the intellectual and physical property of my previous employers, and I couldn’t present a sample even if I had access to it, which I don’t.
Yup, the code I write for myself is either 1) something quick and ugly just to solve a problem or 2) me learning a new language or API. The latter is usually a bunch of basic exercises. Neither really shows my skills in a meaningful way. Maybe I shouldn’t just throw things on GitHub for the hell of it.
Oh, I think you misinterpreted me. I want the employer to give the candidate some sample work to do ahead of time, and then talk through it in person.
As you said, unfortunately, the portfolio approach is more difficult for many people.
I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun.
Perhaps in the future we will see people taking on side projects specifically in order to get the attention of prospective employers.
I recently went through a week of interviewing as the conclusion of the Triplebyte process, and I ended up enjoying 3 of the 4 interviews. There were going to be 5, but there was a scheduling issue on the company’s part. The one I didn’t enjoy involved white board coding. I’ll tell you about the other three.
To put all of this into perspective, I’m a junior engineer with no experience outside of internships, which I imagine puts me into the “relatively easy to interview” bucket, but maybe that’s just my perception.
The first one actually involved no coding whatsoever, which surprised me going in. Of the three technical interviews, two were systems design questions. Structured well, I enjoy these types of questions. Start with the high level description of what’s to be accomplished, come up with the initial design as if there was no load or tricky features to worry about, then add stresses to the problem. Higher volume. New features. New requirements. Dive into the parts that you understand well, talk about how you’d find the right answer for areas you don’t understand as deeply. The other question was a coding design question, centered around data structures and algorithms you’d use to implement a complex, non-distributed application.
The other two companies each had a design question as well, but each also included two coding questions. One company had a laptop prepared for me to use to code up a solution to the problem, and the other had me bring my own computer to solve the questions. In each case, the problem was solvable in an hour, including tests, but getting it to the point of being fully production ready wasn’t feasible, so there was room to stretch.
By the time I got to the fourth company and actually had to write code with a marker on a whiteboard I was shocked at how uncomfortable it felt in comparison. One of my interviews was pretty hostile, which didn’t help at all, but still, there are many, far better alternatives.
I’m a little surprised that they asked you systems design questions, since I’ve been generally advised not to do that to people with little experience. But it sounds like you enjoyed those?
There are extensive resources to help with the evangelism side of things.
This is an interesting device in the era of smart phones. This is aimed at college students for particular exams. Are they not allowed to have their smart phones in these exams? A smart phone app selling for $5 would have been much more cost effective. Instead of using a phone that has already been paid for, now they have to pay for specialized hardware that costs $100 - more than some smart phones.
Are they not allowed to have their smart phones in these exams?
Generally they aren’t, as the networking would make it very easy to cheat.
In my experience, the good calculator apps on phones are simply emulators of physical calculators. There’s always wolfram alpha, but I haven’t seen a good native calculator app.
That being said, I enjoy using physical calculators much better than emulators. There’s something about a device which is designed for one purpose and doesn’t have to make compromises…
You aren’t forced to use the new C++ features. You can continue to write as you have always written. I think the core complaint is that OTHER people are now writing C++ that looks like - HORROR! - Python. If this were the 1980s and the only way to learn C++ was to pore over a book from end to end, OK. But it’s the 21st century! We have the internet! There are online references with example code snippets for each C++ concept and standard library class. A quick web search will get you any concept that you like.
Taking the example used in the article, I LOVE the range-based for loops. They mean less text to read. Don’t like auto? Fine, use explicit typing. A modern IDE will still do code intelligence for you on auto variables, don’t forget, and the compiler, of course, will keep you on the straight and narrow. You may worry about unfortunate implicit conversions, and then, yes, you should be explicit.
I’m not convinced all this griping about the size of C++ and all these “modern” features is not a bit of conservatism run amok. Especially with the new memory features, which allow us to avoid bare pointers.
Sorry for my wall of text. Just a long time C++ fan here who’s a big fan of the new standard.
A side note on the lego pictures
The new C++ features are excellent. The problem is the legacy of bad defaults and backward compatibility at all costs. This is probably the right decision overall, but it makes for an uglier language. I would also point out that you do pay for features that you do not use. Why? Well, they might be used by code that you call, triggering weird behaviour (e.g. const casts), and they make tooling considerably harder to build (see Clang’s code complexity).
I have three comments:
I think it’s natural for any language to eventually buckle under the weight of Amdahl’s law, which in this case is making common idioms faster to write and recognize.
The author seems to simultaneously criticize modern C++ for having too many ways to do something (as in the vector insert example), but also criticizes modern C++ for attempting to make idioms that guide you towards a few ways to do something. In that sense, I’m not sure which he’s arguing for/against.
I agree having too many syntactical features/sugars can create cognitive overhead (one of my pain points with learning Haskell), and I assume that’s why the community has rallied behind guidelines like: https://github.com/isocpp/CppCoreGuidelines
- There are scoping blocks where you can tell the compiler not to optimize things, e.g. #pragma optimize or function attributes.

#pragma optimize or function attributes

Though note that as of 7.1, GCC’s documentation describes __attribute__((optimize(...))) as “[to] be used for debugging purposes only…not suitable in production code” (and given that the corresponding pragma is described in terms of the attribute, the same would presumably apply to it as well).
I really enjoy these articles about networked games from a time when both machines and networks were quite restricted. Thanks @nickpsecurity.
I loved the left-field solution they came up with - run the game simulation on all the computers with the same inputs - and how they handled differing computer performance.
–
Cheating to reveal information locally was still possible, but these few leaks were relatively easy to secure in subsequent patches and revisions.
This part I’m not convinced about, but possibly this was true because the user’s computer was so saturated. I bet with a powerful enough machine the user could peek into the running state of the simulation and expose internal details of the other player, basically lifting the fog of war.
–
A deer slightly out of alignment when the random map was created would forage slightly differently – and minutes later a villager would path a tiny bit off, or miss with his spear and take home no meat.
Also known as the butterfly effect (No, not the movie). But I can’t understand how this would happen with identical initial conditions unless we are talking different rounding errors. I wonder if you could use intermittent (expensive) sync frames to sync pairs of simulations during lulls in the action.
Another article along these lines that I like relates to Descent .. Oh No! That site has gone away :(. Here is an archived copy
It was extra interesting to me given I had a 28.8Kbps line with AOLHell. Enjoyed every song I got off that line given how long they took.
As far as cheating goes, I do know many games keep global state on the server instead of the client to avoid that kind of cheating. I know WoW in particular had bots that peek into memory. I found them when designing a semi-automated gold-farming operation. Never did it: just toying with how it would work technically, financially, etc.
I used to love Descent. Thanks for article. I’ll read it tonight.
@anishathalye did you run into Beamer by any chance? It was the hot thing a decade ago I think. There is an extension for it for posters. I see that you have :)
Yeah, beamer / beamerposter is awesome! I just didn’t like the way the default themes / existing third-party themes looked, so I made my own.
I use my laptop keyboard (13” macbook). I’m probably not as intensive a hacker as the rest of you because that’s what I’ve used for over a decade now (just have changed macs). I tried out “DasKeyboard” and found it annoyingly loud. Plus, the temptation to use it to whack someone over a “tabs/spaces” debate would be dangerously high. (edit I see @alexkorban and I make a team :) )
You must have WRISTS OF STEEL.
Those Apple laptop keyboards, at least the newer ones, are the squishiest awfullest (IMO :) key feel EVER in the history of keyboards.
Only keyboard that eclipses them is the membrane keyboard of the Atari 400 (Which I blame for giving me the propensity to POUND THE FRACK out of the keys :)
Folks, I recall reading somewhere that OCaml has something like a GIL and threaded/paralleled apps are a bit of an issue. Is this still true? Thanks!
Yes, there is a GIL, and parallelism is currently doable with C bindings as in Python, but there is ongoing work for multicore (and algebraic effects) support: https://discuss.ocaml.org/t/ocaml-multicore-report-on-a-june-2018-development-meeting-in-paris/2202
The work is still ongoing, and went through some iterations in the past few years, but the first bits should start landing by the end of the year
That’s really great to hear, that things will be moving along in such short order. Do you have any good introductions to the topic of algebraic effects, beyond what I could get by searching on my own?
I would recommend http://okmij.org/ftp/Haskell/extensible/index.html
And more specifically for OCaml:
Although OCaml has ignored multicore for a long time, the Standard ML community keeps doing interesting projects in that space. MultiMLton is one example.
Although the GIL seems like a huge limitation, I’ve actually been quite impressed by how minor it is for most tasks (unless you require huge throughput).
As one insignificant user of this language, please stop adding these tiny edge case syntax variations and do something about performance. But I am one small insignificant user …
This is exactly the attitude that leads to maintainer burnout.
Do realize this:
(None of this is aimed at you personally, I don’t know who you are. I’m dissecting an attitude that you’ve voiced, it’s just all too common.)
Python is not a product, and you’re not a paying customer, you don’t get to say “do this instead of that” because none of the volunteer maintainers owes you to produce a language for you. Just walking by and telling people what to do with their project is at the very least impolite.
I agree with the general direction of your post, but Python is a product and it is marketed to people, through the foundation and advocacy. It’s not a commercial product (though, given the widespread industry usage, you could argue it somewhat is). It’s reasonable of users to form expectations.
Where it goes wrong is when individual users claim that this also means that they need to be consulted or their consultation will steer the project to the better. http://www.ftrain.com/wwic.html has an interesting investigation of that.
Where it goes wrong is when users claim that this also means that they need to be consulted or their consultation will steer the project to the better.
Wait, who is the product being built for, if not the user? You can say I am not a significant user, so my opinion is not important, as opposed to say Google which drove Python development for a while before they focused on other things, but as a collective, users’ opinions should matter. Otherwise, it’s just a hobby.
Sorry, I clarified the post: “individual users”. There must be a consultation process and some way of participation. RFCs or PEPs provide that.
Yet, what we regularly see is people claiming how much better the product would be if we listened to them (that one person we never met). Or, alternatively, people who just don’t want to accept a loss in a long-running debate.
I don’t know if that helps clarifying, it’s a topic for huge articles.
I often find that what people end up focusing on - like this PEP - is bikeshedding. It’s what folks can have an opinion on after not enough sleep, a zillion other things to do, and not enough in-depth knowledge. Heck, I could have an opinion on it. As opposed to hard problems like performance, where I would not know where to start, much less contribute any code, but which would actually help me and, I suspect, many other folks, who are, with some sighing, migrating their code to Julia, or, like me, gnashing their teeth at the ugliness of Cython.
Yeah, it’s that kind of thing. I’ll take a harsh but well-structured opinion any time, and those people are extremely important. What annoys me is people following a tweet-sized mantra to the end, very much showing along the way that they have not looked at what is involved or who would benefit, or not knowing when to let go of a debate.
Adding syntax variations is not done at the expense of performance, different volunteers are working on what’s more interesting to them.
Regrettably, a lot of languages and ecosystems suffer greatly from the incoherence that this sort of permissive attitude creates.
Software is just as much about what gets left out as what gets put in, and just because Jane Smith and John Doe have a pet feature they are excited about doesn’t mean they should automatically be embraced when there are more important things on fire.
the incoherence that this sort of permissive attitude creates
The Haskell community would’ve just thrown PEP 572 behind {-# LANGUAGE Colonoscopy #-} and been done with it.
Sure, this doesn’t get us out of jail free with regard to incoherence, but it kicks down the problem from the language to the projects that choose to opt-in.
I find it hard to see this as a good thing. For me, it mostly highlights why Haskell is a one-implementation language… er, 2 ^ 227 languages, if ghc --supported-extensions | wc -l is to be taken literally. Of course, some of those extensions are much more popular than others, but it really slows down someone trying to learn “real world” Haskell by reading library code.
Of course, some of those extensions are much more popular than others
Yeah, this is a pretty interesting question! I threw some plots together that might help explore it, but it’s not super conclusive. As with most things here, I think a lot of this boils down to personal preference. Have a look:
https://gist.github.com/atondwal/ee869b951b5cf9b6653f7deda0b7dbd8
Yes. Exactly this. One of the things I value about Python is its syntactic clarity. It is the most decidedly un-clever programming language I’ve yet to encounter.
It is that way at the expense of performance, syntactic compactness, and probably some powerful features that could make me levitate and fly through the air unaided if I learned them, but I build infrastructure and day in, day out, Python gets me there secure in the knowledge that I can pick up anyone’s code and at the VERY LEAST understand what the language is doing 99% of the time.
I find that “people working on what interests them”, as opposed to taking a systematic survey of what use cases are most needed and prioritizing those, is a hard problem in software projects, and I find it curious that people think this is not a problem to be solved for open source projects that are not single-writer/single-user hobby projects.
Python is interesting because it forms core infrastructure for many companies, so presumably they would be working on issues related to real use cases. Projects like numpy and Cython are examples of how people see an important need (performance) and go outside the official language to get something done.
“If you want something to happen in an open source project, volunteer to do it.” is also one of those hostile attitudes that I find curious. In a company with a paid product of course that attitude won’t fly, but I suspect that if an open source project had that attitude as a default, it would gradually lose users to a more responsive one.
As an example, I want to use this response from a library author as an example of a positive response that I value. This is a library I use often for a hobby. I raised an issue and the author put it in the backlog after understanding the use case. They may not get to it immediately. They may not get to it ever based on prioritization, but they listened and put it on the list.
Oddly enough, I see this kind of decent behavior more in the smaller projects (where I would not expect it) than in the larger ones. I think the larger ones with multiple vendors contributing turn into a “pay to play” situation. I don’t know if this is the ideal of open source, but it is an understandable outcome. I do wish the hostility would decrease though.
Performance has never been a priority for Python, and this probably won’t change because, as you said, there are alternatives if you want Python’s syntax with performance. Also, its interoperability with C is okay-ish, which means that the small niche of Python users who use it for performance-critical operations not already covered by NumPy, Numba, and so on will always be free to go that extra mile to optimize their code without much trouble compared to stuff like JNI.
If you want raw performance, stick to C/C++ or Rust.
I also observe the same tendency of smaller projects being more responsive, but I think the issue is scale, not “pay to play”. Big projects get so much more issue reports but their “customer services” are not proportionally large, so I think big projects actually have less resource per issue.
please stop adding these tiny edge case syntax variations and do something about performance.
There’s a better forum, and approach, to raise this point.
I guess you are saying my grass roots campaign to displace “Should Python have :=” with “gradual typing leading to improved performance” as a higher priority in the Python world is failing here. I guess you are right :)
Have you tried Pypy? Have you tried running your code through Cython?
Have you read any of the zillion and one articles on improving your Python’s performance?
If the answer to any of these is “no” then IMO you lose the right to kvetch about Python’s performance.
And if Python really isn’t performant enough for you, why not use a language that’s closer to the metal like Rust or Go or C/C++?
Yes to all of the above. But I’m not understanding where all the personal hostility is coming from. Apparently, holding the opinion that “Should := be part of Python?” is much less important than “Let’s put our energies towards getting rid of the GIL and creating a kickass implementation that rivals C++” raises hackles. I am amused and entertained, but still puzzled at all the energy.
There was annoyance in my tone, and that’s because I’m a Python fan, and listening to people kvetch endlessly about how Python should be something it isn’t gets Ooooold when you’ve been listening to it for year upon year.
I’d argue that in order to achieve perf that rivals C++ Python would need to become something it’s not. I’d argue that if you need C++ perf you should use C++ or better Rust. Python operates at a very high level of abstraction which incurs some performance penalties. Full stop.
This is an interesting, and puzzling, attitude.
One of the fun things about Cython was watching how the generated C++ code approaches “bare metal” as you add more and more type hints. It’s not clear at all to me why Python cannot become something like Typed Racket, or LISP with types (I forget what that is called), which elegantly sheds dynamism and gets closer to the metal the more type information it gets.
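The Cython effect described above can be sketched as follows. This is a made-up toy kernel (`dot` is not from the thread): the plain-Python version below actually runs, and the commented-out variant shows the kind of Cython type hints that let the generated C code drop dynamic dispatch.

```python
# Plain-Python numeric kernel: every +, *, and loop iteration goes through
# dynamic dispatch on boxed Python floats.
def dot(xs, ys):
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

# The equivalent Cython version (not runnable as plain Python) adds C-level
# types, so the generated code can work on raw doubles instead:
#
#   def dot(double[:] xs, double[:] ys):
#       cdef double total = 0.0
#       cdef Py_ssize_t i
#       for i in range(xs.shape[0]):
#           total += xs[i] * ys[i]
#       return total
#
# Each added type hint sheds another layer of boxing/unboxing, which is the
# "gets closer to the metal" effect described above.

print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0
```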
Haskell is a high level language that compiles down to very efficient code (barring laziness and thunks and so on).
Yes, I find this defense of the slowness of Python (not just you but by all commentators here) and the articulation that I, as one simple, humble user, should just shut up and go away kind of interesting.
I suspect that it is a biased sample, based on who visits this post after seeing the words “Guido van Rossum”
My hypothesis is that people who want performance are a minority among Python users. I contributed to both PyPy and Pyston. Most Python users don’t seem interested in either.
For me that has been the most insightful comment here. I guess the vast majority of users employ it as glue code for fast components, or many other things that don’t need performance. Thanks for working on pypy. Pyston I never checked out.
It’s not clear at all to me why Python cannot become something like Typed Racket, or LISP with types (I forget what that is called), which elegantly sheds dynamism and gets closer to the metal the more type information it gets.
Isn’t that what mypy is attempting to do? I’ve not been following Python for years now, so really have no horse in this race. However, I will say that the number of people, and domains represented in the Python community is staggering. Evolving the language, while keeping everyone happy enough to continue investing in it is a pretty amazing endeavor.
I’ll also point out that Python has a process for suggesting improvements, and many of the core contributors are approachable. You might be better off expressing your (valid as far as I can see) concerns with them, but you might also approach this (if you care deeply about it) by taking on some of the work to improve performance yourself. There’s no better way to convince people that an idea is good, or valid than to show them results.
Not really. Mypy’s goal is to promote type safety as a way to increase program correctness and reduce complexity in large systems.
It doesn’t benefit performance at all near as I can tell, at least not in its current incarnation.
Cython DOES in fact do this, but the types you hint with there are C types.
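A quick way to see the distinction: CPython stores mypy-style annotations on the function object but never consults them while the code runs, so on their own they can’t speed anything up. (`add` here is a made-up illustration, not from the thread.)

```python
# mypy-style hints are plain metadata to CPython.
def add(x: int, y: int) -> int:
    return x + y

# The hints are visible to static tools like mypy...
print(add.__annotations__)

# ...but the interpreter neither enforces nor exploits them: passing strings
# "works" with no error, and the int case gets no speedup.
print(add("a", "b"))
```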
Ah, I thought maybe mypy could actually do some transformation of the code based on its understanding, but it appears to describe itself as a “linter on steroids,” implying that it only looks at your code in a separate phase before you run it.
Typed Racket has some ability to optimize code, but it’s not nearly as sophisticated as other statically typed languages.
Be aware that even Typed Racket still has performance and usability issues in certain use cases. The larger your codebase, the larger the chance you will run into them. The ultimate viability of gradual typing is still an open question.
In no way did I imply that you should “shut up and go away”.
What I want is for people who make comments about Python’s speed to be aware of the alternatives, understand the trade-offs, and generally be mindful of what they’re asking for.
I may have made some false assumptions in your case, and for that I apologize. I should have known that this community generally attracts people who have more going on than is the norm (and the norm is unthinking end users posting “WHY MY CODE SO SLOW?”).
Hey, no problem! I’m just amused at the whole tone of this set of threads set by the original response (not yours) to my comment, lecturing me on a variety of things. I had no idea that (and can’t fathom why) my brief comment regarding prioritization decisions of a project would be taken so personally and raise so much bile. What I’m saying is also not so controversial - big public projects have a tendency to veer into big arguments over little details while huge gaps in use cases remain. I saw this particular PEP argument as a hilarious illustration of this phenomenon in how Python is being run.
Thinking about this a little more - sometimes, when languages ‘evolve’ I feel like they forget themselves. What makes this language compelling for vast numbers of programmers? What’s the appeal?
In Python’s case, there are several, but two for sure are a super shallow learning curve, and its tendency towards ‘un-clever’ syntax.
I worry that by morphing into something else that’s more to your liking for performance reasons, those first two tenets will get lost in the shuffle, and Python will lose its appeal for the vast majority of us who are just fine with Python’s speed as is.
Yes, though we must also remember that as users of Python, invested in it as a user interface for our code ideas, we are resistant to any change. Languages may lose themselves, but changes are sometimes hugely for the better. And it can be hard to predict.
In Python’s 2.x period, what we now consider key features of the language, like list comprehensions and generator expressions and generators, were “evolved” over a base language that lacked those features altogether, and conservatives in the community were doubtful they’d get much use or have much positive impact on code. Likewise for the class/type system “unification” before that. Python has had a remarkable evolutionary approach over its long 3-decade life, and will continue to do so even post-GvR. That may be his true legacy.
Heh. I think this is an example of the Lobste.rs rating system working as it should :) I posted an immoderate comment borne of an emotional response to a perfectly reasonable reply, and end up with a +1: +4 -2 troll, -1 incorrect :)
Author here. Let me know if something isn’t working well for you or if you’d like to see different recommendations. The demo was a lot of fun to build, and I am always looking for feedback!
I appreciate you writing the code, and I realize this only uses publicly available information, but it makes me minorly queasy.
(To emphasize, in the following text, I’m not really criticizing you, or this application specifically, just voicing my fears of what technology allows us to do)
This falls into that class of things where computers make it so much easier to violate privacy in spirit, if not law. If I were to do this by hand, it would be very tedious and time consuming, so I would probably not do it. But now, very easily I can develop a profile of sorts of a person - a stranger - without them knowing, or perhaps even wanting it.
Again, totally legal, but in my opinion it falls into a gray zone, like a surveillance state. Sure, license plates are public information, but in the old days the KGB would have to physically tail you to figure out what you were up to, and they can’t tail all the dissidents.
But now, every police officer can collate information from license plate readers and track your movements and build a profile of you with minimal resources and oversight. Not technically a violation of privacy laws, but it sure as hell should be a violation of something.
I don’t know. This reminds me of the practice of putting everything into header files (“to inline everything, for performance”) which leads to longer compile times in practice because more things have to be recompiled.
I don’t get the issue with the traditional split of header and source files. There is a one time cost and then recompiles are faster.
The proposed method sounds like compilation or (worse) runtime issues for little gain.
I think the problem is that in C++ you end up having lots of code in header files, such as class members and templates. Then, when you have dependencies between different source files (and of course on the system headers), you end up recursively including lots of headers. That means a change in a single header file causes a cascading recompile, which pretty much eliminates the benefits of incremental builds. Of course, you have some crazy rules about includes (in C as well) to try to alleviate the problem.
I have to admit I don’t have firsthand experience maintaining unity builds, but it seems like it’s worth it in larger projects.
(I think this is a cool development - lobste.rs here is not linking to an article, but serving as a forum where people write articles itself)
@JohnCarter, could you elaborate a bit please: what other things in a constructor do you consider making it too big?
Are there global side effects? Like passing in references/pointers to other objects and the constructor is mutating them? That indeed sounds like spaghetti code.
Are you talking about just the size of the code? I can think of cases where a large class, composed of many smaller classes, would end up doing a bunch of initialization, and so you would have a large constructor.
In the little work that I’ve done, I’ve never needed particularly verbose constructors, especially since I try to rely on default constructors of objects that compose the enclosing class.
To speak to your note about exceptions, I’ve been taught that if your object can fail on construction, then you should be supplying a construct function that raises the exception or returns an invalid object (e.g. through std::optional).