1. 12

If you want to check out a practical gradually-typed language, I’ve been using Typed Racket.

It’s very convenient to use untyped code early on, when the design of the program is unclear (or when porting code from a different language), and to switch individual modules to typed code later to reduce bugs.
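To make that workflow concrete outside Racket, here is a rough Python analogy (the function names are made up): you can leave a function unannotated while the design is in flux, then add annotations module by module so a checker like mypy can verify callers, without changing runtime behaviour.

```python
# Untyped while the design is still unclear (hypothetical example):
def parse_price(raw):
    return float(raw.strip().lstrip("$"))

# Later, once the module has settled, add annotations so a static
# checker (e.g. mypy) can verify callers; behaviour is unchanged.
def parse_price_typed(raw: str) -> float:
    return float(raw.strip().lstrip("$"))

print(parse_price("$3.50"))        # 3.5
print(parse_price_typed("$3.50"))  # 3.5
```

This is only an analogy: Python's annotations are unchecked at runtime, whereas Typed Racket enforces the boundary between typed and untyped modules.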

1. 4

Another great gradually typed language is Perl6. It has a Cool type, a value that is simultaneously a string and a number, which I think is pretty… cool!

1. 1

Basically how every string / number in perl5 work{s|ed}?

1. 2

Based on reading https://docs.perl6.org/type/Cool, kinda? Although it also looks to me as if this is at once broader than what Perl 5 does (e.g. 123.substr(1, 2), or how Array is also a Cool type) and also a bit more formal, typing-wise, since each of those invocations makes clear that it needs a Cool in its Numeric or String form, for example.

1. 1

That makes sense that it changed; perl5 is not so… structured. But this stuff worked:

"4" + "6.2"
$q = 42; print "foo$q"
print "foo" + $q

It makes things like numeric fields in HTML forms very easy (if $form["age"] <= 16), but the subtle bugs you get…

Anyway. That was perl5. The perl6 solution seems to make things much more explicit.
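For contrast, here is a quick Python sketch of the same situation: Python is also dynamically typed, but it refuses the implicit string-to-number coercion outright, so the kind of bug described above surfaces immediately instead of silently.

```python
# Perl 5 silently numifies strings, so "4" + "6.2" is 10.2.
# Python raises instead of guessing:
try:
    "4" + 6.2
except TypeError as e:
    print("TypeError:", e)

# The explicit conversion makes the intent visible:
print(round(float("4") + 6.2, 2))  # 10.2
```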

2. 3

stanza is another interesting language that is designed from the start to be gradually typed.

1. 2

Typed Racket is indeed an awesome example. I believe TypeScript would also qualify very well here (as might Flow; I’m not as familiar with it). This also reminds me of Dylan of yore, too: https://en.wikipedia.org/wiki/Dylan_(programming_language)

1. 1

Is this the same thing? I had the same thought and I wasn’t sure if it was.

1. 4

Yes, Typed Racket is gradual typing, but, for example, the current version of Typed Clojure is not. To simplify a little, the premise is that gradually typed code must support being used from dynamically typed code.

1. 1

nice to see aurora get a shout out - it was by far my favourite dos-era text editor, and i’m really sad it died the way it did.

1. 2

that’s pretty exciting! i was just this weekend looking at the oz/mozart home page, wondering if the language were dead or not (i wanted to use it for a small project, just to play with it, but concluded that it did indeed look unmaintained)

1. 3

I use Mozart/Oz in projects and help keep the bitrot away from the 1.3.x and 1.4.x versions. The Mozart 2 version runs but is lacking features - distributed objects and constraints being the most notable. I feel like there needs to be a “1.5” interim release, maintained and with binaries, to show activity.

The project tends to suffer from being used as a university/research project. People work on a master’s thesis to contribute a feature to it, then they disappear and that feature doesn’t get maintained, extended, made production ready, etc.

That said, it’s still a great system to use.

1. 2

i should have known that if anyone was using it you would be :) do you have a more recent version of 1.4.x than the one in the repo, or have you been pushing all your changes?

1. 2

I’ve pushed all my changes so it should build on Linux at least. I recommend using the 1.3.x branch, as 1.4.x has some annoying bugs: distributed search is broken and distribution statistics don’t work, due to a switch to a C++-based library for object distribution whose bugs never got ironed out. It only matters if you plan to use those features though. I’ve backported some of the actual bug fixes from 1.4.x to 1.3.x.

2. 1

What do you use it for?

1. 3

Originally I wrote a bitcoin mining pool using Mozart/Oz, before transitioning it to ATS. Current usage is for deployment and management of a couple of servers. It uses the distribution features, and constraints solving, to work out what to install, uninstall, etc based on changes to configuration files. It has a user interface using the Roads web framework. It’s a toy system I built to explore ideas as an ansible alternative. I’ve done various incarnations in Prolog and Mozart/Oz.

What might interest you is some old articles on using Mozart/Oz for proving things. See “A Program Verification System Based on Oz”, “Compiling Formal Specifications to Oz Programs” and “Deriving Acceptance Tests from Goal Requirements” in Multiparadigm programming in Mozart/Oz.

1. 1

I saved it for when I have Springer access. I am interested in it as a multiparadigm language as well. Did you find the constraint solving to be as good as an industrial solver integrated with a good 3GL? Or is it still better, for performance or usability, to just go with dedicated tools? I know a lot of people liked that Prolog could do some forms of parsing or solving, but the language made it harder to use better methods. I figured something similar could happen with Mozart/Oz trying to do too many paradigms at once.

1. 4

The constraint solver in Mozart/Oz has many interesting features, but in the end it is, IMHO, just too old to be competitive with a modern solver.

For constraint solving, I would probably use Gecode, or_tools, or Choco depending on the particular use-case one has and the technical requirements. If money is not an issue, IBM CP Optimizer seems to be very very good.

To explore a particular problem, I typically write the model in MiniZinc, since it is a reasonably high-level modelling language that allows me to switch between solving backends. In particular, I like that I can try out a problem with both a normal solver (such as Gecode) and a lazy clause generation solver such as Chuffed.

Of course, the particular problem might be better suited for SMT solving (using Z3), MIP solvers (CPLEX or Gurobi), or perhaps a custom algorithm.

Another thing to consider is correctness. Optimization systems such as constraint solvers are complex pieces of software, with lots of possibilities for bugs. Off-by-one errors in particular are very common. If I were to depend on an optimization system, I would prefer one that is maintained, has been around for a while, and that has a reasonably large set of automated tests.
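To make the testing point concrete, here is a small Python sketch (entirely hypothetical, not from any real solver's test suite) of the kind of automated check being advocated: validate an optimized solver against brute-force enumeration on tiny random instances, where off-by-one errors show up quickly.

```python
import itertools
import random

# "Real" solver: a standard 0/1 knapsack dynamic program.
def knapsack_dp(weights, values, cap):
    best = [0] * (cap + 1)
    for w, v in zip(weights, values):
        for c in range(cap, w - 1, -1):  # iterate capacity downwards
            best[c] = max(best[c], best[c - w] + v)
    return best[cap]

# Reference oracle: exhaustive enumeration of all item subsets.
def knapsack_brute(weights, values, cap):
    items = list(zip(weights, values))
    best = 0
    for r in range(len(items) + 1):
        for combo in itertools.combinations(items, r):
            if sum(w for w, _ in combo) <= cap:
                best = max(best, sum(v for _, v in combo))
    return best

random.seed(0)
for _ in range(50):
    n = random.randint(1, 6)
    ws = [random.randint(1, 9) for _ in range(n)]
    vs = [random.randint(1, 9) for _ in range(n)]
    cap = random.randint(1, 20)
    assert knapsack_dp(ws, vs, cap) == knapsack_brute(ws, vs, cap)
print("dp matches brute force on 50 random instances")
```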

1. 1

Thanks for the great summary!

On a related note, if you’re into CHR, I just submitted a paper that shows how they compile it to C. MiniZinc and CHR were the most interesting languages I found while looking for basic info on constraint handling.

2. 2

I haven’t used an industrial solver in anger - other than using Z3’s integration with ATS. But the Mozart solver seems to work well, and has features to distribute amongst multiple machines for larger problems. It has tools to visualize and explore the solution space and the ability to customize and add features to the solver in Oz. It’s a pretty old system though and I know that Mozart 2 intends to replace the Oz solver with Gecode to get some newer features.

The “too many paradigms” is an issue in that it can be hard to decide how to approach a problem. Do you use OO, functional, relational, etc. So many choices that it can be paralysing.

1. 1

Finally, a DSL that’s trying to solve a real problem.

1. 2

isn’t that most DSLs?

1. 1

Another one you mean? There are other ones.

1. 1

Oh, I didn’t know about those!

1. 2

The thing to remember is that there are several kinds of DSLs. One kind that will often be unnecessary or questionable is an external DSL designed to aid a language that doesn’t itself have DSLs. These are like a combination of configuration files and libraries. However, people using languages like Lisp or Red, designed to build DSLs as easily as libraries, will have a lot of useful ones.

The benefit of a good, embedded DSL is that it just lets you express the solution more easily. Most libraries you find useful could probably be turned into DSLs. It’s just a matter of whether a shift in language style is justified. GUI, web, and database programming probably benefited most from the syntax/style changes DSLs give, including 4GLs, which are DSL-like.

1. 20

What about Sophie Wilson who co-designed the ARM processor and wrote BBC BASIC?

1. 1

was my first thought too, but then, I grew up with a BBC B (:

1. 1

bravo! for clarification, when they say

Each pair of samples from the sound file interpreted as 2D point.

do they mean simply the intensities of two consecutive pairs of scalars, x & y being left and right channels?

1. 1

yes, the channels are connected to x and y respectively. this page has more detail on how oscilloscope visualisations for music work.
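Here is a small Python sketch of exactly that mapping (assuming interleaved stereo frames, which is how WAV files typically store them): every pair [L, R] becomes one (x, y) point for the scope to draw.

```python
# Interleaved stereo samples [L0, R0, L1, R1, ...] become 2D points,
# with the left channel driving x and the right channel driving y.
def frames_to_points(interleaved):
    return list(zip(interleaved[0::2], interleaved[1::2]))

samples = [0.0, 0.5, -0.3, 0.8]   # L0, R0, L1, R1
print(frames_to_points(samples))  # [(0.0, 0.5), (-0.3, 0.8)]
```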

1. 3

I unapologetically catch up on email during ‘status update’ meetings where my actual part is only a small fraction of the time. I have tried to use my laptop less during more substantive ones.

1. 6

Those meetings sound like the kind that can be entirely replaced by email.

1. 3

they can, but they are often not (:

2. 3

As long as those status update meetings are regularly scheduled and efficiently run, they’re probably OK. That falls under the “weekly cadence calls” category in the article. My org back then had these that had 10-15 people on the call giving 2 minute updates on the progress of several teams comprised of probably 60 people. By the time I left, we’d gotten pretty efficient at passively listening to people basically give standup-like updates: here’s what we did last week, here’s what we’re doing this week, here’s what we’re blocked on so someone please take an action item to unblock us. When someone interjected during an update, we could keep discussion to a minute or two and take things offline. We always designated someone as the notetaker/action-item-assigner who would keep minutes and distribute them.

Nowadays, I use my one existing weekly – importantly: for which I am remote – as a way of working on mindless, kata-like tasks: IDE-suggested refactoring, build system refactoring, etc. That way, I’m still listening and am able to interject, but can use the time to work on some light technical debt.

1. 3

“That falls under the ‘weekly cadence calls’ category in the article. My org back then had these that had 10-15 people on the call giving 2 minute updates on the progress of several teams comprised of probably 60 people.”

That’s something like what I’m talking about in another comment, under the useless-meetings category. Like adsouza said, it sounds like an email could replace the whole meeting with much time saved. Alternatively, a wiki, a forum, or something similar. People could meet in chat or in person for the few things that actually need discussion. In one organization where I worked, we did these cadence calls on a regular basis until we lost the manager who was interested in doing them. It took a while to fill that position, and our productivity went up on those days, because the time spent listening to people drone on about stuff that didn’t affect us had been time we were not working on our end of the company.

“as a way of working on mindless, kata-like tasks: IDE-suggested refactoring, build system refactoring, etc. That way, I’m still listening and am able to interject, but can use the time to work on some light technical debt.”

If stuck in such meetings, I think that’s a great idea for making the time useful. Especially how you’re specifically doing stuff so “mindless” that you probably won’t miss anything important in the meeting. I’d guess it also lets your mind kick into gear better on important stuff later.

1. 1

ooh, autorequiring pp and the case equality for any etc. in particular are really great examples of small features that will add up to a lot of developer pleasantness over time.

1. 9

Even after wasm becomes first-class in browsers, on the same level as JavaScript, why write UI code in a systems programming language without GC? Considering that the actual UI is DOM elements, only the “glue code” is written in Rust.

Anyway, exploring this possibility is cool.

BTW, Rust might be useful in actual desktop GUIs because of fast startup (unlike the JVM and languages that compile sources at program start), better interop with C, and controllable memory consumption. A React-like library might be cool on the desktop too.

1. 6

i’ve written some elm-inspired desktop gui code in ocaml+gtk; it’s a very pleasant paradigm to program in. you don’t really need a framework, the gui library provides an event loop for you.

1. 1

The ocaml+gtk code sounds interesting, do you have that online by chance?

1. 5

model and controller

gtkgui view

web view

it worked out pretty nicely, writing a web view as well helped a lot in factoring the gui code properly, and now i get to use the gtk frontend to prototype features for the web frontend (i expect the web frontend to be the one i actually “ship”, but writing a desktop app is a lot easier)

1. 3

Finally got around to reading Null States, the second book in Malka Older’s excellent “Centenal” trilogy. It’s one of those books I was anticipating very eagerly, but somehow did not get around to reading when it came out in September. Enjoying it a lot. Andy Weir’s “Artemis” queued up next.

If you’re looking for classic sf, I would recommend short stories over novels. Clarke’s “Of Time and Stars” is a superb single-author collection, for instance, as is Heinlein’s “The Past Through Tomorrow”. Anything by Groff Conklin if you want well-put-together anthologies, also the “Spectrum” series by Amis and Conquest.

If you prefer novel length, check out John Brunner (“The Shockwave Rider” is probably his most accessible book, though “Stand on Zanzibar” is better), Larry Niven (“Protector” or “Ringworld”), Harry Harrison (the “To The Stars” trilogy is great), or for older stuff, Clarke again (“The Fountains of Paradise”, “The City and the Stars”; he’s probably my favourite of that generation of writers), Andre Norton (don’t like everything she’s done, but “Star Soldiers” is superb), Asimov (a lot of his novels feel a bit dated, but “Foundation” is still a great read). Not really a fan of Heinlein’s novel-length stuff any more, though some of them are probably still worth a read (“Citizen of the Galaxy”, “Tunnel in the Sky”, “The Moon is a Harsh Mistress”, “Starship Troopers” and “The Door Into Summer” are probably the ones that hold up best today, if you want to explore his work.)

1. 5

ocaml for me, though I still turn to ruby when I just need to code something up fast, or want to use code to explore something.

1. 4

Sure glad I added the OCaml option :D Must admit, never really looked at it - probably I should :)

1. 1

I came to OCaml via Clojure and before that Python. So not exactly from Ruby but close enough.

1. 4

one of my more embarrassing moments on reddit was when i commented on a video saying “nice material, shame the presenter had such an annoying manner”, and it turned out that OP was the presenter.

1. 11

TeX has something that I miss in all* other markup languages, from lightweight to heavyweight: macros. I don’t even need logic or anything hairy – I just want to define

\def\bug#1{http://example.com/mybugtracker/#1}


, so I can write \bug{442} to create a properly marked-up link to a bug.

* Honourable exception: MediaWiki.
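The convenience being asked for can be approximated outside TeX with a tiny preprocessor; here is a Python sketch (macro name, URL, and output format are all invented for illustration) that expands \bug{442} into a marked-up link.

```python
import re

# Hypothetical macro table: macro name -> expansion template.
MACROS = {"bug": "<a href='http://example.com/mybugtracker/{0}'>bug {0}</a>"}

def expand(text):
    # Replace every \name{arg} occurrence using the table above.
    return re.sub(r"\\(\w+)\{([^}]*)\}",
                  lambda m: MACROS[m.group(1)].format(m.group(2)),
                  text)

print(expand(r"see \bug{442} for details"))
# see <a href='http://example.com/mybugtracker/442'>bug 442</a> for details
```

Of course, this is the point of the complaint: with TeX (or MediaWiki) the expansion lives in the document itself, instead of in an external tool you have to remember to run.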

1. 4

other typesetting languages in the same space, like scribble or lout do this too

1. 1

Yay, counterexamples! Thank you very much, I’ll check them out.

2. 4

Honourable exception: MediaWiki.

Thanks for reminding me I still know way too much about MediaWiki templating and parser functions.

1. 2

It’s been a long time since I read up on SGML but I believe there’s a form of macro capabilities in the DTD.

1. 1

I’m not sure if it qualifies as a “markup” language for your purposes as it’s closer to typesetting than markup, but troff also knows macros. In fact, using troff without macros is a fairly painful experience.

1. 16

I seem to be the only person who moved from git to mercurial and liked git a lot more.

Here were my impressions:

• The index (“staging area”) is an awesome feature. I know it trips people up occasionally, but the ability to carefully and precisely stage some of the changes in the working tree is invaluable.
• I like git’s branch model better than Mercurial’s. It’s more flexible and, at least for me, more intuitive. Maybe bookmarks are equivalent, but we didn’t use them at my work, so I couldn’t say.
• I like git’s general acceptance of history rewriting. It’s a valuable feature without which you need to choose between committing very carefully or breaking bisect and having a useless history. If you’re worried about corrupting your history, back up your repository (which you should do anyway, history rewriting is not the only way to corrupt a repository). I know it’s possible in Mercurial, but it’s not a first-class citizen and the culture disapproves.
• Mercurial’s commandline interface is a bit more uniform and predictable, but to be honest, this feels like a very weak benefit. If you’re using it professionally, muscle memory should basically mitigate the difference within at most a few weeks. Also, Mercurial’s interface certainly isn’t perfect, e.g. -l for commit limiting in hg log is wrong (it should be -n).
1. 12

index (“staging area”) is an awesome feature.

hg commit --interactive or hg ci -i is essentially the same thing. What people generally like about the index/cache/staging-area (these were all official synonyms at one point) is the ability to selectively pick apart commits, not so much having an extra, semi-obligatory step between your code and the creation of a commit. With Mercurial’s curses interface, hg ci -i is pleasant and widely loved.

If you really want a cache/index/staging-area, you can use hg ci -i --secret to start a secret staging commit and hg amend --interactive or hg am -i to keep adding to it. The --secret part is just to make sure it doesn’t get pushed accidentally. Once you’re ready to show the world your commit, hg phase --draft will turn it into a draft that you can share.

I like git’s general acceptance of history rewriting.

hg has this too, but better, and not as well-advertised: there’s a meta-history of what is being rewritten. This is called changeset evolution. The basic idea is that there are obsolescence markers that indicate which commits replace which other commits, so that history editing can be propagated in a safe, distributed fashion. No more need to force-push everything and then tell everyone who may have been following along to reset branches. Just push and pull and hg evolve will figure out where everything should be because it has more context.

Mercurial’s commandline interface is a bit more uniform and predictable, but to be honest, this feels like a very weak benefit.

My father, may he rest in peace, used to say that people can get used to everything except hunger. Sure, you can get used to git’s widely-derided UI. But why should you have to? If we can come up with a better UI, we should. Indeed, alternative git interfaces are their own cottage industry. Don’t downplay the problems that the git UI causes.

1. 9

hg commit --interactive or hg ci -i is essentially the same thing.

No it’s not. I can (and frequently do) continue to mess with the working tree after adding changes to the index.

hg has this too

General, ecosystem-wide acceptance of history rewriting? Not in my experience. When I was using it, the general guidance seemed to be “don’t do that, but if you’re going to do it anyway, here’s how”, whereas guidance for git is more along the lines of “here are some things to be aware of when you do that, now have fun”. No remote solution I used would permit pushing rewritten history.

The very page you linked to says that evolve is incomplete.

To be clear, the workflow I’m looking to be enabled here is to commit and push extremely frequently—as often as I save, ideally—and then, once the feature is complete enough for review, squash all those mostly-useless commits together into a single or few meaningful ones. This lets me get both commits willy-nilly and meaningful, bisect-able history.

But why should you have to?

Because it took me like three days? “Not ideal” is a far cry from “compellingly bad”. I will certainly agree that git’s interface is not ideal, but Mercurial’s is not better enough to be a compelling feature on its own. (Also, I had to get used to Mercurial’s interface when I started a job that used it. Why did I have to?)

1. 9

No it’s not. I can (and frequently do) continue to mess with the working tree after adding changes to the index.

Have you tried hg ci -i? Combined with hg am -i, it really is the same thing. Creating a commit is, pardon the pun, no commitment.

The very page you linked to says that evolve is incomplete.

It’s sadly in a state of perpetual beta, but so was Gmail for about five years and that didn’t stop a lot of people from using it from day one. Evolve is mostly feature-complete, just not completely polished to the usual hg standards.

To be clear, the workflow I’m looking to be enabled here is to commit and push extremely frequently—as often as I save, ideally—and then, once the feature is complete enough for review, squash all those mostly-useless commits together into a single or few meaningful ones. This lets me get both commits willy-nilly and meaningful, bisect-able history.

This is exactly what evolve is for. And people can even pull your commits before they’re squashed and if they pull again after you squashed, they won’t even see the difference in their workflow. Their copy of the commits will also be squashed without them needing to do an extra step like git reset and try to figure out exactly what to throw out.

2. 4

I agree that the staging area is an unnecessary concept. As a self experiment, I made myself git aliases to not use staging. After a few months of using that setup, I can say that I don’t miss staging.

(I don’t use Mercurial at all)

1. 3

IMHO hg ci -i is a way better interface than git’s staging area, even with -i. Having used both, the Mercurial one is way better.

However, I just recently found that there’s an equivalent of hg ci -i for git too, as a plugin.

2. 4

what benefit does the staging area give you that cannot be replicated by just committing the changes you would have staged and then doing a rebase to squash all the ‘stage’ commits into one main commit? I use this workflow a lot and find the intermediate stages cleaner and easier to work with than the git staging area.

1. 2

The index feels cleaner and if I wander away in the middle and forget what I was doing, it’s much easier to unwind.

Obviously, there are any number of workflows which provide equivalent functionality, and I doubt anyone can muster more than extremely weak objective support or criticism of any. Having used both git and mercurial, I strongly prefer having the index to not having it. If people push me on it, I’m sure I can rationalize that preference until the cows come home, but I’m not sure how constructive that would be.

1. 7

lisp + java = Clojure

1. 10

erlang + compsci = elixir? programming + distributed = erlang? java + science = scala? javascript - browsers = nodejs?

There are currently 798 results for “clojure”. elixir for comparison has 313, elm has 233.

1. 4

clojure is fairly unique and differentiated.

1. 3

Thanks, that’s exactly what I was looking for. Sign in walls really discourage me.

Eta/Java reminds me of PureScript/JavaScript.

1. 2

That’s a different link, it’s not the interactive tour.

1. 2

right, but it lets you see what eta is at least (a fork of ghc that compiles haskell to jvm). the tour page itself doesn’t have a back link to the main page.

1. 15

I don’t even know how to go offline. I turned on airplane mode, but nothing happened. Do I actually need to be on an airplane?

1. 7

Apparently this uses https://developer.mozilla.org/en-US/docs/Web/API/NavigatorOnLine/onLine

Browsers implement this property differently.

1. 3

If you bring up the JavaScript console, it will prompt you through faking it.

1. 24

Sigh. I remember when web pages were just web pages and not interactive text adventures. Something else that works great in offline mode: plain HTML. How amazing is that?

1. 47

Aren’t you the dude with the web site that requires people to register a different Certificate Authority?

1. 13

How you get the HTML is your problem, but once you have it, it just works. Even works in lynx. Right click, save link, read it later. Have you tried doing that with this page? If I actually do decide to go offline, and transfer the file via sneakernet, how well will that work?

1. 5

Your perception of what is and what isn’t the user’s fault is “interesting”.

1. 1

I don’t see “fault” used. Are you referring to @tedu placing the burden on authenticating his blog on the user? I actually don’t understand what this tangent has to do with this offline thing anyways. Does @tedu’s certificate setup somehow mean that web pages were once not plain HTML that worked offline?

2. 7

I don’t suppose tedu dictates the chosen security model of your browser. I can tell my browser to stop whining and just show the friggin page.

No, you do not need to trust the cert or the ca to download and decrypt the page.

3. 6

What strikes me most with your statement, is how this sounds a lot like “it was better before”. Obviously plain HTML works great in offline mode, but it doesn’t help in any way to make the point the author is making.

1. 4

this page’s entire raison d’etre is to be an interactive text adventure, so it seems a bit point-missing to complain that it’s not a web page just because it happens to run in a browser.

1. 11

The author wonders why we’re online all the time. Well, how else am I to complete interactive adventures telling me to go offline? It seems more people than not resorted to using the browser console to read the page (after going online to find help), so I’m not sure how much they learned about the experience of being offline either. I learned (again) that nothing works and the only way to survive to is ask for help online.

4. 2

return your tray table to an upright position

1. 1

Tests showed airplane mode works best on a B-2.

1. 1

typo in the title, should read “software deployment model”

1. 2

I’m going to be nitpicky:

Clojure is a dynamic language. No matter where you stand on the static vs. dynamic typing debate, knowing languages in both camps is important. Clojure has a kind of optional typing, but in essence it’s dynamic.

There is no such thing as a dynamic language. I’ll repeat: there is no such thing as a dynamic language. There are many axes along which a language can be dynamic, including:

• Scope
• Name binding
• Types
• Continuations

Dynamicity is a property of these concepts. You can have a language with dynamic scope and types. That doesn’t make it a “dynamic language.” The first has a technical meaning, the other is marketing speak.

The same should be said of “strong typing” which is an utter nonsense phrase.

1. 8

can you think of a single common use of “dynamic language” that does not refer to dynamic types? it’s a perfectly fine shorthand for “dynamically typed language”; the other properties you mention may be there in addition but they are not the motivation for the term

1. 1

Yes, for example I’ve heard the term used to refer both to Perl and Common Lisp’s use of dynamic scope.

Even if it’s correct by usage (which I do not think it is), it’s imprecise. In fact, even the thing it stands for is imprecise. “Dynamic types” aren’t types. They follow completely different rules and work completely differently.

There’s a lot of imprecision in the terms used by laypeople to discuss programming languages, and there’s no reason for it when perfectly good and precise terms exist.

1. 2

And dynamic scope isn’t scope either. So-called “dynamic scope” is just syntactic sugar for inserting and removing everything into and from a single table that exists in the global scope.

1. 2

Well, all scope is just syntactic sugar. A single global table that maintains a stack of past bindings and unwinds them automatically as you leave a binding scope seems pretty scope-like to me, in the sense of being a programming abstraction.
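That description can be made concrete with a small Python sketch (all names here are invented): one global table mapping each name to a stack of values, where entering a binding pushes and leaving it pops, so the innermost dynamic binding always wins.

```python
import contextlib

# Single global table: name -> stack of past and current bindings.
_bindings = {}

@contextlib.contextmanager
def let(name, value):
    # Push a new dynamic binding; pop it when the scope exits,
    # even if an exception unwinds through it.
    _bindings.setdefault(name, []).append(value)
    try:
        yield
    finally:
        _bindings[name].pop()

def lookup(name):
    return _bindings[name][-1]  # innermost binding wins

with let("indent", 2):
    with let("indent", 4):
        print(lookup("indent"))  # 4
    print(lookup("indent"))      # 2
```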

1. 1

No, scope is not syntactic sugar. You can’t reproduce multiple (lexical, there is no other kind) scopes in a uniscoped language. Just like you can’t reproduce multiple (static, there is no other kind) types in a unityped language.

1. 1

(I’m going to ignore the Church of Bob Harper newspeak and bite anyway.)

Do you have some specific sense in which you mean “reproduce”, beyond simply being able to program in a style where you have lexical variable name resolution, and can’t refer to variable names not in the lexical scope they’re in? Unless I’m missing what you mean, you can implement that notion of scope in a Lisp that doesn’t have it by using macros to statically resolve names. Several possible strategies for that, just like there are a number of strategies for implementing lexical scoping in a compiler. For example, you can name-mangle every scoped variable to a distinct global variable name, and then have a pile of macros statically resolve the bindings. Some older Algol-style languages’ compilers did almost exactly something like that, although not implemented as macros within the language.

Granted, this isn’t “safe” in the sense that someone with some effort can get around the scoping, but that’s also true of other languages that are conventionally said to have scopes, like C.

1. 1

beyond simply being able to program in a style where you have lexical variable name resolution, and can’t refer to variable names not in the lexical scope they’re in?

Well, that’s the whole point: to deem non-closed expressions meaningless outside of a context where their free variables are resolved.

For example, you can name-mangle every scoped variable to a distinct global variable name, and then have a pile of macros statically resolve the bindings.

Have fun implementing recursive procedures that way.

1. 1

Have fun implementing recursive procedures that way.

There is no problem with recursion if said global mangled variable is bound to a stack of values :)

1. 1

So then you need a hardcoded stack abstraction, which is somehow immune to being wrongly manipulated by users. For instance, if the operation “push into a stack” uses a local variable, then the user can’t mess with that variable.

1. 1

Lisp provides you with facilities for generation of names that do not collide with user code. This is what hygiene is in context of macros.

But you are technically correct - every abstraction can be broken by sufficiently motivated user :)

2. 1

And if you don’t want to bother, there’s always the copout C uses: restrict recursive procedures to the top level. I think C is still normally considered to have scoped variables despite this limitation, though maybe some disagree, and it makes the implementation of scoping particularly simple.

1. 1

C doesn’t allow nested procedures so in that sense, yes, C restricts recursive procedures to the top level. And yes, C does have scoped variables. In this example:

int x; /* global x */

void foo()
{
    int x; /* local x */
    /* ... */
}


Each x variable will have a different address. Furthermore, the address of x inside of foo() can change from call to call. They are distinct variables—C does not “save” the previous version of the global variable x when foo() is called. It can’t—because foo() could call a function bar() that does reference the global x and things would get mighty confused otherwise.

1. 1

Yes, I agree with that. My point was that if you accept that C’s scoping arrangement really counts as (lexical) scoping (I’m unclear whether @pyon would), then it becomes easier to say that you can implement (lexical) scoping in a language lacking it, which was the bone of contention above— because the C approach to scoping is pretty straightforward to implement in terms of name-mangling the scoped variables into global variables and using some macros to statically resolve the nested bindings.

1. 24

it’s a truly impressive post, both for the technical achievement and for the sheer quality of the writeup. it was completely accessible even as someone who has never done anything gpu related.