In my mind, DragonflyBSD is performance focus done right. I’d be curious to see how much of the stuff learned from Dragonfly could be adapted to OpenBSD.
And vice versa.
It looks interesting.
How does it compare with Dhall? I was expecting to get a reason in the README, since both come from the Haskell world and I assume Dhall is well known at this point.
I brought Dhall up at the meetup, but nobody seemed to know of it :(
Unfortunately, Dhall doesn't do any type inference, so row polymorphism would be a hell of a lot harder to implement.
That's what I thought too; row polymorphism was the reason for Expresso to exist. It's just that Dhall isn't explicitly mentioned.
Related, I saw this in my timeline a while ago: https://twitter.com/shajra/status/1040107159722360834
This looks extremely interesting, especially considering I’m currently working on an implementation of a variant of nix for working with batch computation. It would be interesting to see what it looks like to define the derivations in Expresso.
The presenter mentioned the possibility of boilerplate being massively reduced by removing the need for many type definitions: almost everything can be inferred from usage, with the compiler screaming at inconsistencies.
There's an alternative to subtyping called row polymorphism; I recently went to an interesting presentation about it. Expresso is the language that was used to introduce it at the lambdamtl meetup.
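Python obviously can't check this statically, but the behaviour row polymorphism promises can be sketched at runtime: a function declares the fields it needs, and every other field in the record passes through untouched (with plain subtyping, the upcast would forget the extra fields). All names below are invented for illustration:

```python
# Hypothetical, runtime-only sketch of the row-polymorphism contract:
# greet() requires a "name" field and passes every other field through
# untouched. (In Expresso the compiler would verify this statically.)

def greet(record):
    """Requires a 'name' field; leaves the rest of the row alone."""
    return {**record, "greeting": f"Hello, {record['name']}!"}

person = {"name": "Ada", "age": 36}
result = greet(person)

print(result["greeting"])  # Hello, Ada!
print(result["age"])       # 36 -- the extra field survived the call
```

In a row-polymorphic type system, the compiler would check both that `name` exists and that the rest of the row comes out intact; here that contract is only a convention.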
Right now it’s just a collection of notes on my phone and laptop that occasionally become articles that are sometimes published on my blog. But I’m starting a bullet journal and am considering moving from markdown to mandoc. That way I can keep an analog journal and synthesize notes and accomplishments into guides that can be easily used on my laptop or compiled to html for public viewing.
The reason for mandoc is because of all the tools in *bsd that allow you to search for info in man pages.
because of all the tools in *bsd that allow you to search for info in man pages.
Exactly this. I mean the reason I keep everything on disk as a text file (.md or whatever) is that it integrates seamlessly with the rest of the system. I can search with grep. I can track, synchronize, and roll back changes with version control.
Don’t even get me started on this issue.
The end “product” might look neater or cleaner, but good code should have the following characteristics:
At first glance, I couldn't tell you for sure whether a Python snippet is correct, let alone one I didn't write.
While with braces, it’s super straightforward (even when there are lots of them).
It might be me, but good syntax shouldn't just be “pretty” or “fun”; it should be easy to remember and easy to look at.
If you ask me, the same applies to static typing (another heated topic): looking at code and having types guide me through whatever logic I'm inspecting or debugging is so pleasing and easy that the number of comments needed (also thanks to good function and variable naming) decreases immensely.
Navigating a huge pile of var here and var there, it's just hard to know whether that specific assignment is intentional or merely a mistake.
Might just be me, I don’t know.
To be fair, if the code is so complex that it requires more than a glance, braces alone won't save you. Coherent indentation, without mixing spaces and tabs, will help with that.
I think the argument for keeping curly braces is pretty much the same for semicolons – they are both a pointless hassle, completely unnecessary in a world with things like “color displays” and almost all editors being indentation-aware in some fashion.
What exactly is an indentation-aware IDE? And isn't it a bad thing that a language requires more complex tools to use it properly?
I might be a bit curmudgeonly, but I never got what the problem with parens was…
Not really my point though. I think the simpler and more consistent the syntax, the easier it is to write tools around it. I’m thinking of stuff like paredit / parinfer.
How well does this work? If I copy a block of code from one function that’s three levels in and paste it into a function where it should be two levels in, will it be two or three levels deep? Because whenever I’ve done this in C the indentation is initially wrong and needs to be corrected.
If I paste code at the end of an if block, how is the editor to know whether it is part of the if block or comes after? If there is a brace or some other marker, it’s easy for me to put the cursor inside or outside.
Here’s some code:
if conditionA:
    code()
if conditionB:
    morecode()
I would like to paste “somecode()” into this function. Where do I put the cursor to add within conditionA and where to add between A and B?
end of line 2 for A, beginning of line 3 for between A & B
beginning of line 3 for between A & B; must add indented newline as part of A to paste into body of A
You reindent to where you want it to be. The editor cannot know.
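Spelled out, the ambiguity from the example above looks like this: both results of the paste are valid Python, and only the author knows which was meant. The stub functions just record what ran:

```python
calls = []
def code(): calls.append("code")
def somecode(): calls.append("somecode")
def morecode(): calls.append("morecode")

conditionA, conditionB = False, False

# Interpretation 1: the pasted call sits inside conditionA's body.
if conditionA:
    code()
    somecode()
if conditionB:
    morecode()
inside = list(calls)

calls.clear()

# Interpretation 2: the pasted call sits between the two if blocks.
if conditionA:
    code()
somecode()
if conditionB:
    morecode()
between = list(calls)

print(inside)   # []            -- skipped along with the body
print(between)  # ['somecode']  -- runs unconditionally
```

With conditionA false, the two readings genuinely diverge: the pasted call is either skipped along with the body or runs unconditionally, and nothing in the pasted text itself tells the editor which to pick.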
With braces your editor may be taught how to count braces and reindent properly (or call indent), and then you could argue “braces win”. I understand from your comment this isn't common, but I think I could set my NeoVim up to do this quite easily.
However, copying code around and reindenting is such a trivial thing to do, that if braces could win by it, it’d be by a very narrow margin. So narrow that personal æsthetics concerning cleanliness is more important to some of us.
At first glance, I couldn't tell you for sure whether a Python snippet is correct, let alone one I didn't write.
What happened to the idea that you could learn a programming language before expecting you would be able to understand code written in that language?
I never see Haskell criticised on the basis that Java programmers can’t understand it.
Have you considered that Python’s syntax isn’t difficult to learn and is very simple and easy once you’ve learnt it?
While with braces, it’s super straightforward (even when there are lots of them).
The fact that major security bugs have been created in curly brace languages due to indentation and braces being unaligned suggests to me that people look at the indentation and not the braces.
That last point is a bit of a strawman, since I never claimed that curly braces are inherently bug-free.
Also, lots of projects are or were done in C-like languages, so it's clear that those kinds of bugs are bound to happen.
You claimed that not having indentation-based syntax makes understanding code easier. If that were true, there wouldn’t be issues like
if (x)
    goto fail;
    goto fail;
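For context, that snippet alludes to Apple's 2014 “goto fail” bug, where a duplicated goto sat indented as if it were guarded but ran unconditionally. A rough Python rendering of the same layout shows why this bug class can't be written in an indentation-based syntax; whatever the layout says is what executes:

```python
# Mimicking the layout of the C bug. In C the second goto ran
# unconditionally despite its indentation; in Python the indentation
# *is* the block structure, so layout and behaviour cannot diverge.

failures = []

def fail():
    failures.append("fail")

x = False
if x:
    fail()
    fail()  # indented identically, and genuinely inside the if

print(len(failures))  # 0 -- neither call ran, exactly as the layout says
```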
The repo mentions sharing with friends and commenting on their activities. Are you counting on doing this with activitypub? Especially considering the selfhosted part.
came here to say the same thing, great chance to add activitypub support to this app.
Mastodon author on implementing activitypub, probably the best starting point. https://blog.joinmastodon.org/2018/06/how-to-implement-a-basic-activitypub-server/
freenode has #litepub (and #mastodon) with a number of existing implementors who can help.
This is something I have been thinking about for a while. Difficulty-wise, it would be easiest to just share things with other users on the same instance, but obviously this would cripple self-hosted versions. I'm open to putting in the extra work to implement ActivityPub, but the other problem I have is that going decentralized removes my ability to control content; the only reason this matters is removing cheaters from the global leaderboards. In an application like this, cheating is super easy, and going federated makes it very hard for me to control. Although even centralized systems can't really stop cheating if it's within the realm of a possible time. I might just not include global leaderboards at all and keep them personal, friends, and groups only.
This is cool, and as others have said, adding activitypub support for federation would be the cherry on top of the cake.
The ability to have a private self-hosted instance for your team and/or group of friends is great, but having the ability to connect with people from outside, or even just a restricted set of other trusted instances, while still having control over your data, would make it truly awesome. With this setup the problem with cheating should be manageable; it would be very hard to have and maintain global leaderboards anyway.
I'm thinking about just not having global leaderboards. From what I have read on the Strava engineering blog, these are massively complex and CPU-intensive as the dataset grows. I am interested in ActivityPub, so after I have the core features done, that will be the first thing I look at.
In a federated setup, there would be one global leaderboard per instance, showing the aggregated leaderboard of this instance+instances federated with this instance, correct?
Each instance admin could then remove cheaters or ban instances that allow cheating, or subscribe to a cheater blacklist that you’d publish?
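To make that concrete, here is one hypothetical shape such per-instance aggregation could take; none of these names or structures come from the project, it's just a sketch of the merge-and-filter idea:

```python
# Hypothetical sketch: each instance keeps its own leaderboard and
# merges in the boards of instances it federates with, dropping
# entries from banned instances or blacklisted users.

def aggregate(local_board, federated_boards,
              banned_instances=(), cheater_blacklist=()):
    """Merge leaderboards into one, best time first, minus cheaters."""
    merged = list(local_board)
    for instance, board in federated_boards.items():
        if instance in banned_instances:
            continue
        merged.extend(board)
    merged = [e for e in merged if e["user"] not in cheater_blacklist]
    return sorted(merged, key=lambda e: e["time"])

local = [{"user": "alice@a.example", "time": 312}]
remote = {
    "b.example": [{"user": "bob@b.example", "time": 290}],
    "cheaters.example": [{"user": "mallory@cheaters.example", "time": 1}],
}

board = aggregate(local, remote, banned_instances={"cheaters.example"})
print([e["user"] for e in board])  # ['bob@b.example', 'alice@a.example']
```

Banning a whole instance or subscribing to a shared blacklist then becomes a matter of changing the two filter arguments, which is about as much moderation control as a federated design allows.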
I wanted to make this for a few reasons:
I’ve still got some cleanup to do on this crate, and I have a fairly simple in-memory filesystem server that I’ve been using for testing that I’ll publish soon as a separate crate.
I had also started working on my own version, but never got far. So thank you for this. Do you have a link to that other implementation?
They stopped working on it a while ago, and I took over the crate name on crates.io. They've since deleted the repo, but you can browse the source from docs.rs:
FWIW, I believe crosvm (the ChromeOS hypervisor written in Rust) has an implementation of 9p: https://chromium.googlesource.com/chromiumos/platform/crosvm/+/refs/heads/master/p9/
File sharing between the host and the guest I believe. My understanding is that 9p is a relatively common choice for that, for whatever reason.
Ah, that makes sense. There's 9p support built into the Linux kernel for it, and it can work with virtio to provide very low overhead for the file sharing.
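Part of 9p's appeal here is how small the wire format is: every message starts with a little-endian header of size[4] type[1] tag[2], with size counting the whole message. A minimal sketch of packing and parsing that header (the message-type constant is from the 9P2000 spec; the rest is illustrative):

```python
import struct

# Every 9p message begins with: size[4] type[1] tag[2], little-endian.
# size counts the entire message, including these 7 header bytes.
TVERSION = 100  # message type for the initial version negotiation

def parse_header(data):
    size, msg_type, tag = struct.unpack("<IBH", data[:7])
    return size, msg_type, tag

# A Tversion header for a message with a 6-byte body after the header:
raw = struct.pack("<IBH", 13, TVERSION, 0xFFFF)
size, msg_type, tag = parse_header(raw)
print(size, msg_type, hex(tag))  # 13 100 0xffff
```

That seven-byte framing is the entire transport-level complexity, which goes some way toward explaining why hobby and systems projects keep reimplementing it.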
While functional programming languages like Haskell are conducive to modularity and otherwise generally good software engineering practices, they are unfit as implementation languages for what I will call interactive systems. These are systems that are heavily IO bound and must provide some sort of guarantee with regard to response time after certain inputs. I would argue that the vast majority of software engineering is the engineering of interactive systems, be it operating systems, GUI applications, control systems, high frequency trading, embedded applications, databases, or video games. Thus Haskell is unfit for these use cases. Haskell on the other hand is a fine implementation language for batch processing, i.e. non-interactive programs where completion time requirements aren’t strict and there isn’t much IO.
It's not a dig at Haskell; this is an intentional design decision. While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program. These are design trade-offs, not strict wins.
While languages like Python/Java remove the need to consider memory allocation, Haskell takes this one step further and removes the need to consider the sequential steps required to execute a program.
Haskell makes it necessary to explicitly mark code which must be performed in sequence, which, really, is a friendlier way of doing things than what C effectively mandates: In C, you have to second-guess the optimizer to ensure your sequential code stays sequential, and doesn’t get reordered or removed entirely in the name of optimization. When the
IO monad is in play, the Haskell compiler knows a lot of its usual tricks are off-limits, and behaves itself. It’s been explicitly told as much.
Rust made ownership, previously a concept which got hand-waved away, explicit and language-level. Haskell does the same for “code which must not be optimized as aggressively”, which we really don’t have an accepted term for right now, even though we need one.
The optimiser in a C implementation absolutely won’t change the order in which your statements execute unless you can’t observe the effect of such changes anyway. The definition of ‘observe’ is a little complex, but crucially ‘my program is faster’ isn’t an observation that counts. Your code will only be reordered or removed in the name of optimisation if such a change is unobservable. The only way you could observe an unobservable change is by doing things that have no defined behaviour. Undefined behaviour exists in Haskell and Rust too, in every language.
So I don’t really see what this has to do with the concept being discussed. Haskell really isn’t a good language for expressing imperative logic. You wouldn’t want to write a lot of imperative logic in Haskell. It’s very nice that you can do so expressively when you need to, but it’s not Haskell’s strength at all. And it has nothing to do with optimisation.
What if you do it using a DSL in Haskell like Galois does with Ivory? Looks like Haskell made their job easier in some ways.
Still part of Haskell and thus still uses Haskell’s awful syntax. Nobody wants to write
a <- local (ival 0), or
b' <- deref b; store a b', or
n `times` \i -> do when they could write
int a = 0;,
a = *b; or
for (int i = 0; i < n; i++).
“Nobody wants to”
You're projecting your wishes onto everybody else. There are piles of Haskell code out there, many DSLs, and some in production. Clearly, some people want to, even if some or most of us don't.
I'm not confused. Almost all languages fail, getting virtually no use past their authors. The next step up gets a few handfuls of code. Haskell has had piles of it in comparison, plus corporate backing and use at small scale. Then there are larger-scale backings like Rust or Go. Then there are companies with big market share throwing massive investments into things like .NET or Java. There are also FOSS languages that got lucky enough to reach similarly high numbers.
So, yeah, “piles of code” is an understatement, given that most efforts didn't go that far and a pile of paper with source might not cover the Haskell out there.
I don’t care how popular Haskell is compared to the vast majority of languages that are used only by their authors. That’s completely irrelevant to the discussion at hand.
Haskell is not a good language for expressing imperative concepts. That’s plainly and obviously true. Defending it on the basis that it’s widely used ignores that firstly languages aren’t better simply because they’re widely used, secondly that languages can be widely used without necessarily being good at expressing imperative concepts, and thirdly that Haskell isn’t widely used.
int a = 0 is okay, but not great.
a = *b is complete gobbledygook that doesn’t look like anything unless you already know C, but at least it’s not needlessly verbose.
for (int i = 0; i < n; i++) is needlessly verbose and it looks like line noise to anyone who doesn’t already know C. It’s a very poor substitute for actual iteration support, whether it’s
n.times |i| or
for i in 0..n or something else to express your intent directly. It’s kind of ridiculous that C has special syntax for “increment variable by one and evaluate to the previous value”, but doesn’t have special syntax for “iterate from 0 to N”.
All of that is kind of a minor nit pick. The real point is that C’s syntax is not objectively good.
How in the world are people unfamiliar with Ruby expected to intuit that n.times |i| means “replace i with iterative values up to n” and not “multiply n times i”?
You do know C. I know C. Lots of people know C. C is well known, and its syntax is good for what it’s for.
a = *b is not “gobbledygook”; it's a terse way of expressing assignment and dereferencing. Both are very common in C, so they have short syntax. Incrementing a variable is common, so it has short syntax.
That’s not ridiculous. What I am saying is that Haskell is monstrously verbose when you want to express simple imperative concepts that require a single character of syntax in a language actually designed around those concepts, so you should use C instead of Haskell’s weird, overly verbose and syntactically poor emulation of C.
How does Haskell allow you to explicitly mark code that must be performed in sequence? Are you referring to seq? If you're referring to the IO monad, it's a fair point, but I think it's generally considered bad practice to default to using the IO monad. This sort of thing creates a burden when programming Haskell, at least for me. I don't want to have to constantly wonder if I'll need to port my elegant functional code into sequential IO monad form in the future. C++/Rust address this sort of decision paralysis via “zero-cost abstractions,” which make them both more fit to be implementation languages, according to my line of reasoning above.
Personally, I dislike discussions involving “the IO Monad”. The key point is that Haskell uses data flow for control flow (i.e. it’s lazy). We can sequence one thing after another by adding a data dependency (e.g. making
bar depend on the result of
foo will ensure that it runs afterwards).
Since Haskell is pure, compilers can understand and optimise expressions more thoroughly, which might remove ‘spurious’ data dependencies (and therefore sequencing). If we want to prevent that, we can use an abstract datatype, which is opaque to the compiler and hence can’t be altered by optimisations. There’s a built-in datatype called
IO which works well for this (note: none of this depends at all on monads).
The trouble is that oftentimes when you’re building time-sensitive software (which is almost always), it’s really inconvenient if the point at which a function is evaluated is not clear from the source code. Since values are lazy, it’s not uncommon to quickly build up an entire tree of lazy values, and then spend 1-2 seconds waiting for the evaluation to complete right before the value is printed out or displayed on the screen.
You could argue that it’s a matter of setting correct expectations, and you’d be right, but I think it defeats the spirit of the language to have to carefully annotate how values should be evaluated. Functional programming should be about functions and pure computation, and there is no implicit notion of time in function evaluation.
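The shape of that problem is easy to mimic in a strict language with explicit thunks: construction is instant, and all the cost lands wherever the values are finally forced, which is exactly what makes latency hard to read off the source. A toy sketch, not Haskell semantics:

```python
import time

# Each "node" is a thunk: a zero-argument function that does its work
# only when called, like an unevaluated lazy value.
def expensive(n):
    return lambda: sum(i * i for i in range(n))

# Building the tree of deferred work is effectively instant.
thunks = [expensive(100_000) for _ in range(50)]

# ...and the entire cost is paid here, at the point of use,
# far from where the values were defined.
start = time.perf_counter()
total = sum(t() for t in thunks)
elapsed = time.perf_counter() - start

print(total > 0)  # True
```

In Haskell the thunking is implicit and pervasive, so the stall shows up at a print or render call with nothing in the surrounding code hinting at it.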
I agree that Haskell seems unsuitable for what is generally called “systems programming” (I'm currently debugging some Haskell code that's been over-complicated in order to become streaming). Although it can support DSLs to generate suitable code (I've no experience with that, though).
I was just commenting on using phrases like “the IO Monad” w.r.t. evaluation order, etc. which is a common source of confusion and hand-waving for those new to Haskell, or reading about it in passing (since it seems like (a) there might be something special about
IO and (b) that this might have something to do with Monads, neither of which are the case).
building time-sensitive software (which is almost always)
Much mission-critical software is running in GC’d languages whose non-determinism can kick in at any point. There’s also companies using Haskell in production apps that can’t be slow. At least one was using it specifically due to its concurrency mechanisms. So, I don’t think your “almost always” argument holds. The slower, less-predictable languages have way too much deployment for that at this point.
Even “time-sensitive” doesn't mean what it seems to mean outside real-time, since users and customers often tolerate occasional delays or downtime. The cases they don't tolerate might also be fixed with some optimization of those modules. Letting things be a bit broken and fixing them later is the default in mainstream software. So it's no surprise it happens in lots of deployments that are supposedly time-critical as a necessity.
In short, I don’t think the upper bounds you’ve established on usefulness match what most industry and FOSS are doing with software in general or timing-sensitive (but not real-time).
Yeah, it's a good point. There certainly are people building acceptably responsive apps with Haskell. It can be done (just like people are running Go deployments successfully). I was mostly speaking from personal experience on various Haskell projects across the gamut of applications. Depends on cost/benefit, I suppose. For some, the state-of-the-art type system might be worth the extra cycles dealing with the occasional latency surprise.
The finance people liked it because it was both closer to their problem statements (math-heavy), the apps had lower defects/surprises vs Java/.NET/C, and safer concurrency. That’s what I recall from a case study.
If you’re referring to the IO Monad, it’s a fair point, but I think generally it’s considered bad practice to default to using the IO monad
Lmao what? You can define
>>= for any data type, effectively allowing you to create a DSL in which you can very precisely specify how the elements of the sequence combine with neat
Yes, that's exactly the problem to which I'm referring: “Do notation considered harmful”. Also, do notation isn't enough to specify evaluation sequencing, since values are lazy. You must also carefully use
Ah well I use a Haskell-like language that has strict-by-default evaluation and seems to be able to address a lot of those other concerns at least by my cursory glance:)
Either way, the benefits of do, in separating the logic and execution of procedures, look great to me. But I may be confusing them with the benefits of dependent typing; nevertheless, the former facilitates the latter when it comes to expressing various constraints on a stateful system.
Oh yeah, it's mostly historical. They dropped the work for the next project, then dropped that for an even better one. We got some papers and demos out of it.
Exactly! Even more so, there’s a lot of discussion of how to balance the low-level access against Haskell’s high-level features. They did this using the H Layer they describe in some of their papers. It’s basically like unsafe in Rust where they do the lowest-level stuff in one way, wrap it where it can be called by higher-level Haskell, and then do what they can of the rest in Haskell. I figured the concepts in H Layer might be reusable in other projects, esp safe and low-level. The concepts in Habit might be reusable in other Haskell or non-Haskell projects.
It being old doesn't change that. A good example is linear logic, which dates from the 1980s. That got used in ML first, I think, years later; then alongside singleton types in some safer Cs in the 2000s; and an affine variant of it in Rust, which made a huge splash with its “no GC” claim. Now linear and affine types are being adapted to many languages. The logic is decades old, with people talking about using it for language safety for 10-20 years. Then someone finds it useful in a modern project with major results.
Lots of things work that way. It’s why I submit older, detailed works even if they have broken or no code.
None of the examples of “interactive systems” you mention are normally IO bound. Sub-second response-time guarantees, OTOH, are only possible by giving up GC and using a real-time kernel. Your conclusion that Haskell is unusable for “these use cases” seems entirely unfounded. Of course, using Haskell for real-time programming is a bad idea, but no less bad than anything that's, essentially, not C.
I've had a few personal experiences writing large Haskell applications where it was more trouble than I thought it was worth. I regularly had to deal with memory leaks due to laziness and 1-5 second stalls at IO points where large trees of lazy values were evaluated at the last minute. I said this in another thread: it can be done; it just requires a bit more effort and awareness. In any case, I think it violates the spirit of Haskell programming to have to carefully consider latency issues, GC times, or lazy value evaluation when crafting pure algorithms. Having to trade off abstraction for performance is wasteful IMO; I think Rust and C++ nail this with their “zero-cost abstractions.”
I would label most of those systems IO bound. My word processor is normally waiting on IO, so is my kernel, so is my web app, so is my database, so is my raspberry pi etc.
I guess I'm picking nits here, but using lots of working memory is not a “memory leak”, and a program that is idling due to having no work to perform is not “IO bound”. Having “to carefully consider latency issues, GC times, [other tradeoffs]” is something you have to do in every language. I'd venture that the ability to do so on a subconscious level is what distinguishes a skilled developer from a noob. This also, I think, plays a large part in why it's hard for innovative/weird languages to find adoption; they throw off your sense of how things should be done.
Yes, you have to consider those things in all languages, which is precisely my point. Haskell seeks to abstract away those details, but if you want to use Haskell in any sort of “time-sensitive” way, you have to litter your pure, lazy functional code with annotations. That defeats the purpose of the language being pure and lazy.
And yes, waiting on user input does make your program IO bound. If your program is spending more time waiting on IO and less time churning the CPU, it is IO bound. IO bound doesn’t simply mean churning the disk.
I brought that up before as a counterpoint to using Haskell. A Haskeller gave me this link which is a setting for making it strict by default. Might have helped you out. As a non-Haskeller, I can’t say if it makes the language harder to use or negates its benefits. Worth looking into, though, since it was specifically designed to address things like bang patterns that were cluttering code.
So I'm pretty familiar with how cool BeFS is, but what are the other advantages of BeOS over Unix?
Here's the 10,000ft view. The main advantage was performance through pervasive concurrency. My current box is Linux on an Intel Celeron. Its responsiveness is worse than BeOS's was on a Pentium, due to inferior architecture. My favorite demonstration is this clip where they throw everything they can at the machine. Notice (a) that it doesn't crash immediately, unlike OSes of the time, and (b) the graceful degradation and recovery. I still can't do that on crap hardware with Ubuntu without lots of lagging or something.
When I used to demo BeOS for folks in NYC, I would throw even more at it than what is in the demo. It was amazing how good BeOS was. I’ve never had a better day to day OS in terms of stability and responsiveness. Even when it was in beta and I had to kill off servers from time to time, they would pop right back up and everything would keep going.
I rarely hear that something is better than the demo in practice. Wow. Its architecture is worth copying today. I don't know how close HaikuOS is to the architecture and stability under load. They might be doing their own thing in some places.
Anyone copying or reusing its principles today also has tools like Chapel, Rust, and Pony that might make the end result even better in both performance and stability. QNX and BeOS were the two I wanted cloned the most into a FOSS desktop. I hate freezes, crashes and data loss. Aside from hardware failure, no reason we should have to deal with them any more.
The relatively limited time that I got to use QNX was pretty nice. It was limited in some areas but from a stability and responsiveness standpoint, it was a joy to use.
If you’re curious, John Nagle described how it balanced performance and stability here. He’s constantly encouraging a copy of its design on places like HN. I did find an architectural overview from the company itself in 1992.
EDIT: Another person on HN described what the desktop experience was like. That person's main memory was how big compiles would slow down their main workstation but not the QNX desktop. Its real-time design, and maybe built-in priorities, made sure the UI parts ran immediately despite heavy load from other processes. The compiles got paused just enough for whatever he was doing. That sounded cool, given that one app can drag down my whole system or interrupt my text to this day.
Back in the mid-90s I was hired to port a bunch of Unix programs to QNX. What blew me away about QNX was the network transparency in the command line. I could run a program on my machine A, loading a file from B, piping the output to a program that lives on C but run it on D and pipe that output to a local printer hooked to E, all from the command line. My boss would regularly use the modem attached to my machine from his machine (in the office next to mine).
Now, this meant that all machines had to have the same accounts installed, and the inter-process message passing was done over Ethernet, not IP, so it was limited to a single segment. Such a setup might not fly that well in these more security-conscious days.
As far as speed goes, QNX was fast. I had friends that worked at a local company that sold commercial X servers and they said that the fastest X servers they had all ran on QNX.
That all sounds awesome. I wonder if it's still that fast on something like Intel Core CPUs, given how hardware has changed (e.g. CPU vs memory bottlenecks). Some benchmarks would be interesting against both Linux and L4-based systems.
If it held up, then someone should definitely clone and improve on it. Alternatively, port its tricks to other kernel types or hypervisors.
I can't keep up with PC-BSD branding and structure changes anymore; BSD distributions are a weird concept.
It's just one company (iXsystems) that makes you think that.
Think about NetBSD/OpenBSD/FreeBSD and how long they have gone without any ‘distribution’ perturbations…
iXsystems has the very solid FreeNAS and TrueNAS distributions, but they ‘struggle’ with the PC-BSD/TrueOS Desktop and Server editions; it's actually explained quite well in the article :)
I understand why the changes happened. I just question if it's really needed. Doesn't it clash with the usual BSD consistency you get from having the kernel and userland developed in unison?
You mean TrueOS incorporating LibreSSL instead of OpenSSL and OpenRC instead of rc?
Alternatives are good. I did not like PC-BSD and I find Lumina ugly, but that does not mean the ‘idea’ of Lumina is bad. It's the only BSD-oriented DE currently.
XFCE or MATE work well on OpenBSD or FreeBSD, but a little ‘closed’ integration may help (or may develop into a standard way of such interaction between the DE and FreeBSD that XFCE or MATE could later adopt).
IMHO GhostBSD does a better job of being ‘desktop’ FreeBSD, but it's still not perfect either…
Not a great question pool, IMHO. Many of these questions are biased toward a person with a very specific personal profile. Asking about a “home lab” disqualifies people who don't spend weekends terminating Cat5 or reimaging old Dell servers from eBay. A lot are outdated, e.g. the difference between a “router” and a “gateway” (I've seen that one for 15+ years and it didn't make sense then either). Others are uselessly vague, particularly “what is redirection?” with no context.
The test ended up being an actual in-terminal test with specific goals and full access to man pages and the internet. So it wasn't anything like these questions (though they were helpful to distract me the night before). It went pretty well, I think!
So on automatically redirecting to https. Is this something we should really be doing? (For static website with no forms)
I want my website to be easy to download even in the most backwards recesses of northern Quebec. Some places still have dialup. So I offer both http and https; just use a plugin or enter the right address if you want https.
I feel like I've put too many tags, but there isn't a catch-all tag for systems administration like there is for programming.
I have a Linux/BSD sysadmin interview on Tuesday morning, so I'm looking up questions.
I wonder how using a combination of entr(1) and stack repl would feel? Or is spawning ghci too heavy to do it over and over again.
I don't post often, but I did redo my website recently and posted it here a few days ago.
“A practical and portable Scheme system”
From the website
(The post and the linked email didn’t give me any idea what Chicken was.)
The way I remember it is it’s the Scheme that compiles to C for speed and portability. silentbicycle posted this interview with the author. aminb added someone’s blog posts on interesting work.
Nice work, and the tool looks nice!
I really like the simplicity of your website; it's awesome to browse, with no clutter. Just what's required.
Thanks! That was the primary goal.
SSG fulfills my secondary goal, which is to make it simpler to maintain.
Not using an external set of libraries to build your UI does not mean your internal abstractions do not form their own unique framework.
But good for them!
I’ve had the Little Schemer sitting on my shelf for far too long. I really need to go through it. I wonder if I can go through the book using CHICKEN.
CHICKEN happens to provide everything you need by default, but any modern implementation will do. You may need to define a few small procedures as you go if your platform doesn't already have them – sub1 from memory, and possibly others – but recent editions include definitions for these, and actually executing the programs is almost beside the point anyway, for the first half of the book at least.
Probably, although in my experience every exercise is perfectly doable on paper and gives a welcome break from the screen.
I also prefer going through those exercises on paper. But like @evhan said, you can use CHICKEN, just like about any other Scheme (with a handful of small extra definitions that are nonstandard; IIRC those are mentioned in the preface).
Personally I’d love to see a breakdown of what it takes to go through The * Schemer and SICP outside of racket land. I’ll have to give CHICKEN a try!
I’m not sure what would be needed for SICP or the other schemers, but I’ll keep that in mind.
As for The Little Schemer, the book itself provides the definition/implementation of every function as you go, with a few exceptions such as quoting and define, but it does tell you how those are invoked in both Scheme and Common Lisp.