1.  

    I really like how the author approaches programming and editors from a usability perspective. He is very articulate and did some thorough research on the history of Lisp editors, recreating the key interactions to illustrate how they worked.

    1. 2

      I personally don’t like standups, but we do them so that designers know what devs do and vice versa. It’s an opportunity to keep everyone in sync (we’re about 10 people). We do it once a week and keep it short. The article seems to skip over the fact that teams are not always performing well, or even just functional – processes and methodologies are necessary tools to help smooth out changes in the mood and composition of the team. That being said, it’s good to see people questioning now-established practices.

      1. 3

        This is just the traditional weekly team meeting.

      1. 9

        I’m not as involved as I used to be, but I’m still on the core team, so feel free to ask me questions if you’ve got any.

        1. 3

          Would you recommend Factor for production use given that it seems to be reaching a sort of plateau in support and community?

          It’s a beautiful language, by the way. Thank you for your work.

          1. 5

            I have Factor running in production. Although I don’t really maintain the web app much - it just ticks along - Factor runs tinyvid.tv and has for the past few years. I originally wrote it to test HTML 5 video implementations in browsers back when I worked on the Firefox implementation of video.

            1. 5

              As always, it depends on what you’re doing—I’d definitely be nervous if you told me you were shoving Factor into an automobile, for example—but Factor the VM and Factor the language are both quite stable and reliable. On top of doublec’s comment, the main Factor website runs Factor (and its code is distributed as part of Factor itself for your perusal), and it’s been quite stable. (We do occasionally have to log in and kick either Factor or nginx, but it’s more common that the box needs to be rebooted for kernel updates.) I likewise ran most of my own stuff on Factor for a very long time, including some…maybe not mission-critical, but mission-important internal tooling at Fog Creek. And finally, we know others in the community who are building real-world things with Factor, including a backup/mirroring tool which I believe is being written for commercial sale.

              The two main pain-points I tend to hit when using Factor in prod are that I need a vocabulary no one has written, or that I need to take an existing vocabulary in a new direction and have to fix/extend it myself. Examples are our previous lack of libsodium bindings (since added by another contributor) and our ORM lacking foreign key support (not a huge deal, just annoying). Both of these classes of issues are increasingly rare, but if you live in a world where everything’s just a dependency away, you’ll need to be ready for a bit of a change.

              You can take a look at our current vocab list if you’re curious whether either of the above issues would impact anything in particular you have in mind.

            2. 1

              What would you say is Factor’s best application domain, the kind of problem it solves best? I met Slava many years ago when he was presenting early versions of Factor to a local Lisp UG, and am curious to see where the language fits now, both in theory and practice.

              1. 4

                My non-breezy answer is “anything you enjoy using it for.” There are vocabularies for all kinds of things, ranging from 3D sound to web servers to building GUIs to command-line scripts to encryption libraries to dozens of other things. Most of those were written because people were trying to do something that needed a library, so they wrote one. I think the breadth of subjects covered speaks well to the flexibility of the language.

                That all said, there are two main areas where I think Factor really excels. The first is when I’m not really sure how to approach something. Factor’s interactive development environment is right up there with Smalltalk and the better Common Lisps, so it’s absolutely wonderful for exploring systems, poking around, and iterating on various approaches until you find one that actually seems to fit the problem domain. In that capacity, I frequently use it for reverse-engineering/working with binary data streams, exploring web APIs, playing with new data structures/exploring what high-level design seems likely to yield good real-world performance, and so on.

                The second area I think Factor excels is DSLs. Factor’s syntax is almost ridiculously flexible, to the point that we’ve chatted on and off about making the syntax extension points a bit more uniform. (I believe this branch is the current experimental dive in that direction.) But that flexibility means that you can trivially extend the language to handle whatever you need to. Two silly/extreme examples of that would be Smalltalk and our deprecated alternative Lisp syntax (both done as libraries!), but two real examples would be regular expressions, which are done as just a normal library despite having full syntax support, and strongly typed Factor, which again is done at the library level, not the language level. I have some code lying around somewhere where I needed to draft up an elaborate state machine, and I quickly realized the best path forward was to write a little DSL so I could just describe the state machine directly. So that’s exactly what I did. Lisps can do that, but few other languages can.

              2. 1

                Were native threads added in this release, or are there plans to? And did anything ever come to fruition with the tree shaker that Slava was working on way back when?

                Major props on the release. It’s really nice to see the language survive Slava disappearing into Google.

                1. 5

                  The threads are still green threads, if that’s what you’re asking, but we’ve got a really solid IPC story (around mailboxes, pattern matching, Erlang-style message passing, etc.), so it’s not a big deal to fire up a VM per meaningful parallel task and kick objects back and forth when you genuinely need to.
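
                  The mailbox style is easy to sketch in Python terms. Here threads and queues stand in for Factor’s per-task VMs, and the `worker` function and its messages are invented for illustration:

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # Each task owns its own state and talks to others only via messages.
    for msg in iter(inbox.get, None):  # None acts as the stop sentinel
        outbox.put(msg * msg)

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in (2, 3, 4):
    inbox.put(n)
inbox.put(None)  # tell the worker to shut down
t.join()
results = [outbox.get() for _ in range(3)]
print(results)  # [4, 9, 16]
```

                  With one VM (or process) per task, the same send/receive shape applies; only the transport underneath changes.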

                  In terms of future directions, I don’t know that we’ve got anything concrete. What I’d like to do is to make sure the VM is reentrant, allow launching multiple VMs in the same address space, and then make the IPC style more efficient. That’d make it a lot easier to keep multithreaded code safe while allowing real use of multiple cores. But that’s just an idea right now; we’ve not done anything concrete in that direction, as far as I know.

                  1. 1

                    Really off-topic, but isn’t Slava at Apple?

                    1. 1

                      He is now. Works on Swift.

                  2. 1

                    Where does the core Factor team typically communicate these days? #concatenative on freenode seems kinda dead. Is there a mailing list, or is it the Yahoo group?

                  1. 1

                    Now I’m very interested in how that would impact overall performance in practice.

                    1. 4

                      I use a blank version of the Truly Ergonomic Split Keyboard with a RollerMouse Red and I find it very ergonomic. Everything is accessible with limited hand motion, and the palm rests of the RollerMouse make it very comfortable. The only complaint I have about the TEK is that the keycaps are not very durable (at least with the version I have), but I managed to find replacements for most of them through Signature Plastics. Here’s what the setup looks like.

                      1. 4

                        I think I first read about Icon via Laurence Tratt’s Converge, which borrowed the idea of goal directed execution: http://tratt.net/laurie/research/pubs/html/tratt__experiences_with_an_icon_like_expression_evaluation_system/.

                        Correctly or incorrectly, Tratt concluded that backtracking in Icon was more difficult to use than one might hope, and reduced its scope in his language.

                        1. 2

                          It’s a very clearly written article. I did not know about goal-directed execution, and the description is very approachable. It’s an interesting variant on handling control flow.

                        1. 5

                          I use HJSON in place of YAML and found that it offers the same benefits (Pythonic, structured configuration) without the problems (no tabs, complex documentation). It’s a little format that deserves to be better known!

                          1. 6

                            We’ve been using Pulumi as part of the private beta for a couple weeks now. We found out about it right at the tail end before it was opened up. So far, loving it. @pzel has been doing most of the work with it so far and might be able to give folks details if they are interested.

                            1. 1

                              Thanks for sharing. The documentation left me a bit confused, so knowing it works in a real-world scenario makes me want to know more about it.

                              1. 2

                                What won me over is the fact that it really is infrastructure-as-code – with emphasis on code. This gives me hope that the ever-encroaching complexity of configuration management can be corralled using tactics developers know from ‘regular programming’: refactoring, abstraction, etc.

                                1. 1

                                  This would actually be a very good argument for using a Lisp/Scheme flavor (or Lua, for that matter) as a default configuration language for any tool, instead of all the INI, XML, TOML, YAML, JSON and others. I think GNU kind of tried to put Guile everywhere, at least in its desktop programs.

                                  1. 3

                                    Perhaps there is an alternate reality somewhere where people are not allergic to S-Expressions and XML/Json & friends were never invented. I’m not holding my breath, though. Also: Guile + Guix are beautiful tools, and I’d love to see them used more. Alas, that isn’t the case :(

                                    For what it’s worth, Pulumi lets you drive the engine using any language that can speak gRPC, so there’s really no technical reason why a Scheme or Lua front-end to it can’t be built.

                                    1. 1

                                      Ha! That would be nice :) I find XML an interesting case: the early drafts of XSLT were like Lisp, so people thought about that and then backtracked. That being said, I find XML/XSLT a very powerful combination, although the ergonomics are rather questionable.

                            1. 4

                              I would add Zig to the list of promising systems languages, and also probably Jonathan Blow’s Jai language.

                              1. 5

                                The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages. One could argue that you should solve the problem on paper first and sketch out the types before writing the implementation, but I find it a good example of how dynamic languages shift our expectations in terms of programming ergonomics.

                                1. 19

                                  I’d be very curious to hear what situations you’ve encountered where you were prototyping a solution that you understood well enough to turn into code, but not precisely enough to know its types? I’ve personally found that I can’t write a single line of code – in any language, static or dynamic – without first answering basic questions for myself about what kinds of data will be flowing through it. What questions do you find the language is forcing you to answer up-front that you would otherwise be able to defer?

                                  1. 7

                                    When I have no idea where I’m going I sometimes just start writing some part of the code I can already foresee, but with no clue how anything around it (or even that part itself) will end up looking in the final analysis. I have no data structures and overall no control flow in mind, only a vague idea of what the point of the code is.

                                    Then with lax checking it’s much easier to get to where I can run the code – even though only a fraction of it even does anything at all. E.g. I might have some function calls where half the parameters are missing because I didn’t write the code to compute those values yet, but it doesn’t matter: either that part of the code doesn’t even run, or it does but I only care about what happens before execution gets to the point of crashing. Because I want to run the stuff I already have so I can test hypotheses.

                                    In several contemporary dynamic languages, I don’t have to spend any time stubbing out missing bits like that because the compiler will just let things like that fly. I don’t need the compiler telling me that that code is broken… I already know that. I mean I haven’t even written it yet, how could it be right.
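
                                    A tiny Python sketch of the same idea, with a hypothetical `decode_rest` helper that hasn’t been written yet:

```python
def parse(line):
    head, _, rest = line.partition(":")
    # decode_rest doesn't exist yet -- Python only complains if we reach it
    return head.strip(), decode_rest(rest)

# The half that exists still runs; the crash point itself is the feedback.
try:
    parse("key: value")
except NameError as err:
    print("got as far as:", err)
```

                                    A statically checked language would reject the whole file until `decode_rest` exists, even if you only wanted to exercise the first line.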

                                    And then I discover what it is that I even wanted to do in the first place as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing as I try to fill in the parts that I discover are missing… etc. Structures turn out to repeat as the code grows, or bits need to cross-connect, so I discover abstractions suggesting themselves, and I gradually learn what the code wants to look like.

                                    The more coherent the code has to be to compile, the more time I have to spend stubbing out dummy parts for pieces of the code I don’t even yet know will end up being part of the final structure of the code or not.

                                    It would of course be exceedingly helpful to be able to say “now check this whole thing for coherence please” at the end of the process. But along the way it’s a serious hindrance.

                                    (This is not a design process to use for everything. It’s bottom-up to the extreme. It’s great for breaking into new terrain though… at least for me. I’m terrible at top-downing my way into things I don’t already understand.)

                                    1. 4

                                      That’s very interesting! If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both. So instead of sketching ideas for your program on a napkin, or on a whiteboard, or in a scratch plaintext file, you can do that exploration using a notation which is both familiar to you and easy to adapt into an actual running program. Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational? And that the parts of your program whose types you’re less confident about are also the parts you aren’t quite ready to execute yet?

                                      If so, then I think our processes are actually quite similar. I mainly program in languages with very strict type systems, but when I first try to solve a problem I often start with a handwritten sketch or plaintext pseudocode. Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like, so that I’ll be able to easily adapt it when the time comes. But either way, we’re both bypassing any kind of correctness checking until we actually know what it is we’re doing, and only once we reach a certain level of confidence do we actually run or (if the language supports it) typecheck our solution.

                                      Let me know if I’ve missed something about your process, but I think I understand the idea of using dynamic languages for prototyping much more clearly now. What always confused me is that the runtime semantics and static types (whether automatically checked or not) of a program seem so tightly coupled that it would be nearly impossible to figure one out without the other, but you seem to be suggesting that when you’re not sure about the types in a section of your program, you’re probably not sure about its exact runtime semantics either, and you’re keeping it around as more of a working outline than an actual program to be immediately run. So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

                                      1. 3

                                        If I’ve understood you correctly, your prototyping approach seems to allow you to smoothly transform non-executable sketches into executable programs, by using the same syntax for both.

                                        Yup.

                                        Would it be correct to say that by the time you actually run a piece of code, you have a relatively clear idea of what types of data are flowing through it, or at least the parts of it that are actually operational?

                                        Well… it depends. For the parts that are most fully written out, yes. For the parts that aren’t, no. Neither of which are relevant when it comes to type checking, of course. But at the margins there is this grey area where I have some data structures but I only know half of what they look like. And at least one or two of them shift shape completely as the code solidifies and I discover the actual access patterns.

                                        If so, then I think our processes are actually quite similar.

                                        Sounds like it. I’d wonder if the different ergonomics don’t still lead to rather different focus in execution (what to flesh out first etc.) so that dynamic vs static still has a defining impact on the outcome. But it sure sounds like there is a deep equivalence, at least on one level.

                                        Now that I think about it, I realize that I often subconsciously try to keep the notation of those sketches as close as possible to what the eventual executable code might look like

                                        Seems natural, no? 😊 The code is ultimately what you’re trying to get to, so it makes sense to keep the eventual translation distance small from the get-go.

                                        So even in that early phase, types and semantics are still “coupled,” but only in the sense that they’re both incomplete!

                                        I had never thought about it this way, but that sounds right to me as well.

                                    2. 4

                                      You didn’t ask me but I’ll answer anyway because I’d like your advice! I am currently prototyping a data processing pipeline. Raw sensor data comes in at one end then is processed by a number of different functions, each of which annotates the data with its results, before emitting the final blob of data plus annotations to the rest of the system. As a concrete example, if the sensor data were an image, one of the annotations might be bounding boxes around objects detected in the image, another might be some statistics, etc.

                                      At this stage in the design, we don’t know what all the stages in the pipeline will need to be. We would like to be able to insert new functions at any stage in the pipeline. We would also like to be able to rearrange the stages. Maybe we will reuse some of these functions in other pipelines too.

                                      One way to program this is the “just use a map” style promoted by Clojure. Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function. So each function will accept data that it doesn’t recognize and just pass it on. This makes everything nicely composable and permits the easy refactoring we want.
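
                                      A minimal Python sketch of that map-passing style (the stage names and fields here are invented for illustration):

```python
def detect_objects(data: dict) -> dict:
    # Annotate with this stage's "results"; pass everything else through.
    return {**data, "boxes": [(0, 0, 4, 4)]}

def compute_stats(data: dict) -> dict:
    return {**data, "mean": sum(data["pixels"]) / len(data["pixels"])}

def run_pipeline(data: dict, stages) -> dict:
    for stage in stages:
        data = stage(data)
    return data

# Stages can be reordered or new ones inserted without changing signatures.
result = run_pipeline({"pixels": [1, 2, 3]}, [detect_objects, compute_stats])
print(result["mean"])  # 2.0
```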

                                      How would this work in a statically typed system? If the pipeline consists of three functions A, B then C, doesn’t B have to be typed such that it only accepts the output of A and produces the input of C? What happens when we add another function between B and C? Or switch the order so A comes last?

                                      What would the types look like anyway? Each function needs to output its input plus a bit more: in an OOP language, this quickly becomes a mess of nested objects. Can Haskell do better?

                                      Since I cannot actually use Clojure for this project, I’d welcome any advice on doing this in a statically typed language!

                                      1. 3

                                        In my experience statically typed languages are generally very good at expressing these kinds of systems. Very often, you can express composable pipelines without any purpose-built framework at all, just using ordinary functions! You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values. This approach requires you to explicitly specify your pipeline’s dependency graph, which in my experience is actually extremely valuable because it allows you to understand the structure of your program at a glance. The simplicity of this approach makes it easy to maintain and guarantees perfect type safety.
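
                                        A rough Python sketch of that explicit style, with type annotations standing in for static checking (`Frame`, the stages, and their outputs are invented for illustration):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Frame:
    pixels: List[int]

Box = Tuple[int, int, int, int]

def detect_objects(frame: Frame) -> List[Box]:
    return [(0, 0, 4, 4)]  # stand-in for real detection

def compute_stats(frame: Frame) -> float:
    return sum(frame.pixels) / len(frame.pixels)

def pipeline(frame: Frame) -> Tuple[List[Box], float]:
    # The dependency graph is spelled out: each stage receives exactly
    # the inputs it needs and returns exactly its own results.
    boxes = detect_objects(frame)
    mean = compute_stats(frame)
    return boxes, mean
```

                                        Inserting a stage means adding one call and threading its result through, and the type checker points at every site that needs updating.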

                                        That said, based on your response to @danidiaz, it sounds like you might be doing heavier data processing than a single thread running on a single machine will be able to handle? In that case, depending on the exact kind of processing you’re doing, it’s still possible that you can implement some lightweight parallelism at the function call level without departing too much from modeling your pipeline as an ordinary sequence of function calls. Ordinary (pure) functions are also highly reusable and don’t impose any strong architectural constraints on your system, so you can always scale to a more heavily multi-threaded or distributed environment later without having to re-implement your individual pipeline stages.

                                        If you do have to run your system across multiple processes or even multiple machines, then it is definitely harder to express a solution in a type-safe way. Most type systems don’t currently work very well across process or machine boundaries, and a large part of this difficulty stems from the fact that it is inherently challenging to statically verify the coherence of a system whose constituent components might be independently recompiled and replaced while the system is running. I’m not sure how your idiomatic Clojure solution would cope with this scenario either, though, so I’d be curious to learn more about exactly what the requirements of this system are. These kinds of questions often turn out to be highly dependent on subtle details, so I’d be interested to hear more about your problem domain.

                                        1. 2

                                          You can write your entire pipeline as a function calling out to many sub-functions, passing just the relevant resources through each function’s arguments and return values.

                                          That’s basically what Cleanroom does in its “box structures” that decompose into more concrete boxes. Just functional decomposition. It has semi-formal specifications to go with it, plus a limited set of human-verifiable control-flow primitives. The result is that getting things right is a lot easier.

                                          1. 1

                                            Thank you. Just to emphasize, we are talking about prototyping here. The system I am building is being built to explore possibilities, to find out what the final system should look like. By the time we build the final system, we will have much stricter requirements.

                                            I am working on an embedded system. We have limited processing capability on the device itself. We’d like to do as much processing as we can close to the sensors, but we think we will probably need to offload some of the work to remote systems (e.g. the “cloud”). We also haven’t fixed precisely what on-board processing capability we will have. Maybe it will turn out to be more cost-effective to have a slightly more powerful on-board processor, or maybe it will be helpful to have two independent processors, or maybe lots of really cheap processors, or maybe we should offload almost everything. I work in upstream research so nothing is set in stone yet.

                                            Furthermore, we don’t know precisely what processing we will need to do in order to achieve our goals. Sorry for being vague about “processing” and “goals” here but I can’t tell you exactly what we’re trying to do. I need to be able to pull apart our data processing pipeline, rearrange stages, add stages, remove stages, etc.

                                            We aren’t using Clojure. I just happen to have been binge watching Rich Hickey videos recently and some of his examples struck a chord with me. We are using C++, which I am finding extremely tedious. Mind you, I’ve been finding C++ tedious for about twenty years now :)

                                          2. 2

                                            Here every function takes a map, adds its results to the map as new fields, then passes on the map to the next function.

                                            Naive question: why should functions bother with returning the original map? Why not return only their own results? Could not the original map be kept as a reference somewhere, say, in a let binding?

                                            1. 4

                                              If your functions pass through information they don’t recognize - i.e. accept a map and return the same map but with additional fields - then what is to be done is completely decoupled from where it is done. You can trivially move part of the pipeline to a different thread, process or across a network to a different machine.

                                              You’re absolutely right though, if everything is in a single thread then you can achieve the same thing by adding results to a local scope.

                                              At the prototyping stage, I think it’s helpful not to commit too early to a particular thread/process/node design.

                                        2. 6

                                          I may be too far removed from my time with dynlangs but I’ve always liked just changing a type and being able to very rapidly find all places it matters when things stop compiling and get highlighted in my editor.

                                          1. 5

                                            The comment on how Swift makes prototyping harder by forcing developers to express correct types is spot-on, and would apply to other strongly typed languages

                                            Quick bit of terminology: “strongly typed” is a subjective term. Generally people use it to mean “statically typed with no implicit type coercion”, but that’s not universal. People often refer to both C and Python as strongly typed, despite one having implicit coercion and the other not having static types.

                                            1. 1

                                              Thanks for the clarification!

                                          1. 3

                                            I like most of this, save for unquoted string values - as lines like x: true become ambiguous (could be either the string "true" or the boolean true). YAML does this, but with a whole list of words that evaluate to true or false which can easily lead to subtle mistakes.
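
                                             The failure mode can be sketched with a toy resolver for unquoted scalars, using a YAML-1.1-style word list (this is not a real YAML parser, just an illustration of the ambiguity):

```python
# Toy resolver mimicking how YAML 1.1 interprets unquoted scalars.
# The word lists follow the YAML 1.1 bool type; real parsers vary.
TRUE_WORDS = {"true", "yes", "on", "y"}
FALSE_WORDS = {"false", "no", "off", "n"}

def resolve(scalar: str):
    """Guess the type of an unquoted scalar, YAML-1.1 style."""
    low = scalar.lower()
    if low in TRUE_WORDS:
        return True
    if low in FALSE_WORDS:
        return False
    return scalar  # fall back to a plain string

# The classic surprise: a country code silently becomes a boolean.
print(resolve("NO"))      # False, not the string "NO"
print(resolve("Norway"))  # 'Norway'
```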

                                            1. 1

                                              I found that in practice it works surprisingly well, and is less error-prone than YAML – I never got bitten by the unquoted strings, for what it’s worth. It’s a syntax that deserves to be better known, and a good alternative to *ml languages.

                                            1. 3

                                              Awesome! Thanks for sharing.

                                              wasamasa has experimented with CHICKEN a fair bit and has written some cool blog posts about it.

                                              1. 2

                                                Looked like cool projects. Bookmarked it in case I check out Chicken when trying Scheme again.

                                                1. 2

                                                  Chez Scheme is now open-source, and it’s the best Scheme implementation out there for production work. I find Guile easier to get into than Chicken, but Chicken is much faster than Guile, and has a ton of neat extensions.

                                                  1. 2

                                                    Yeah, it’s on my list, too. It goes way back. The Racket people are also porting to it.

                                              1. 7

                                                The tone of the article makes it read a bit like a rant but, reading between the lines, it seems that the author is pointing at reworking the core concepts that are the foundation of widely used OSes. Some of the examples mentioned, like print or the teletype, are spot-on: we can (and probably should) move past these metaphors from another era. But then there are some core principles that still work well (everything is a file, text over binary, composable commands). I’d be curious to see what concepts/abstractions would be ideal for a present-day OS designed from the ground up.

                                                1. 23

                                                  This is a bit disappointing. It feels a bit like we are walking into the situation OpenGL was built to avoid.

                                                  1. 7

                                                    To be honest we are already in that situation.

                                                    You can’t really use GL on mac, it’s been stuck at D3D10 feature level for years and runs 2-3x slower than the same code under Linux on the same hardware.

                                                    It always seemed like a weird decision from Apple to have terrible GL support, like if I was going to write a second render backend I’d probably pick DX over Metal.

                                                    1. 6

                                                      I remain convinced that nobody really uses a Mac on macOS for anything serious.

                                                      And why pick DX over Metal when you can pick Vulkan over Metal?

                                                      1. 3

                                                        Virtually no gaming or VR is done on a mac. I assume the only devs to use Metal would be making video editors.

                                                        1. 1

                                                          This is a bit pedantic, but I play a lot of games on mac (mainly indie stuff built in Unity, since the “porting” is relatively easy), and several coworkers are also mac-only (or mac + console).

                                                          Granted, none of us are very interested in the AAA stuff, except a couple of games. But there’s definitely a (granted, small) market for this stuff. Luckily stuff like Unity means that even if the game only sells like 1k copies it’ll still be a good amount of money for “provide one extra binary from the engine exporter.”

                                                          The biggest issue is that Mac hardware isn’t shipping with anything powerful enough to run most games properly, even when you’re willing to spend a huge amount of money. So games like Hitman got ported, but you can only run them on the most expensive MBPs or iMac Pros. Meanwhile you have sub-$1k Windows laptops which can run the game (albeit not super well).

                                                        2. 2

                                                          I think Vulkan might not have been ready when Metal was first sketched out – and Apple does not usually like to compromise on technology ;)

                                                          1. 2

                                                            My recollection is that Metal appeared first (about June 2014), Mantle shipped shortly after (by a couple of months?), DX12 showed up mid-2015, and then Vulkan showed up in February 2016.

                                                            I get a vague impression that Mantle never made tremendous headway (because who wants to rewrite their renderer for a super fast graphics API that only works on the less popular GPU?) and DX12 seems to have made surprisingly little (because targeting an API that doesn’t work on Win7 probably doesn’t seem like a great investment right now, I guess? Current Steam survey shows Win10 at ~56% and Win7+8 at about 40% market share among people playing videogames.)

                                                            1. 2

                                                              Mantle got heavily retooled into Vulkan, IIRC.

                                                              1. 1

                                                                And there was much rejoicing. ♥

                                                    1. 2

                                                      I can’t help but think that capabilities would be a great solution to managing memory (as well as other properties). Pony uses capabilities to guard against race conditions (it’s a concurrent language), but I’d love to see the same concepts applied to memory regions and their ownership.

                                                      1. 1

                                                        I believe that was the original use of capabilities in hardware. See Capability-Based Computer Systems. In type systems, Amal Ahmed made linear types more useful by combining them with capabilities. I also found something that may be worth a submission in the near future, despite being old. Another Lobster tipped me off a while back about phantom types being used for something similar to Ahmed’s work. Just found this, which might illustrate the possibilities.

                                                        1. 1

                                                          The Jane Street example is very clear, and I can see how it would be possible to write the equivalent of Pony’s capabilities in OCaml, at the cost of a less expressive syntax but with the great benefit of being able to define a custom capabilities algebra. Thanks for the references, as always!
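                                                          The phantom-type move generalizes beyond OCaml. As a rough illustration of the idea (not Pony’s actual semantics, and with all names invented), here is a minimal sketch in TypeScript, where a phantom permission parameter plays the role of a capability that functions can demand:

                                                          ```typescript
                                                          // Minimal sketch of capability-style access control with a
                                                          // phantom type parameter. The permission exists only at the
                                                          // type level; the compiler enforces it, not the runtime.
                                                          type Cap = "ro" | "rw";

                                                          interface Cell<T, P extends Cap> {
                                                            value: T;
                                                            perm: P;
                                                          }

                                                          function create<T>(v: T): Cell<T, "rw"> {
                                                            return { value: v, perm: "rw" };
                                                          }

                                                          // Reading works with any capability...
                                                          function get<T, P extends Cap>(c: Cell<T, P>): T {
                                                            return c.value;
                                                          }

                                                          // ...but writing requires the "rw" capability.
                                                          function set<T>(c: Cell<T, "rw">, v: T): void {
                                                            c.value = v;
                                                          }

                                                          // A capability can be irreversibly downgraded.
                                                          function readOnly<T>(c: Cell<T, "rw">): Cell<T, "ro"> {
                                                            return { value: c.value, perm: "ro" };
                                                          }

                                                          const rw = create(1);
                                                          set(rw, 2);
                                                          const ro = readOnly(rw);
                                                          console.log(get(ro)); // 2
                                                          // set(ro, 3); // rejected by the type checker
                                                          ```

                                                          Pony’s reference capabilities are much richer (they also track aliasing and sharing across actors), but the basic move is the same: encode permissions in a type parameter and let the type checker refuse operations that lack them.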

                                                      1. 2

                                                        This reminded me of R17 http://www.rseventeen.com/, which is sadly not maintained anymore.

                                                        1. 4

                                                          FRelP seems to me like a more complex and less powerful abstraction than Functional Reactive Programming. The beauty of FRP is the directed-graph approach, which is a simple model that provides a good base for optimization and composition. I found MobX to be a relatively good example of a simplified version of FRP’s principles put to work. I should also point to the Cells (Lisp) project, one of the earliest implementations of FRP’s principles (some documentation is available here). I understand why FRelP did not get too popular: the main reason, to me, is that the relational model constrains how you model and store your data, while FRP works with pretty much anything.
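                                                          For concreteness, the directed-graph model can be sketched in a few lines. This is only a toy push-based version (nothing like MobX’s real implementation, which tracks dependencies automatically); it just shows how a source cell pushes changes along edges to derived cells:

                                                          ```typescript
                                                          // Toy push-based reactive cell: derived cells subscribe to
                                                          // their inputs, forming a directed graph along which updates
                                                          // propagate.
                                                          class Cell<T> {
                                                            private listeners: Array<(v: T) => void> = [];

                                                            constructor(public value: T) {}

                                                            set(v: T): void {
                                                              this.value = v;
                                                              this.listeners.forEach(fn => fn(v));
                                                            }

                                                            // Create a derived cell (an edge in the graph).
                                                            map<U>(f: (v: T) => U): Cell<U> {
                                                              const derived = new Cell(f(this.value));
                                                              this.listeners.push(v => derived.set(f(v)));
                                                              return derived;
                                                            }
                                                          }

                                                          const price = new Cell(10);
                                                          const doubled = price.map(x => x * 2);
                                                          price.set(21);
                                                          console.log(doubled.value); // 42
                                                          ```

                                                          Because `map` returns another cell, derived cells compose into arbitrary graphs, which is exactly the simplicity that makes the model easy to optimize.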

                                                          1. 6

                                                            The fact that Guix is written in Scheme is a big appeal for me as opposed to Nix’s custom language. I preferred Nix as a way to support a standard environment (it has more packages), but this new feature makes the distribution of fat binaries a lot simpler than the other solutions. Less is more!

                                                            1. 1

                                                              FWIW, I tried to dissuade Gentoo from using Bash and Nix from creating their own language, both at basically around the 0.0.1 timeframe. I guess I am not terribly persuasive. Guix and Nix should merge. The separation is kinda ridiculous.

                                                              1. 3

                                                                Guix and Nix should merge.

                                                                Seems like a great idea until you consider Guix’s commitment to freedom and, as a result, a blobless experience. Unless NixOS adopted that stance as well, the philosophical incompatibility would doom it. Nix adopting Guile is more likely, I’d say, especially since Guile did have a Lua-like front end that might make it a bit easier to slowly migrate everything…

                                                                1. 2

                                                                  It is similar to vegetarian and non-vegetarian: one can have a blobless, freedom-filled diet and then occasionally, should they choose, sprinkle some bin01bits on top.

                                                                  1. 1

                                                                    I upvoted, but as a vegan, I kind of take offense (in a half-hearted way, of course) at vegetarians who only “half” commit. But I recognize that not everyone does it for the animals (even vegans).

                                                                    But why would you go out of your way to run a completely free system, only to sprinkle some blobbits on it? That completely invalidates the point! That blob is where the nasty things that disrespect your freedoms are.

                                                                    1. 1

                                                                      You wouldn’t run it for the freeness, but supposedly Guix has some other strengths as well.

                                                                  2. 1

                                                                    I didn’t realize Guix forbade blobs (though I’m not surprised, given its origin). Is there a with-blob version of Guix? I didn’t see one, but that doesn’t necessarily mean no…

                                                                    1. 1

                                                                      Obviously, you can acquire and install the blobs yourself, and I’m sure there are blog posts around in support of that. But, yeah, it’s like Trisquel, gNewSense, and the others that have similar governance for totally-libre distributions.

                                                                      1. 1

                                                                        I haven’t used it in a long time, but I thought that you could point Guix at the package store from Nix, similar to how you can point Debian at apt repos from other sources. You would have to be really careful with this; I remember early on getting burned because I asked Nix to install Firefox and it gave me Firefox-with-adobe-flash which was pretty gross.

                                                                    2. 3

                                                                      Ha! Well, there must be an alternate universe where you managed to convince them ;) I think they do borrow some ideas and even some code (I remember a FOSDEM talk from Ludovic last year mentioning that). Implementation-wise, I would suspect Guix has the upper hand, but the restriction to GNU packages is problematic when you need specific packages.

                                                                  1. 2

                                                                    Functional Reactive Programming is becoming a mainstream paradigm now; many people mention it, both in the front end (its historical domain) and now in the back end. MGMT does a great job of reifying its concepts into language constructs, and I wish we had a make-like tool based on a similar approach.

                                                                    1. 3

                                                                      I’m surprised Revelation is not mentioned. It’s full featured, can import/export – the only drawback is that it’s not cloud-based and has no mobile/web interface. But for desktop use, it’s really good.

                                                                      1. 1

                                                                        This is part of many Linux distributions, so it’s easy to install. But the latest commit on the GitHub repository (f574668, on 20 Sep 2013) does not instil confidence that this is maintained.