1. 8

    I am also running Pop OS. I am running a couple of instances within Hyper-V and one on a laptop. Super happy as well.

    Fonts look nice, colors selected well, all up-to-date software installs fine. Upgrade from 19 to 20 worked without a hiccup.

    For Hyper-V setups, I used https://github.com/Microsoft/linux-vm-tools to enable enhanced integration (meaning I get Pop OS copy/paste between it and the host, and full resolution over a faster virtual socket when using xrdp).

    Recent Hyper-V versions allow nested virtualization (VM-within-VM), so I can actually run the Android emulator on a Pop OS guest hosted within Hyper-V.

    1. 4

      Very nice to see the Gemini protocol built right into the SerenityOS web browser.

      Totally ‘out of the blue’ question: support for mobile/tablet UIs and for other CPU ISAs – is that something the author(s) are thinking about?

      1. 4

        “… As the commandline tool is no longer a driving force for the development of the protocol, it is becoming increasingly clear that more fine-grained control and modularity is needed. Thus, the core maintainers of the protocol have decided to retire the ‘dat’ protocol and rename it to reflect it’s underlying component called ‘hypercore’. …”

        Is there a more detailed list of the dat protocol’s shortcomings in ‘fine-grained control and modularity’, and how they will be addressed? Or were the shortcomings in the CLI tool itself?

        lobste.rs has had a number of interesting articles/submissions that touched on IPFS and Dat. I am still slowly learning and figuring out which of those will ‘take off’, so to speak – especially for ecosystems encompassing ‘GeoCities-like’ blogs, discussion forums, and surrounding services, but with strong privacy needs, slow/unreliable internet connections, and low compute resources.

        1. 4

          The Beaker Browser seems like a very interesting development in the ‘GeoCities-like’ area, with its built-in website editor and insta-sharing. Pair it with, e.g., a 100MB free tier for ‘pinning’ via hashbase.io (IIUC), or alternatively self-hosted seeding, e.g. on an RPi, I hope. Apparently they released an official beta of Beaker with some major improvements the same day they announced the Dat->Hyper restructuring. With those steps I now kinda feel like they have outcompeted IPFS on the user-friendliness front ATM. Super cool to watch such an awesome race! :)

          1. 1

            Yes it is interesting to see.

            At least in my reading, it seems that IPFS sees browsing for information as the ‘last mile’, much as cable/telephony companies see connecting to a ‘residence’ as the ‘last mile’. With that analogy, IPFS sees their HTTP gateways as a necessary component that allows existing web browsers (with specific extensions) to plug into the IPFS ecosystem. [1]

            Hypercore’s Beaker, meanwhile, appears to be a ‘File Manager’ with a built-in document editor, address book, disk manager, and website builder (Profile Drives). [2]

            What I do not understand, though: if somebody has, say, a hybrid citizen of the ecosystem – an SPA (single-page JavaScript app) that needs access to a person’s address book plus some remote non-p2p database – how would that work?

            [1] https://blog.ipfs.io/2019-10-08-ipfs-browsers-update/
            [2] https://docs.beakerbrowser.com/joining-the-social-network

          2. 2

            I believe the shortcomings are with the CLI itself.

            Dat and the CLI were initially informed by academic use: versioning and sharing datasets in academic settings (think sharing genome sequences or astronomy data between universities). But it became clear that the underlying protocols could be used for lots of different applications outside of academia. Personally I’m most excited about Kafka-like databases built on hypercore, such as kappa. Over the past few years the protocol has seen a lot of iteration, and the CLI – maintained by a different group than the protocol devs – didn’t really keep up.

            1. 1

              Thx for the background.

              With regards to kappa, it seems that queries require pre-created materialized views (I am reading [1]).

              That means ad-hoc queries cannot be done efficiently (each one essentially requires creating a subset of the database data that the query needs).

              Seems like a reasonable compromise for many real-world cases, but perhaps not suited to the ones where a developer cannot anticipate the specific queries ahead of time.

              [1] https://github.com/kappa-db/kappa-view-query
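              A minimal, generic sketch of the trade-off (plain Python, not kappa-view-query’s actual API): a materialized view is built once by folding over the append-only log, so the anticipated query becomes a cheap lookup, while an ad-hoc query has to scan the whole log.

              ```python
              # Append-only log of messages, in the spirit of a kappa-style database.
              log = [
                  {"type": "post", "author": "alice", "id": 1},
                  {"type": "post", "author": "bob", "id": 2},
                  {"type": "post", "author": "alice", "id": 3},
              ]

              def build_author_view(entries):
                  """Materialize an index keyed by author, paid for once up front."""
                  view = {}
                  for entry in entries:
                      view.setdefault(entry["author"], []).append(entry["id"])
                  return view

              view = build_author_view(log)

              # Anticipated query: served by the pre-built view, no log scan needed.
              alice_posts = view["alice"]

              # Ad-hoc query: no view was prepared for it, so scan the entire log.
              recent_posts = [e["id"] for e in log if e["id"] > 1]
              ```

              The point being: the view has to exist before the query arrives, which is exactly the limitation for queries nobody anticipated.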

          1. 1

            Wanted to add a link to the Halide page.


            1. 1

              Certainly this makes distribution simpler.

              But this is not the same as a ‘jar’, where I can build one JAR and have it run on NetBSD, Linux, and Windows. I am not that familiar with .NET – is there such a thing?

              1. 1

                Currently it is integrated with the Xcode (Mac) IDE. That means (guessing here!) that the generated source would have to be checked into source control, to be used later in an Android build.

                It would be nice if .swift files were translated at the time of the Android build.

                However, underneath it seems to require the Swift runtime (and maybe Xcode/Mac-specific pieces, not clear) – that would prevent ‘Android build time’ translation (unless one is building the Android app on a Mac as well).

                1. 6

                  This seems to be a so-called ‘fluent style’ API that sets initial ‘configuration’ parameters. What makes it ‘declarative’?

                  1. 3

                    I see declarative as a spectrum. Fluent style is probably still close to imperative but is a step in the direction of declarative. To me, this library looks like functional reactive programming but domain specific. It’s not uncommon for people to view functional reactive as being closer to declarative. What puts it in that category is that some control flow is abstracted away and it seems like the functions are referentially transparent.
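                    As a toy illustration of that spectrum (hypothetical names, not the library under discussion): a fluent builder is still a sequence of calls the caller executes, while a declarative variant describes the animation as plain data and leaves execution to an interpreter.

                    ```python
                    # Fluent style: each method returns self, so calls chain, but the
                    # caller still drives an execution sequence step by step.
                    class AnimationBuilder:
                        def __init__(self):
                            self.props = {}

                        def duration(self, seconds):
                            self.props["duration"] = seconds
                            return self  # returning self is what makes the API "fluent"

                        def easing(self, name):
                            self.props["easing"] = name
                            return self

                    # Declarative style: the animation is just data; a separate
                    # interpreter decides how and when to realize it.
                    def interpret(spec):
                        return {"duration": spec.get("duration", 1.0),
                                "easing": spec.get("easing", "linear")}

                    fluent_result = AnimationBuilder().duration(2.0).easing("ease-in").props
                    declarative_result = interpret({"duration": 2.0, "easing": "ease-in"})
                    # Both describe the same animation; the declarative one stays inert
                    # data until interpret() runs, which is what enables inspection,
                    # debugging, and editing of the specification itself.
                    ```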

                    1. 4

                      I see declarative as a spectrum

                      You may be interested in the famous Van Roy’s organization of programming paradigms: https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf. Original graphical summary: https://continuousdevelopment.files.wordpress.com/2010/02/paradigms.jpg, revised summary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Programming_paradigms.svg.

                      It is definitely a spectrum, but a multi-dimensional one. :)

                    2. 1

                      I too find the post a bit strange and unorganized. The following passage is repeated twice for some reason:

                      As hinted above, since our specification of the animation was entirely declarative, it can’t really “do anything else” like manipulate the DOM. This gives us fantastic debugging and editing capabilities. As it’s “just” a mathematical function:

                      anim_circle: (t: Time) -> (cx: float, cr: float)
                    1. 1

                      Tried to install this release on Hyper-V, multi-CPU. It still does not work on that hypervisor. I know it should work on KVM or VirtualBox, but I have not built a host with those yet.

                      I know that OmniOS runs as a guest with multiple CPUs enabled on Hyper-V. What I do not understand is why OmniOS works as a guest on Hyper-V but OpenIndiana does not. Is there a technical reason, or is it a distro choice?

                      1. 4

                        Link is broken for me. Working version of the link. I don’t know how to suggest an edit - is a mod able to edit the link?

                        1. 1

                          Thank you. Yes, this is my fault – I pasted the same URL twice. I cannot edit the URL in the submission now, so I would appreciate it if the mods could correct it.


                          1. 1

                            @pushcx, @irene, @alynpost – can someone fix the link? Cheers

                            1. 2

                              I’ve removed the accidental duplicate paste of the story URL. Thank you for bringing it to our attention, and thank you @NotQuiteAnon for reporting the issue.

                        1. 11

                          I think I’ve read this post before…

                          1. 1

                            Yep, we all did. And the ending is so good:

                            “… If you’re reading this and are interested in $HYPED_TECHNOLOGY like we are, we are hiring! Be sure to check out our jobs page, where there will be zero positions related to $FLASHY_LANGUAGE …”

                          1. 1

                            The link redirects to the CS dept home page. The article is here in the ACM digital library, which currently doesn’t have any subscriber restrictions or paywall.

                            I find it interesting that a paper from 2003 listing canonical reading in computer science has nothing later than 1981. Were there no subsequent paradigm shifts or big advances in the field? I would’ve thought that human-computer interaction, parallel algorithms, and cybersecurity (if that’s within scope) all moved on dramatically in the intervening time.

                            1. 2

                              Unfortunately, the author of this paper passed away in 2019.

                              I would hope this paper continues to influence computer science programs in various educational venues.

                              I think, to a degree, the intent of the author was to separate the ‘application’ of CS from the ‘foundations’ of CS.

                              Which is why there is a focus on the fundamental ways to carry out a ‘compute’.

                              Have there been advances in the foundations of CS since ’81? I am sure.

                              But I could not come up with something that changed how a ‘compute’ is fundamentally carried out (with the exception of quantum computing).

                              I think multiple new subfields have developed, though, with the most foundational one being program verification (which extends to verifiable correctness, security, and computability limits). I think this subfield will have a tremendous, long-lasting impact over the next 100 years, as we transition how we teach CS from ‘guessing’ how to build ‘what will work’ to proving that it will work.

                              1. 2

                                The link redirects to the CS dept home page. The article is here in the ACM digital library, which currently doesn’t have any subscriber restrictions or paywall.

                                Thank you! I made an oopsie again.

                                I find it interesting that a paper from 2003 listing canonical reading in computer science has nothing later than 1981.

                                They actually address this in the paper – mystery solved!

                                Another aspect of the readings shown is their age: for the most part, we have hewed to a rule that Canon papers should be at least 20 years old to be included in the course. The purpose of this rather arbitrary cut-off point is to provide a rough means of ensuring the lasting interest of the work in question.

                                1. 1

                                  Oh, thanks, I didn’t spot that paragraph when I read the article. It’d be interesting to consider what writing from 1981-2000 would be included in a modern update using the same rules, and what the likely candidates from this millennium are, too.

                              1. 4

                                Great idea. I do not quite understand, though, where the meetings take place. I saw an IRC channel reference: https://github.com/zx9w/read-together/issues/1

                                but it is not clear whether that IRC channel is the discussion platform, what the topic is, how folks in different time zones participate, and so on.

                                1. 3

                                  Thank you for engaging. The idea is to use the issues to synchronize on what we want to accomplish with the meeting and then it’s a free-for-all tool-wise.

                                  The IRC channel was started from the NixOS discourse, where pie_ (who made issue #2) suggested reading Build Systems à la Carte. Right now that is the place to talk to those of us who have started reading that paper, but I figured the issues would be sufficient as a communication platform for forming new such groups.

                                1. 4

                                  Due to the increased focus on automation of host management, and the prominent role that shell scripting plays in it, we are seeing more and more innovation in that area (and criticism of current shell scripting as inadequate is growing).

                                  There seem to be at least four approaches to the above:

                                  • Assume that bash is a ‘low-level’ interpreter that runs ‘everywhere’, and build an existing programming language on top of that interpreter. Example: https://github.com/chr15m/flk (perhaps extending this to other shell scripting languages to hide the differences from the language user).

                                  • Use an existing programming language’s syntax constructs (maybe even leveraging a parser combinator for that language) to wrap shell script invocations. I think Janet’s approach, noted in the article, is of this sort, as are all the other examples listed by seschwar: https://lobste.rs/s/p6insb/dsl_for_shell_scripting#c_2yudqt

                                  • Use some form of declarative language that deals with a versioned configuration model, a declarative execution model, and some form of conditional processing. Examples: Dhall (https://dhall-lang.org/) and, to a much lesser degree, Ansible.

                                  • Rewrite all of the underlying commands to be ‘chaining compatible’ through uniformly modeled result returns (e.g. PowerShell).

                                  Not sure if there are more categories, but I think bash is not going to be as prominent as it has been for the last 20-30 years.

                                  It would be interesting to see whether any of the major Linux distros and BSDs adopt a uniform approach to minimizing the prevalence of bash, sh, etc. for host management automation. If they do, perhaps we will see a replacement emerge sooner rather than later.
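                                  The second approach above can be sketched in a few lines of Python (a hypothetical helper, not one of the cited projects): the host language supplies variables, control flow, and error handling, and only the leaf commands are shell invocations.

                                  ```python
                                  import subprocess

                                  def sh(*args):
                                      """Run one external command; return stripped stdout,
                                      raising on a non-zero exit code."""
                                      result = subprocess.run(list(args), capture_output=True,
                                                              text=True, check=True)
                                      return result.stdout.strip()

                                  # Control flow, quoting, and error handling live in the host
                                  # language rather than in bash conditionals and traps.
                                  kernel = sh("uname", "-s")
                                  message = sh("echo", f"running on {kernel}")
                                  ```

                                  Because arguments are passed as a list rather than interpolated into a shell string, the usual quoting/injection pitfalls of raw shell scripts disappear, which is much of the appeal of this category.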

                                  1. 1

                                    “Golden Gate” is the name of their 2nd gen stack.

                                    When I was reading the name, it immediately reminded me of something from a different domain that I am very familiar with:


                                    1. 1

                                      This brings back (bad) memories

                                    1. 2

                                      I am happy with Javalin [1] – modern Java (or Kotlin), relying on Jetty.

                                      It is not easy to define the ‘criteria’ by which to choose something, so I will share what we had:

                                      • Overall, the goal was to reduce the number of different programming languages to a minimum between frontend and backend.

                                      • Leverage async io.

                                      • Minimize the number of ‘tools’ to deploy in production (e.g. web servers, cache proxies, etc.) by using a framework that relies on a competent web server.

                                      • Additionally, our whole backend should be releasable and deployable as a single file (e.g. a jar), running different services on different ports. Optionally, our installers could run the same jar on separate JVMs/hosts by enabling/disabling features in a single config file.
                                        Basically one jar, one config file, one database server – that is all one needs to run a fully functional backend.

                                      More detailed criteria that were of interest:

                                      • must be able to selectively support async IO
                                      • must be able to leverage a Java 8+ JVM
                                      • I do not like annotations, so no Spring Boot or the like
                                      • must rely on a proven web server, so that I can, if I want to, use it as a web server facing the public internet (so it must support the full HTTP/2 spec)
                                      • do not want a ‘big’ framework with nuts and bolts; do not want ‘database wrappers’/ORMs, etc.
                                      • want light yet flexible support for a templating language of my choice (e.g. Pebble [2])
                                      • needs to be Java or Kotlin, so that our ‘internal SDK’ APIs can be shared with Android apps (the SDK is mostly a collection of classes describing REST APIs, plus a bunch of utility functions)

                                      In my case, corporate compliance with enterprise standards and similar factors were not in play, but they may be for you.

                                      [1] https://javalin.io/ [2] https://pebbletemplates.io/

                                      1. 4

                                        For folks like myself who did not know what ‘Dolt’ is:

                                        “.. Dolt is a relational database, i.e. it has tables, and you can execute SQL queries against those tables. It also has version control primitives that operate at the level of table cell. Thus Dolt is a database that supports fine grained value-wise version control, where all changes to data and schema are stored in commit log. “ [1]

                                        [1] https://github.com/liquidata-inc/dolt

                                        1. 7

                                          Dolt is git for data. It has the same command line as Git, but it versions tables instead of files. And it adds a couple other commands for the database part, like sql for running a SQL shell against your tables, or sql-server for starting up a MySQL compatible server to connect to.

                                          1. 2


                                            n. A stupid person; a dunce.
                                            To waste time foolishly; behave foolishly.
                                            n. A dull, stupid fellow; a blockhead; a numskull.

                                            Unfortunate name.

                                            1. 4

                                              It’s an homage to how Linus named git:


                                          1. 1

                                            Thank you for sharing. I overall, agree with your summary:

                                            “ Complexity has to live somewhere. If you embrace it, give it the place it deserves, design your system and organisation knowing it exists, and focus on adapting, it might just become a strength. “

                                            However, I am not sure I agree with how you define accidental complexity.

                                            I am wondering if I could test your assertions by applying them in reverse.

                                            WRT: “Accidental complexity is just essential complexity that shows its age. It cannot be avoided, and it keeps changing.”

                                            If essential complexity is young, does it mean we will not have ‘accidental’ complexity?

                                            I am thinking we could still have accidental complexity (for example, from an information change that arrived late). And if I am right, then the definition above would be wrong.

                                            I also somewhat disagree that complexity always needs to be accepted, absorbed, and managed.

                                            WRT: “It’s something I keep seeing debated at all levels: just how much commenting should go on in functions and methods? What’s the ideal amount of abstraction? When does a framework start having ‘too much magic’? When are there too many languages in an organisation?

                                            We try to get rid of the complexity, control it, and seek simplicity. I think framing things that way is misguided. Complexity has to live somewhere.”

                                            I would argue that intentionally (or negligently) creating chaos will result in complexity, but that complexity should not be accepted. Instead, the reasons for the intentional or negligent chaos should be addressed, and the chaos removed (or reduced).

                                            In other words, why not create defenses against intentional or negligent chaos?

                                            There is a definition of complexity that I have heard (but cannot find a link now to that YouTube lecture; it was in cosmology):


                                            complexity is a change in entropy between 2 ordered states of the system in time

                                            Then I would also leverage another view, applicable especially to human-created systems:

                                            “ We introduce this measure and argue that increasing information is equivalent to increasing complexity, “ [2]

                                            Combining both (and I am doing this freely at the moment, without the mathematics) would result in the following definition:

                                            An increase of information results in a change (increase) of entropy, and that results in complexity.

                                            Then I could re-phrase your blog post such that:

                                            If complexity is introduced by absorbing more information into the system, then that complexity must be managed, and the best way to manage it is to embrace it and to find well-defined places where the complexity is handled (rather than sprinkling it all over the system).

                                            The above re-phrasing would avoid what are, in my view, the less ‘obvious’ assertions in the post:

                                            • that any complexity deserves to be managed (again, in my view, only the complexity introduced by absorbing more information does)
                                            • that accidental complexity is ‘aged essential complexity’

                                            Plus, the above rephrasing is reversible: if I remove information from the system, I should be able to reduce complexity.

                                            [2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4179993/
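                                            A toy numerical illustration of that reversibility (using Shannon entropy as a stand-in for ‘information’ – my assumption, not a measure taken from either source): adding distinguishable, equally likely states raises the entropy, and removing them lowers it back.

                                            ```python
                                            import math

                                            def shannon_entropy(probabilities):
                                                """H = -sum(p * log2(p)) in bits, skipping
                                                zero-probability states."""
                                                return -sum(p * math.log2(p)
                                                            for p in probabilities if p > 0)

                                            # Two equally likely states: 1 bit of entropy.
                                            h_small = shannon_entropy([0.5, 0.5])
                                            # Absorb more information (four distinguishable
                                            # states): entropy grows to 2 bits.
                                            h_large = shannon_entropy([0.25, 0.25, 0.25, 0.25])
                                            # Removing states reverses the increase, mirroring
                                            # the claim that removing information from a system
                                            # should reduce its complexity.
                                            ```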

                                            1. 1

                                              Is Ogre complementary, orthogonal, or competing with something like Godot?

                                              1. 2

                                                Not sure if project leaders read this thread.

                                                If it helps any, I would like to suggest having template projects that represent non-trivial mono-repos for domain-specific examples, such as:

                                                • mobileapp+webapp+desktopapp+distributed-backend
                                                • multi-controller-embedded-solution-with-cross-compilation (in several languages)
                                                • multi-os-device-driver project
                                                • a godot game project for multiple devices
                                                • a migration model from multi-repo to a mono-repo

                                                I think setting up project repositories and managing the lifecycle of a project (with multiple contributors, pre-releases, releases, dependencies on specific OSS projects, branches, etc.) is becoming a topic of its own.

                                                And I think these kinds of templates, demonstrating the power of a source management platform as applied to specific needs, would be of benefit.

                                                1. 1

                                                  Perhaps if the author summarized his position as:

                                                  • if you do not need a mobile app
                                                  • if you do not need a complex interaction (eg AV editor)

                                                  then use server-side rendering – I think this would be a more agreeable argument (although still with lots of caveats).

                                                  Given that a ‘Thiel truth’ is defined as “What important truth do very few people agree with you on?”,

                                                  I am wondering if the assertion below sort of ‘shifts the goal posts’ on which ‘truth’ we are examining.

                                                  “People don’t want to install your app. Many important, profitable applications aren’t used enough to make a native mobile app worthwhile. Most online shops, banking, festival ticketing, government forms, etc. Therefore you will not have to support both server-side rendering and an API for your native apps.”

                                                  Meaning that the author is switching the assertion to be examined from:

                                                  ‘server side rendering is the way to go’ to another assertion:

                                                  ‘Many important, profitable, applications aren’t used enough to make a native mobile app worthwhile.’