1.  

    This is amazing news. I write business simulations for a company that uses them for training, and the original article about Maxis Business Simulations was and is a fascinating read. I look forward to seeing a more in-depth write-up of SimRefinery now that it has been recovered and archived for all to see.

    1. 9

      GUIs are superior sometimes. One aspect: better feature discovery and less effort spent learning how changing one thing affects the others.

      1. 9

        UI discoverability is orthogonal to {G,C}UI IME. I’ve seen great documentation come out of --help and man pages, and I’ve seen baroque and opaque GUIs.

        1. 16

          UI discoverability is orthogonal to {G,C}UI IME.

          I don’t think this is true. The default for GUI is a bunch of buttons you can click to see what happens. The default for CLI is a blank screen. Using a manual (--help) as a counterexample isn’t arguing for CLI, but arguing for good manuals.

          When you are in a French bakery, pointing at a croissant is using a GUI, and trying to say things at random hoping to make sense is using a CLI. Clearly one is superior to the other, both in immediate usability and in discoverability.

          1. 4

            The default for GUI is a bunch of buttons you can click to see what happens.

            Not necessarily for three reasons:

            1. Menus. GUIs tend to have menus, and menus are not discoverable unless you and the developers spontaneously develop the same ontology and, therefore, classification scheme for the program’s functionality. Conventions help, but conventions apply to CLI programs, too.
            2. Buttons need to be labeled somehow, and iconic labels tend to verge on the pictographic. Even textual labels can be hard to decipher, but the trend is towards images, which are smaller and (supposedly) don’t need to be translated for different audiences.
            3. The rise of mobile UIs as the new default for GUIs means fewer buttons and more gestures, and gestures aren’t signposted. Can I pinch something here? Do I need to swipe? Is this operable now? Do I need to long-touch it or caress it in some other fashion? It might be good for you, but it’s remarkably inscrutable as opposed to discoverable.
            1. 12

              and menus are not discoverable

              I disagree here. Going through the menus to see what options you have is a pretty common thing to do.

              The rise of mobile UIs as the new default for GUIs

              But mobile GUIs are not discoverable in the same way CLIs are not discoverable. There are no buttons, and you are supposed to know what to do (the right incantation or the right gesture). But even then, swipe and pinch are more intuitive than shlib -m -t 500

              1. 2

                I disagree here. Going through the menus to see what options you have is a pretty common thing to do.

                No user I have interacted with ever looks through the menus; I can tell you for certain that nine out of every ten people I work with are unaware the menu bar exists for the majority of the executables they use, and if they are aware of its existence it’s only to click something they have been shown how to use. They will rarely venture far from that well-trodden path, and it’s frustrating, as a little bit of curiosity would have answered their question before they asked it.

              2. 2

                #3 is the big one for me. I have no idea how to use most mobile apps. Whereas with a well-designed CLI I can mash TAB and get completion at its repl or through its shell completion support, and work out what the thing can do.

                1. 4

                  Leave out “well-designed” and you’re back in pinch-bash-swipe land on your CLI as well.

              3. 2

                A GUI is immediately usable in that you can see buttons and other widgets, and with the mouse you can point and click. However, for GUIs as they are mostly implemented in today’s world, the advantage stops there: there is no next step in which you can improve your mastery, except possibly your mastery of mouse precision and keyboard shortcuts. CLIs, as they are today, have a steep curve to getting started, but because of certain properties of the CLI, as the article mentions, once you have internalized those key principles the sky is the limit.

                GUIs are excellent for basic usage but don’t allow any kind of mastery. CLIs are tough to get started with but allow all kinds of mastery. That’s a key distinction. However, I don’t think those properties are inherent to “GUIs” and “CLIs”; they’re just inherent in the way each has been implemented in our current software landscape.

          1. 2

            I think the core issue that academic code has is that those writing it aren’t programmers first and are often overworked grad students learning as they go.

            1. 9

              Your experience differs from mine. We implemented email-based login as our primary authentication because a large proportion of our helpdesk tickets were from people having trouble logging in after forgetting their password; even with a password-reset function available to them, they wouldn’t use it unless prompted (I guess because they couldn’t possibly have forgotten their password and the system must therefore be broken).

              I digress: we use Mailgun to ensure email deliverability and have so far not had issues in that regard; on a good day the 95th percentile of emails is delivered within 30 seconds of the request being made.

              Our product is provided to clients as a B2B tool, so user onboarding is different from usual in that the client administrator adds users and the system then sends those users an onboarding email.

              Magic-link authentication won’t be for everyone and may not work for certain user groups, but I like it.
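
              For anyone curious about the mechanics, the core of a magic-link flow is small. Here’s a rough TypeScript sketch of the idea (the in-memory store, the fifteen-minute lifetime and the example.com URL are placeholders for illustration, not how our system is actually built):

              import { randomBytes, createHash } from "node:crypto";
              // Demo-only store; a real implementation would persist hashed tokens with an expiry in a database.
              const pendingLogins = new Map<string, { email: string; expires: number }>();
              function requestMagicLink(email: string): string {
                const token = randomBytes(32).toString("hex");
                // Store only a hash of the token so a leaked store can't be replayed.
                const hash = createHash("sha256").update(token).digest("hex");
                pendingLogins.set(hash, { email, expires: Date.now() + 15 * 60 * 1000 });
                // Emailing the link (e.g. via Mailgun) is elided here.
                return `https://example.com/login?token=${token}`;
              }
              function redeemMagicLink(token: string): string | null {
                const hash = createHash("sha256").update(token).digest("hex");
                const entry = pendingLogins.get(hash);
                pendingLogins.delete(hash); // single use
                if (!entry || entry.expires < Date.now()) return null;
                return entry.email; // the caller starts a session for this address
              }

              The important properties are just that the token is single-use and short-lived; everything else is plumbing.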

              1. 4

                A lot of these are nifty implementations (I prefer exa to ls, for example). However, unless they are one-to-one compatible, they shouldn’t be presented as replacements without first considering the caveats.

                1. 1

                  Having worked with it for the past eight years, I know Laravel inside out, so I would largely work with that, as I can flow with the framework and utilise its tooling to accelerate an MVP’s time to market.

                  I have used Express.js in Node and the standard library in Go (because it’s good enough not to need a framework on top), but I am not as quick to develop something usable with them as I am in Laravel.

                  If you were to choose a backend web framework today, what would you choose?

                  For me personally it would be Laravel, unless I wanted to learn something new. Laravel is built on a solid foundation, is constantly updated, and has a large community from which to source help.

                  However, I am a PHP developer, and Laravel operates on a level I fully understand. Therefore the actual answer to your question is: “A framework that is battle-tested, under regular, continual development, with an active community, and written in a programming language you understand.” A framework is a toolkit; it should be a catalyst for developing good code quickly. Boring is good.

                  1. 2

                    I could spend hours browsing through content like this, very nice!

                    1. 8

                      Then you would enjoy the Jargon File.

                    1. 1

                      Attempting to learn about WebRTC and searching for a way to create a bespoke interface for team members to take part in remote events.

                      1. 2

                        I shared a project idea I had on the fediverse and almost 100 people showed interest, so, spurred on by that, I have begun building it.

                        1. 2

                          What kind of idea? I’ve been working on some aggregators.

                          https://mastodonia.club

                          https://pixelfed.club

                          Let me know if you want to talk

                          1. 1

                            A digital floppy disk box, a floppybox. Originally I had the idea as a parody of Dropbox that provided cloud storage in 1.44 MB chunks. I bought the domain on a whim over a year ago and the idea has collected dust ever since.

                            I mooted the idea of dusting it off and making it into some kind of art project, with each “disk image” being statically served via a custom subdomain (which could be CNAME-linked to a user’s domain if they so wish), to see what creative things people can fit into 1.44 MB, a bit like the demoscene but with digital art/web-based projects.

                            People will be able to upload either archives (zip, tar, etc.) containing no more than 1.44 MB of uncompressed content, or a disk image with the same size restriction. Once uploaded, before publishing (whether they choose to do so publicly or not), they can toggle the image between being statically served content and being displayed in something similar to js-dos or [pce-js](https://jamesfriend.com.au/pce-js/ibmpc-games/).
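
                            The size cap itself should be simple to enforce. Here’s a minimal TypeScript sketch for the raw disk-image case, assuming uploads land on disk first (the byte budget is the classic 1,474,560-byte capacity of a “1.44 MB” 3.5" HD floppy; archives would additionally need their uncompressed contents summed before acceptance):

                            import { statSync } from "node:fs";
                            // 80 tracks x 2 sides x 18 sectors x 512 bytes = 1,474,560 bytes.
                            const FLOPPY_BYTES = 1_474_560;
                            // Rough check for an uploaded raw disk image.
                            function fitsOnFloppy(path: string): boolean {
                              return statSync(path).size <= FLOPPY_BYTES;
                            }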

                            Those aggregators look nifty. I operate a Pixelfed server, federated.photos, but I need to do a bit of maintenance on it as I think it’s a fair few versions behind current stable.

                            They actually remind me of another project I have on the back burner, aggregating polls on the fediverse and displaying results as a grid of pie charts.

                        1. 2

                          I have spent a lot of time watching The Coding Train, a YouTube channel created by Associate Arts Professor Daniel Shiffman. There he teaches a lot about generative art using either Processing or p5.js.

                          1. 2

                            Oh nice, thanks for sharing, I’ll definitely check him out.

                          1. 1

                            I have never managed more than a handful of servers and usually stuck with a simple naming scheme of (db|app|cache)-[0-9]+

                            1. 1

                              I have a soft spot for WordPress; that being said, I no longer use it for my personal websites, opting instead for a git repository with hooks on the dev, staging and master branches that spin up deployment to one of three environments. I am able to edit and deploy through either an IDE or a web interface, although neither currently has the comforts and easy-to-use tooling that something like WordPress affords.

                              1. 3

                                I very nearly missed the satire tag on this.

                                1. 3

                                  Would be nice if the README.md explained how to get it running. I’m a little familiar with Makefiles, but I’m having some trouble here.

                                  I’d love to run this in a tmux while wfh this week :)

                                  1. 4

                                    You will need basic development tools like make and a C++ compiler, git to download the sources, and the ncurses development libraries. Here’s how to do it on Debian-like systems:

                                    sudo apt install git libncurses-dev build-essential
                                    git clone https://github.com/cbabuska/curses_city
                                    cd curses_city
                                    make
                                    ./Curses_City
                                    

                                    (for Termux on your phone, replace libncurses-dev with ncurses and it works perfectly!)

                                    1. 3

                                      Thank you for sharing this.

                                      1. 2

                                        Thank you for the dependencies. Got it running.

                                    1. 7

                                      While researching games that utilise an ASCII UI, I found this interesting example written in C++. I thought it interesting enough to share here.

                                      1. 10

                                        Thank you, this is great. Any chance you’d publish the whole list when you’ve finished your research?

                                      1. 1

                                        I always thought the short 1U servers were for shallower racks, like the network boxes you often see on walls.

                                        1. 3

                                          The normal Dell rails for their short 1U servers are still full length (although perhaps you can get special shorter rails if you ask), so I don’t think they’d fit in short racks. Also, I believe that short telco racks are often only front-mount (with no back set of posts), which doesn’t work for server rails. My best guess about why Dell appears to like shorter 1U servers is ‘less metal for the case’, which presumably translates to cost savings.

                                          (I’m the author of the linked-to entry.)

                                        1. 2

                                          This is really very nice to see; ZZT was one of the first games/game engines I played with in the mid-nineties. Worlds of ZZT on twitter and The Museum of ZZT are two of my favourite online locations when I feel like scrolling through and exploring a hidden world or two.

                                          1. 2

                                            Swing by sometime; the Museum’s associated Discord (yeah, I know) is currently exploding with all kinds of patches being bandied around, and there have been some great releases in the past couple of years.

                                            Here’s an article reviewing the 2010s of ZZT.

                                          1. 6

                                            Unable to read due to being on Medium :(

                                              1. 1

                                                Did you try from incognito?

                                              1. 2

                                                The fact that my node_modules folder is > 100MB in size and my rendered assets are less than 80KB per page seems to render this rant moot.

                                                1. 7

                                                  Your particular circumstance is not representative of the wider community and ecosystem.

                                                  1. 2

                                                    My point being that with adequate tree shaking the framework used shouldn’t matter, as the “compiled” assets should only be big enough to contain the functionality that is actually used. The problem I often see is websites loading megabytes of assets only to use a tiny percentage of them overall, which, to be fair to the author of this piece, seems to largely be the point they are getting at.
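
                                                    As a simplified illustration, whether a bundler can actually tree-shake often comes down to how a dependency is packaged and imported; the classic lodash example, sketched in TypeScript:

                                                    // Hard to tree-shake: lodash's main entry is one big CommonJS module, so bundlers typically include the whole library for a single helper.
                                                    import _ from "lodash";
                                                    const search = _.debounce((q: string) => console.log(q), 200);
                                                    // Tree-shakeable: lodash-es ships each function as an ES-module export, so the bundler can drop everything except debounce.
                                                    import { debounce } from "lodash-es";
                                                    const searchEs = debounce((q: string) => console.log(q), 200);

                                                    Both give the developer the same ergonomics; only the second keeps the shipped bundle proportional to what is actually used.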

                                                    On reflection, calling it a rant feels dismissive; that was not my intent. It’s a well-written article on an important subject that more of us should pay attention to.

                                                    1. 6

                                                      I think this depends a little on defaults and on how much effort it takes to achieve this. If 90% of the people are “using it wrong”, as you say, I’d still say it’s the framework’s fault for having bad defaults or bad docs.

                                                      1. 5

                                                        The point is that dev fun and user performance are not opposed. Let’s improve tree shaking, so that devs can continue to use fun frameworks and users can get fast web performance.

                                                        1. 2

                                                          Although in this case, people who use/like Medium may make the point that it’s also optimized for author fun in writing and publishing, to the detriment of the readers ;)

                                                  2. 5

                                                    I don’t particularly enjoy having to pull down 100 MB of files in order to generate 80 kB, especially when that infrastructure results in lost time, generated waste, and damage to the climate.

                                                    Additionally, it’s remarkably hard to secure such a situation.

                                                    1. 1

                                                      I feel totally uncomfortable with it as well; it’s incredibly wasteful, but people don’t give it a second thought, brushing it off with ‘but hard drive sizes these days’. I just want to add a testing framework (though the same applies to many popular libraries/frameworks) to my project, and I end up pulling in 300 dependencies weighing in at hundreds of megabytes (how???). There’s something very wrong with this ecosystem. I guess it is not so far removed from our attitudes towards disposable, single-use plastics and other forms of waste.

                                                    2. 3

                                                      If that 80 kB does computationally slow things, then the end result is still going to be, well, slow.

                                                      1. 2

                                                        From GitLab:

                                                        $ du -hs node_modules
                                                        689M
                                                        $ du -hs public/assets
                                                        109M
                                                        

                                                        Loading https://gitlab.com/gitlab-org/gitlab results in 1.51 MB of assets being downloaded. Without caches that becomes 4.44 MB.

                                                        Mind you, this includes images, but the point is the same: the output size will vary greatly per project, but there is definitely a trend of websites becoming heavier every year.

                                                      1. 15

                                                        I kinda dream of a blogging and/or bookmarking engine that would scrape the target page at the time of linking and archive it locally (ideally in WARC format, though a “readability-like” text extract would be a good first attempt too); then, occasionally, re-crawl the links, warning me about 404s and other potential changes; I could then click the suspicious ones to check them manually, and for those I tick off as confirmed bitrot, the engine would then serve the local archived copies to the readers. Even more ideally, all of this stuff could then be stored in IPFS. And yes, I know of pinboard.in, and am using it, but I would still prefer a self-hosted (static) blog-like solution, ideally with IPFS support. It’s on my super long TODO list, but I have too many projects already started, so I don’t think I’ll get to it in this life; additionally, it has quite a few nontrivial pieces to it, I think.

                                                        edit: Even more ideally, the IPFS copies of the websites could then be easily cloned by the blog readers, forming a self-interest-driven worldwide replicated Web Archive network.
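
                                                        The “warning me about 404s and other potential changes” piece at least feels tractable. A rough TypeScript sketch, assuming each bookmark stores a content hash captured at link time (all the names here are made up for illustration):

                                                        import { createHash } from "node:crypto";
                                                        interface Bookmark {
                                                          url: string;
                                                          originalHash: string; // sha256 of the body captured at link time
                                                        }
                                                        // Re-crawl one bookmark and classify it for manual review.
                                                        async function checkBookmark(b: Bookmark): Promise<"ok" | "missing" | "changed"> {
                                                          const res = await fetch(b.url);
                                                          if (res.status === 404 || res.status === 410) return "missing";
                                                          const body = await res.text();
                                                          const hash = createHash("sha256").update(body).digest("hex");
                                                          return hash === b.originalHash ? "ok" : "changed";
                                                        }

                                                        A plain hash will flag every dynamic page as “changed”, so in practice it would need something fuzzier (the readability-style extract again), but the shape of the loop is the same.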

                                                        1. 6

                                                          and for those I tick off as confirmed bitrot, the engine would then serve the local archived copies to the readers

                                                          I think this would violate copyright law in Europe. Not sure about the US, though. archive.org somehow does not seem to have problems.

                                                          In Germany, archiving written material (including the web) is the job of the National Library. But to my knowledge they archive all books but only a small portion of web pages. And even they say: “Due to copyright reasons access to the collected websites is usually only possible from our reading halls in Leipzig and Frankfurt am Main”.

                                                          1. 2

                                                            Uhhhhhh. Sadly, a good point. One would have to talk to r/datahoarders or The Archive Team and ask what they think about it and whether they have some ideas on how to do this legally. Still, I’d certainly want to have those archived copies available to myself, for sure. That cannot be illegal; I can already do “File / Save as” in my browser.

                                                          2. 1

                                                            Many years ago, when I attended university (in the early 2000s), quoting from internet sources wasn’t accepted unless the quoted source was included as an appendix item along with the essay.

                                                            Since then, if I find a digital source for referencing, I create a personal archive of it, including all relevant metadata for referencing purposes. This has helped combat the effect of “digital decay” on my work where the Internet Archive may not have managed to grab a snapshot.

                                                            1. 1

                                                              warning me about 404s and other potential changes; I could then click the suspicious ones to check them manually

                                                              This was one of the few uses I had for deep learning. Sometimes I’d get 404s on part of a page but not all of it. Sites might also do weird stuff, like serve GIFs, in place of the 404. Local copies with smart 404 detection would be a big help to counter the death of the Old Web.