1. 2

    Someone should make an equivalent that allows communicating with axolotls through your screen, Cortázar style.

    1. 2

      We are working with the community to make the game and platform (very close to a Dreamcast) run in an emulator. The touch-screen functionality will probably be the most challenging part.

      1. 1

        That would be pretty awesome and a great way to preserve that history. You might want to consider contacting The Living Computer Museum in Seattle to see if they’d be willing to put up an exhibit around a simulated version.

        1. 2

          Indeed. Actually, we will try to have it in our own exhibition. But obviously, our preservation work can be reused by other institutions.

        2. 1

          Is it on a Naomi board?

          1. 1

            It is closer to a regular Dreamcast I would say.

            1. 1

              Seriously though, I understand its importance for computer history, but I’m not sure I understand who would have bought it at the time. Is it a very advanced platform for its day made into a single-purpose computer? What’s unique about it other than the early use of a touchscreen? Does anyone know what the list price was?

              1. 1

                I searched for “Sega Fish Life” on DuckDuckGo and found Fish Life at Sega Retro, which says ¥498,000.

                1. 1

                  Software was sold at ¥19,800. The platform was, as indicated on SEGA’s website, targeting public places: “Perfect for use in the following locations:

                  • Restaurant Lounges
                  • Aquariums
                  • Hotel and Bank Lobbies, etc.
                  • Libraries
                  • Hospital Waiting Rooms
                  • Halls and Event locations”

                  1. 1

                    ¥498,000 sounds like an obscene price that could hardly justify the purchase even for businesses. The software sounds affordable, though, and if it was meant to be a platform for interactive displays, I can definitely see the appeal.

      1. 5

        Nice idea.

        Do the papers have to be new, or can old papers be discussed?

        1. 1

          I think each paper has to have a link in the Paperkast. Old, new doesn’t matter.

          1. 1

            This is also a question I had, along with what expectations there are of commenters. I’m a “casual” reader of the primary literature, in that I’m not a researcher myself, so I have different expectations and social norms than someone who’s in the community. Do I engage with the conversations at Paperkast, or is it for academics?

            1. 1

              This is for everybody. However, if the community consists of academics and grad students, there will be technical discussions.

          1. 10

            Kubernetes has the ability to run jobs on a cron schedule, and you can launch one-off, run-to-completion pods as tasks.
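
            A minimal sketch of both shapes using the official Python client (the names, image, and schedule are made up for illustration, and it assumes a recent cluster and client where CronJob has reached batch/v1):

            ```python
            from kubernetes import client, config

            config.load_kube_config()  # or load_incluster_config() when running inside the cluster
            batch = client.BatchV1Api()

            # Pod template shared by both kinds of workload.
            pod = client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[client.V1Container(name="task", image="example/task:latest")],
                )
            )

            # A one-off, run-to-completion task.
            batch.create_namespaced_job(
                namespace="default",
                body=client.V1Job(
                    metadata=client.V1ObjectMeta(name="one-off-task"),
                    spec=client.V1JobSpec(template=pod),
                ),
            )

            # The same pod on a cron schedule.
            batch.create_namespaced_cron_job(
                namespace="default",
                body=client.V1CronJob(
                    metadata=client.V1ObjectMeta(name="nightly-task"),
                    spec=client.V1CronJobSpec(
                        schedule="0 2 * * *",
                        job_template=client.V1JobTemplateSpec(spec=client.V1JobSpec(template=pod)),
                    ),
                ),
            )
            ```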

            1. 2

              This is what we do too.

            1. 2

              I like the idea of IRC wedding. That’s how I would have wanted to conduct my own wedding if I could possibly convince anyone to marry me. ;)

              1. 4

                Only these days it would be on Slack, and when all the photos are posted, the “I do” messages would scroll out of the history.

                1. 3

                  Slack can be problematic for weddings, but seems to be a perfect medium for divorce.

                1. 3

                  Well, so one of my Berlin Rust Hack & Learn regulars is porting rustc to GNU Hurd. I can switch soon, year of the desktop is 2109.

                  1. 2

                    The fact that I can’t tell if this is a joke or a typo makes it a better joke.

                    1. 2

                      Both. I made the typo and decided it’s too good to be fixed.

                  2. 3

                    If I remember correctly, Haiku also has a microkernel.

                    1. 4

                      I thought that BeOS was a microkernel, based on what so many said. waddlesplash of Haiku countered me, saying it wasn’t. That discussion is here.

                      1. 1

                        Haiku has a hybrid kernel, like Mac OS X or Windows NT.

                      2. 2

                        QNX, Minix 3, or Genode get you more mileage. At least two have desktop environments, too. I’m not sure about Minix 3 but did find this picture.

                        1. 1

                          Don’t MacOS and iOS both use variants of the Mach microkernel?

                          1. 4

                            They’re what’s called hybrid kernels. They have too much running in kernel space to really qualify as microkernels. Using Mach was probably a mistake. It’s the microkernel whose inefficient design created the misconceptions we’ve been countering for a long time. Plus, if you have that much in the kernel, you might as well just use a well-organized, monolithic design.

                            That’s what I thought for a long time. CompSci work on both hardware and software has created many new methods that might have implications for hybrid designs. Micro vs. something in between vs. monolithic is worth rethinking hard these days.

                            1. 5

                              That narrative makes it sound like they took Mach and added BSD back in until it was ready, when in fact Mach started as an object-oriented kernel with an in-kernel BSD personality, and that was the kernel NeXT took, along with CMU developer and Mach lead Avie Tevanian.

                              That was Mach 2.5. Mach 3.0 was the first microkernel version of Mach, and that’s the one GNU Mach is based on. Some code changes were backported to the XNU and OSFMK kernels from Mach 3.0, but they were always designed and implemented as full BSD kernels with object-oriented IPC, virtual memory management and multithreading.

                              1. 2

                                Yeah, I didn’t study the development of Mach. Thanks for filling in those details. That they tried to trim a bigger OS into a microkernel makes its failure even more likely.

                                1. 1

                                  I don’t follow the reasoning; what failed? They didn’t fail to make a microkernel BSD, as Mach 3 is that. They didn’t fail to get adoption, and indeed it’s easier when you’re compatible with an existing system.

                                  1. 1

                                    They failed in many ways:

                                    1. Little adoption. XNU is not Mach but incorporates it, whereas the Windows, Linux, and BSD kernels are used directly by large install bases.

                                    2. So slow as a microkernel that people wanting microkernels went with other designs.

                                    3. Less reliable than some alternatives under fault conditions.

                                    4. Less maintainable, in areas such as easy swapping of modules, than L4- and KeyKOS-based systems.

                                    5. Due to its complexity, every attempt to secure it failed. Reading about Trusted Mach, DTMach, DTOS, etc. is when I first saw it. All they did was talk trash about the problems they had analyzing and verifying it vs. other systems of the time like STOP, GEMSOS, and LOCK.

                                    So, it was objectively worse than competing designs then and later in many attributes. It was too complex, too slow, and not as reliable as competitors like QNX. It couldn’t be secured to high assurance either ever or for a long time. So, it was a failure compared to them. It was a success if the goal was to generate research papers/funding, give people ideas, and make code someone might randomly mix with other code to create a commercial product.

                                    It all depends on your viewpoint of, or requirements for, the OS you’re selecting. It failed mine. Microkernels + isolated applications + user-mode Linux are currently the best fit for my combined requirements. OKL4, INTEGRITY-178B, LynxSecure, and GenodeOS are examples implementing that model.

                            2. 3

                              Yes, but with most of a BSD kernel stuck on and running in the same address space. https://en.wikipedia.org/wiki/XNU

                          1. 2

                            I’m on a holiday with lots of driving, so I’ve stocked up on audiobooks. Currently reading Ho Chi Minh’s Down With Colonialism!; up next is Adam Fisher’s Valley of Genius.

                            Print: I just ordered Martin Empson’s ‘Kill All the Gentlemen’: Class Struggle and Change in the English Countryside from Bookmarks, the socialist bookshop in London that was trashed by fascists today.

                            1. 2

                              I’m disappointed to read the negative comments on TFA complaining that the author has “merely” identified a problem and called on us to fix it, without also implementing the solution. That is an established pattern; a revolution has four classes of actor:

                              • theoreticians
                              • propagandists
                              • agitators
                              • organisers

                              Identifying a problem is a necessary prerequisite to popularising the solution, but all four activities need not be undertaken by the same person. RMS wrote the GNU Manifesto, but did not write all of GNU. Martin Luther wrote the Ninety-five Theses, but did not undertake all of the Protestant Reformation. Karl Marx and Friedrich Engels wrote the Communist Manifesto but did not lead a revolution in Russia or China. The Agile Manifesto signatories wrote the manifesto for agile software development but did not all personally tell your team lead to transform your development processes.

                              Understandably, there are people who do not react well to theory, and who need the other three activities to be completed before they can see their role in the change. My disappointment is that the noise caused by so many people saying “you have not told me what to do” drowns out the few asking themselves “what is to be done?”

                              1. 2

                                I read through and concluded that the author says nothing. I cannot exactly identify the problem he raises. And judging by the first lines, I think he’s an idiot.

                                That computing is so complex is not the fault of nerdy 50-year-old men. If nerdy 50-year-old men had designed that stuff, we’d be using Plan 9 with Prolog and not have as many problems as we do now.

                                The current computing platforms were created by many-bodied companies and committees with commercial interests. They’ve provided all the great and nice specs such as COBOL, ALGOL, HDMI, USB, UEFI, XML and AHCI, just a few to start the list with. All of the bullshit is the handwriting of the ignorant, not of those playing Dungeons & Dragons or solving Rubik’s Cubes.

                              1. 3

                                I agree, but do not see how the genie can be put back into the bottle. Even Apple, who since forming the WHATWG have (failed at building their iAd advertising business and subsequently) decided that privacy is an important marketing differentiator, are limiting the worst excesses of browser tracking but not fundamentally disarming it.

                                1. 5

                                  Well, there are several ways, actually.

                                  First, we need more people to really understand these issues. That’s why I wrote this.
                                  These are both huge browser security issues and geopolitical ones.

                                  Then we need browser vendors to fix them.
                                  The DNS issue is something governments should work out, technically and politically.
                                  The JavaScript issue is easier to fix, as it’s entirely a software issue.

                                  As a first step, browsers could mark as UNSAFE all web pages that use JavaScript, as they do with unencrypted HTTP sites these days.
                                  Then it’s just a matter of going back to semantic hypertexts, with better markup, better CSS and better typography in the browser. I’d like to see XHTML reconsidered, but with the lessons learned.

                                  For example, I could see an <ADVERTISEMENT> tag working well.

                                  But the main point is to avoid any Turing complete language in the browser.

                                  1. 6

                                    The web has gotten to the point where it’s primarily used as a distribution mechanism for scripts. And, as hypertext, static HTML is not acceptable (having none of the normal guarantees about content stability that hypertext ought to make). So, if we’re going to drop the JavaScript sandbox, we might as well bite the bullet and also drop DNS and HTTP at the same time.

                                    In other words, we replace one thing (“the browser”) with two things – a sandbox that downloads and runs scripts, and a proper hypertext browser & editor. Both of these should use a non-host-oriented addressing scheme for identifying chunks of content, and support serving out of its own cache to peers. (The way I’d do it is probably to run an ipfs daemon & then use it for both fetching & publishing, and then manage pinning and unpinning content based on some inter-peer communication protocol.)
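
                                    For a concrete feel of the non-host-oriented half, here is a minimal sketch against a local ipfs daemon’s HTTP API (it assumes the daemon is listening on the default API address, and elides error handling):

                                    ```python
                                    # Publish and fetch by content address via a local IPFS daemon.
                                    import requests

                                    API = "http://127.0.0.1:5001/api/v0"

                                    def publish(data: bytes) -> str:
                                        # The returned CID is derived from the bytes themselves, so an
                                        # address can never silently point at different content.
                                        r = requests.post(f"{API}/add", files={"file": data})
                                        r.raise_for_status()
                                        return r.json()["Hash"]

                                    def fetch(cid: str) -> bytes:
                                        # Any peer holding these bytes can serve them; there is no origin host.
                                        r = requests.post(f"{API}/cat", params={"arg": cid})
                                        r.raise_for_status()
                                        return r.content

                                    cid = publish(b"hello, non-host-oriented world")
                                    assert fetch(cid) == b"hello, non-host-oriented world"
                                    ```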

                                    I’ve made this suggestion before. (I’m pretty sure you’ve been privy to some of the discussions I’ve had about it on the fediverse.)

                                    The point I’d like to underline here is: if the only thing salvageable about the web is the use of internal markup and host-oriented addressing schemes for static semantic hypertext, then nothing about the web is salvageable.

                                    HTTP+HTML is an unacceptable hypertext system by 1989 standards, and thirty years on we should set our sights higher. Luckily, problems that were hard but not impossible in 1989 (like ensuring that data gets replicated between independent heterogeneous nodes) have been made trivial in practice by the proliferation of both high-speed always-on connections & solid, well-engineered open source packages.

                                    1. 3

                                      Honestly, among the few hypertexts I used in the past (GNU Info being the only one whose name I remember), the web was the best one from several points of view.

                                      I think XHTML and the related stack were pretty good, and I used XML Namespaces extensively to enrich web pages with semantic content while preserving accessibility. But, don’t worry, I don’t dare to argue about hypertexts… with you! :-D

                                      I welcome any proposal. And any experiment. And any hack.

                                      I see these as huge security vulnerabilities in the very design of the Internet and the Web.
                                      Now we need to fix them. My hope is in Mozilla, as they claim to care about security and privacy, and this actually puts lives at risk. So, we need to go back to the drawing board and design a better Web on top of the lessons we learned.

                                      In other words, we replace one thing (“the browser”) with two things – a sandbox that downloads and runs scripts, and a proper hypertext browser & editor.

                                      Agreed for the hypertext browser and editor.

                                      But the fact you feel the need for a “sandbox that downloads and runs scripts” is just another symptom of the disease that made me create Jehanne. It’s basically a shortcoming of mainstream operating systems!

                                      Unfortunately, hacking HTML and HTTP to patch this proved to be the wrong approach.

                                      IMHO, we need a properly designed distributed operating system (and a better network protocol to serve such distributed computation).

                                      1. 2

                                        the fact you feel the need for a “sandbox that downloads and runs scripts” is […] basically a shortcoming of mainstream operating systems!

                                        Absolutely agreed. The web has basically become a package manager for unsafe code. Replacing that function with a dedicated sandbox is only an incremental improvement.

                                        However, if we stop using URLs that can have their content changed at any time and start using addresses that have their own validation built in, then the problem of scripts being completely swapped out at runtime to target particular people and machines goes away. This is an incremental improvement but a very important one.
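
                                        A toy illustration in plain Python (the helper names are hypothetical): when the address is a hash of the content, validation comes built in, and a swapped-out script is detectable on arrival:

                                        ```python
                                        import hashlib

                                        def address_of(content: bytes) -> str:
                                            # The address is derived from the content, not from a host.
                                            return "sha256-" + hashlib.sha256(content).hexdigest()

                                        def verify(address: str, content: bytes) -> bytes:
                                            if address_of(content) != address:
                                                raise ValueError("content does not match its address; refusing it")
                                            return content

                                        script = b"console.log('hello')"
                                        addr = address_of(script)            # published alongside links to the script
                                        verify(addr, script)                 # fine
                                        # verify(addr, b"targeted payload")  # raises: the runtime swap is detectable
                                        ```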

                                        we need a properly designed distributed operating system (and a better network protocol to serve such distributed computation)

                                        Likewise, completely agreed. A proper distributed OS (as opposed to an ad-hoc layer for swapping sandboxed unsafe code) could be designed data-first instead of host-first (even though almost all distribution protocols, even Erlang’s, are still host-oriented). The problems that web tech tries and fails to paper over are solved now by SSB, IPFS, BitTorrent, CORD, and other systems, all open source and with documented designs.

                                        1. 2

                                          SSB, IPFS (and Dat?) all have a pretty severe drawback that’s tied to their biggest strength: immutability and non-deletability. This gets you a lot of things that the original vision of hypertext wants (byte-range transclusion, external markup, links that never break), and I understand that’s why you feel strongly about it. But never being able to delete or edit anything in place (only to publish updated versions as a new document) is not a humane basis for the web.

                                          If these solutions really took off today, I’m pretty sure that in 10 years, we’d be begging for the horrors of 2018’s web. It would facilitate harassment and hate speech to an extent that would make twitter.com look like kitty.town, and moderation tools would be near impossible to implement.

                                          I use and like SSB, but as it’s growing, it’s starting to show some of these problems. Due to founder effect, the community on there is pretty kind, but people are becoming aware of the weaknesses for moderation. No ability to delete or edit posts is a big one; the only fix is to define delete/patch messages that well-behaved clients will respect, but the original will always be available. And the way the protocol works, blocks (the foundation of a humane social media experience) are one-way only: if you block someone you can see them, but they can still see you. (Contrast to Mastodon and mainline Pleroma).
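
                                          Roughly, the well-behaved-client fix looks like this (the message shapes here are hypothetical, not SSB’s actual schema):

                                          ```python
                                          # An append-only log; deletes are just later messages applied at render time.
                                          log = [
                                              {"id": 1, "type": "post", "text": "hello"},
                                              {"id": 2, "type": "post", "text": "regrettable take"},
                                              {"id": 3, "type": "delete", "target": 2},  # the original bytes still exist
                                          ]

                                          def render(log):
                                              deleted = {m["target"] for m in log if m["type"] == "delete"}
                                              return [m for m in log if m["type"] == "post" and m["id"] not in deleted]

                                          print(render(log))  # only post 1 survives, but a hostile client can ignore message 3
                                          ```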

                                          On the other hand, for the narrower case of delivering sandboxed code, IPFS or BitTorrent are basically exactly what you want, so my complaints here may not be strictly relevant.

                                          1. 2

                                            Yeah, undeletability is a can of worms. It’s a huge legal problem, and a potentially large social problem.

                                            It’s also fundamental to the basis of functioning hypertext.

                                            SSB is mostly being used as a social network – a context where undeletability is a much bigger deal, since the expectation is that posts are being made off-the-cuff. Hypertext (excluding the web, of course) is usually thought of in terms of a publishing context – for distributing essays, criticism, deconstructions, syntheses, historical notes, anthologies, etc. (XanaduSpace even had a public/private distinction, where the full hypertext and versioning facilities were available for documents that were not made available to other users, with an eventual ‘publication’ step broadcasting a copy with previous revision history elided.) The expectations are much different, both around interest in archival by third parties, and around the expectation of privacy or deniability: hypertext is very much in the vein of, say, academic journal publishing[1].

                                            In other words: what I’m looking for with regard to new hypertext systems will look less like a deeply-intertwingled Mastodon and more like a deeply-intertwingled Medium. (This probably even comes down to transcopyright. Nobody implements transcopyright, but the closest thing on the web is probably Medium’s open paywall, not token-word’s pseudo-transcopyright system.)

                                            [1] The big projected commercial application for XanaduSpace was law offices: we would market it to paralegals, have a dedicated private permapub server for the law office pre-loaded with case law, and replace the current mechanisms they use for searching and discussing case law (which rely heavily on Microsoft Word’s “track changes” feature).

                                1. 2

                                  The non-syncing of spec code with implementation code really feels like the big barrier to making this usable in general.

                                  One idea I had to tackle this issue in a language like Python would be to allow for executable doc-strings within the code that let you write specs inline and have those be parsed out (but by default it would use the actual in-code implementation).

                                  That way you could write simplifying specs for certain parts of the code (say, that the result of input() will be an arbitrary string, instead of waiting on stdin when checking), while still avoiding duplication, because most code is straightforward.
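
                                  A very rough sketch of that idea, with every name hypothetical:

                                  ```python
                                  # A decorator that pulls a simplified spec out of the docstring; checking
                                  # tools would flip CHECK_MODE on, while normal calls use the real code.
                                  import textwrap

                                  CHECK_MODE = False

                                  def specced(fn):
                                      src = textwrap.dedent(fn.__doc__.split("spec:")[1])
                                      ns = {}
                                      exec(src, ns)  # the spec defines a function named `model`
                                      model = ns["model"]
                                      def wrapper(*args, **kwargs):
                                          return model(*args, **kwargs) if CHECK_MODE else fn(*args, **kwargs)
                                      return wrapper

                                  @specced
                                  def read_line():
                                      """Read one line of user input.

                                      spec:
                                          def model():
                                              return "any string"  # don't block on stdin while checking
                                      """
                                      return input()
                                  ```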

                                  Though to be honest, this might be very hard to get right. I feel like it’s a bit like the ORM/type-system issue, where type systems are usually rigid and don’t give much “type-check-time” flexibility, but ORMs are usually defined dynamically (relative to the type system).

                                  1. 6

                                    This is why I ended up spending less time with TLA+ after learning it. However, learning it was an incredibly useful exercise that has dramatically informed the way I build systems. It made me start to ask why I can’t write TLA+-style invariants and check executions of concurrent and distributed algorithms I build in general-purpose languages.

                                    I realized I actually can get similar results on real code if I build systems carefully: schedule multithreaded interleavings at cross-thread communication points; simulate distributed clusters with buggy networks in a single process at accelerated speed, à la discrete event simulation; treat things that use files as communicating with future instances of themselves, so you can record logs of file operations, arbitrarily truncate them, and ensure invariants hold after restart.
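
                                    As a tiny illustration of the first of those techniques, here is the classic lost-update bug caught by enumerating every interleaving of two non-atomic increments and checking an invariant over each schedule, instead of hoping a lucky test run hits it:

                                    ```python
                                    from itertools import permutations

                                    # Each "thread" does a non-atomic increment: read, then write(read + 1).
                                    STEPS = [("t1", "read"), ("t1", "write"), ("t2", "read"), ("t2", "write")]

                                    def interleavings():
                                        # All orderings that preserve each thread's own program order.
                                        return {p for p in permutations(STEPS)
                                                if all(p.index((t, "read")) < p.index((t, "write"))
                                                       for t in ("t1", "t2"))}

                                    def execute(schedule):
                                        counter, local = 0, {}
                                        for tid, op in schedule:
                                            if op == "read":
                                                local[tid] = counter
                                            else:
                                                counter = local[tid] + 1
                                        return counter

                                    bad = [s for s in interleavings() if execute(s) != 2]
                                    print(f"{len(bad)} of {len(interleavings())} schedules break the invariant")
                                    ```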

                                    My main project right now is trying to make the above ideas into nice libraries that let people run their code in more realistic ways before opening pull requests, and to integrate those tools into the construction process of the sled database.

                                    1. 3

                                      An idle thought which can go on my list of side projects to start “one day” (probably right after the bus accident): symbolic execution could probably be used to demonstrate, if not enforce, the synchronisation of TLA+-type models with code. A symbolic executor can show the different cases a program will execute based on its input and the outputs that result; those can be compared with the cases discovered by the model-checking tool.

                                      Hooray, I’m not the first person to have that idea! You can combine formal methods with symbolic execution and meet in the middle.
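
                                      A toy version of the “show the different cases” half, using z3 with a hand-encoded path condition (a real symbolic executor would extract the conditions from the program automatically):

                                      ```python
                                      from z3 import Int, Not, Solver, sat

                                      x = Int("x")
                                      branch = x > 10  # path condition of an `if x > 10` in the implementation

                                      for label, cond in (("then-branch", branch), ("else-branch", Not(branch))):
                                          s = Solver()
                                          s.add(cond)
                                          if s.check() == sat:
                                              # A concrete input driving this case, to compare against the
                                              # cases the model checker discovered in the spec.
                                              print(label, "reachable, e.g. x =", s.model()[x])
                                      ```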

                                      1. 2

                                        One idea I had to tackle this issue in a language like Python would be to allow for executable doc-strings within the code that could let you write specs inline, and have those be parsed out (but by default it would use the actual in-code implementation)

                                        While this example was pretty close to the code implementation, TLA+ (like most specification languages) is too flexible to allow easy embedding. Here the processes were actual threads, but they could just as easily be servers, or human agents, or an abstracted day/night cycle. In one spec I wrote, one process represented two separate interacting systems that, at that level of detail, were assumed to have perfect communication.

                                        1. 2

                                          I would love to use my smartphone less. I laze around on it in the morning and at night. I wish there were something that auto-disabled my phone when I’m in bed.

                                          That all said, the smartness of a smartphone is invaluable to me. I use it to look up directions, change plans on the fly, learn about artists when I see artwork, and read in the subway. I need a humane smartphone, instead of one filled with apps that aim to colonize my mind.

                                          I sometimes save time by eating out. Quick meals at home are cheaper and less time-consuming than going somewhere, ordering, waiting for the food, eating, and coming back. But anything more involved than oatmeal is hard for someone living alone. It takes lots of time to organize recipes, get the ingredients, and cook. You have to choose between eating lots of leftovers or spending even more time per meal. It can be sensible to eat out if you can, from a time point of view.

                                          1. 3

                                            Regarding smartphone usage, I agree; I can’t see myself switching away from a device that has substantial daily utility. For starters, you can play with blocking websites. I felt like my productivity tanked when switching from Android to iOS, yet only recently realized you can get the same effect as editing the hosts file via Settings → Restrictions → Websites → Limit Adult Content, then adding some “never allow” sites.

                                            Regarding cooking, I feel like that’s a mindset shift. I went from wanting food to be automated to having a process of cooking some routine dishes (tacos, smoothies, pizzas from scratch, rice/quinoa stir-fry) in a way that I pretty much know what ingredients to keep around. Time-wise, things can be mixed too: stretching while the eggs are cooking, doing a bodyweight workout while the oven is heating up or the sauce is getting reduced. I live alone and travel pretty regularly; I just try to plan ahead and, in the rare case, give veggies/fruit I can’t use to a neighbor.

                                            1. 1

                                              Thanks! I didn’t know you could do hosts blocking on iOS. I hadn’t thought about mixing cooking with exercise like that.

                                              I started making avocado toast as a quick foray into cooking. Then I learned a mouse has been eating my bread. So now I’m blocked on a mousetrap getting shipped over. What a life, man.

                                              1. 1

                                                Cool to hear! FWIW, I find myself keeping sliced bread in the freezer then toasting it since I eat a loaf fairly slowly (over the course of a few weeks).

                                            2. 2

                                              My big reasons for not going back to a feature phone are maps and particularly in-car maps (I use Android Auto, and don’t need a separate sat nav as a result). But other than that it just makes me waste time on places like twitter and lobsters. :)

                                              1. 3

                                                I’m the same. By way of a halfway approach, I use a OnePlus 5T and Lineage for MicroG. There are cheaper, compatible options out there, but I plan to keep this phone for a good few years.

                                                Using Lineage for MicroG means I’m not signed into Google services, but I still have access to maps and Signal if I want them. The F-Droid app store is a lot lighter than Google Play. I haven’t tried Android Auto, though. It might be worth a look, although it might not fully meet every use case.

                                                1. 2

                                                  My hopes are for Light Phone 2 now. https://www.indiegogo.com/projects/light-phone-2-design I pre-ordered one, and basic messaging + navigation would be perfect.

                                              1. 1

                                                ah sorry, duplicate of https://lobste.rs/s/j3ytqn/bit_history_about_apple which was posted under an editorialized title.

                                                1. 14

                                                  I disagree, because that will only lead to a morass of incompatible software. You refuse for your software to be run by law enforcement, he refuses for his software to be run by drug dealers, I refuse for my software to be run by Yankees — where does it all end?

                                                  It’s a profoundly illiberal attitude, and the end result will be that everyone would have to build his own software stack from scratch.

                                                  1. 5

                                                    Previous discussions on reddit (8 years ago) and HN (one year ago).

                                                    1. 4

                                                        “It’s a great way to make sure proprietary software is always well funded and has congress/parliament in their corner.” (TaylorSpokeApe)

                                                    2. 1

                                                        I don’t buy the slippery-slope argument. There are published codes of ethics for professional software people, from e.g. the BCS or ACM, that may make good templates for what constitutes ethical activity within which software may be used.

                                                        But by all means, if you want to give stuff to the drug-dealing Yankee cop when someone else refuses to, please do so.

                                                      1. 9

                                                        Using one of those codes would be one angle to go for ethical consensus, but precisely because they’re attempts at ethical consensus in fairly broad populations, they mostly don’t do what many of the people wanting restrictions on types of usage would want. One of the more common desires for field-of-usage restriction is, basically, “ban the US/UK military from using my stuff”. But the ACM/BCS ethics codes, and perhaps even more their bodies’ enforcement practices, are pretty much designed so that US/UK military / DARPA / CDE activity doesn’t violate them, since it would be impossible to get broad enough consensus to pass an ACM code of ethics that banned DARPA activity (which funds many ACM members’ work).

                                                        It seems even worse if you want an international software license. Even given the ACM or BCS text as written, you would get completely different answers about what violates it or doesn’t, if you went to five different countries with different cultures and legal traditions. The ACM code, at least, has a specific enforcement mechanism defined, which includes mainly US-based people. Is that a viable basis for a worldwide license, Americans deciding on ethics for everyone else? Or do you take the text excluding the enforcement mechanism, and let each country decide what things violate the text as written or not? Then you get very different answers in different places. Do we need some kind of international ethics court under UN auspices instead, to come up with a global verdict?

                                                        1. -10

                                                            I had a thought to write software so stupid no government would use it, but then I remembered Linux exists.

                                                        2. 4

                                                          It’s not a slippery slope. The example in the OP link would make the software incompatible with just about everything other than stuff of the same license or proprietary software. An MIT project would be unable to use any of the code from a project with such a rule.

                                                      1. 3

                                                          It might work up to a certain point for buyers who otherwise would buy proprietary software. Their EULAs are already ridiculous. I’ll note that the military has been known to sometimes just steal stuff if they need it. Here are Army and Navy examples. In theory, they can make it classified, too, to try to block you from proving it in court. At that point, you’re trying to beat them with DRM plus online license checks to reduce the odds of that. That annoys regular customers, though.

                                                        This seems most doable with a SaaS solution.

                                                        1. 3

                                                          buyers who otherwise would buy proprietary software.

                                                          exactly, the freedom to study and share is a pre-sales experience.

                                                          1. 1

                                                              A case where the government settled for $50 million is a bit ambiguous: they suffered a consequence for that theft. If this license led the military to make regular payouts for violating licenses, I would count that as a partial success.

                                                            1. 1

                                                                That was a case where they got caught. Most acts of piracy don’t get caught, and they’re even less likely to in organizations where it’s illegal to even discuss what they’re doing.

                                                          1. 3

                                                            Overall, the story seems quite positive: despite not easily finding a sympathetic audience for their marketing, their Linux product has paid for itself just through word-of-mouth recommendations. Some more marketing work could help it go even further.

                                                            1. 3

                                                              Over on the orange site, commenters are saying that NeXT failed because they didn’t get many customers, then got bought by Apple who didn’t immediately see a turnaround. From a startup perspective though, NeXT managed to live off the funding they could get until they made it to “exit” and they proved to be of immense strategic value to the acquiring company.

                                                              But the lesson I take from NeXT is that they were great - better than many would associate with Steve Jobs - at compromise. They looked at the Mac (quite closely, of course, with so many ex-Apple staff on board, leading to the lawsuit that stopped them competing in the PC market), they looked at the Alto and Smalltalk, and they built the bits of the Alto that the Mac missed out of existing technology. The GUI came from Adobe, OOP from Stepstone, the image model was simulated first with removable media and then with NFS, and the Ethernet networking came from 4.2BSD and Mach, which also made the whole lot accessible to people with existing software. They could have gone down the Be route and started from scratch, but they licensed and Free Softwared their way to having a product.

                                                              1. 2

                                                                Yeah I agree that you often have to cobble something together from existing parts to achieve something big. That’s one of the lessons I learned from reading Stallman’s biography. As far as I remember, GCC, Emacs, and many other GNU projects all started from borrowed code.

                                                                They didn’t just start typing in a blank text file. That would have pushed it from a “huge undertaking” to “unfinishable”.

                                                                The key though is you have to deeply understand all those parts and not just blindly copy them… otherwise you’ll get an incoherent system. But if you choose carefully, this can really accelerate the project and make it possible to ship.

                                                              1. 2

                                                                back on a cryptography thing at work, integrating Vault’s transit backend into our app. also considering modernising a back-office tool by deleting it and using our CRM for its purposes, if that is possible.

                                                                non-work: final choir practice before Warwick Folk Festival. playing some violin duets with a colleague. more writing of OOP the Easy Way.

                                                                1. 5

                                                                  Today, hung out in Coventry with a friend. There are a few interesting galleries in the Herbert at the moment, including a retrospective on Rare with a huge behind-the-scenes on Sea of Thieves, and a history of games and toys. Managed to get another release of OOP the Easy Way out too.

                                                                  Tomorrow: probably more writing, I’m itching to get out of Part One of the book but it’s the place where I’ve already done most research. Also playing and dancing at a local church fundraiser.

                                                                  1. 1

                                                                    of course, the tomorrow bit didn’t happen. I moved furniture and slept.

                                                                    Responding to change over following a plan.

                                                                  1. 2

                                                                    Ducky Shine 6 with Cherry MX Brown switches. it’s got the gamer-ish RGB lights with about a billion different customisable zones, so I’ve set it to a static colour.

                                                                    1. 5

                                                                      There have been good and not-so-good points about all of my jobs; the one I have fondest memories of is when I worked for the now-defunct iOS app studio Agant. I joined at the inception of a very exciting project for me: I was the lead developer of the Discworld Ankh-Morpork app for iPad. I was working with a team of friends, many of whom were fans of the books; we met Sir Pterry and the folks at the Discworld Emporium in Wincanton; we had a lot of fun; and we made a thing we were all proud of. There were significant technical challenges (keeping the app responsive at 60fps, within memory and space constraints, ended up meaning replacing some Foundation data structures with app-specific alternatives and running the Instruments profiling tool before committing any potential change), schedule challenges, and competing requirements on our time from other projects, but it all felt worth working through.

                                                                      Unfortunately this time also coincided (not a coincidence, obviously) with the beginnings of burnout. I loved that job and wanted to stay longer, but the company wound up around the time of my first workiversary and we were all laid off. I found another job that should’ve been great, but ended up quitting through actual burnout and depression after a few months. Then the same happened after my next job, and basically I have not had the same levels of joy, excitement or fulfilment out of work since.