1. 74
    • Stream start

    • Some dunes. Violin. A woman playing drums. Various landscapes. More people playing instruments on sidewalks. Californiaaaaaaaaaaaa!

    • Tim here. Apple really likes California.

    • Apple TV+. It has shows, so I hear. Some of them even won awards; 130 wins, 500 nominations. 35 Emmy awards if that counts for anything. Trailer for shows.

    • iPad. It’s really popular because it’s versatile, so I hear. Pencils exist. iPad OS. There’s also apps. What a world! 40% growth. Base model improvements, looks to have the same classic design as before with a home button.

    • Melody to talk about the new model on the pier. A13 Bionic SoC, 20% improvement across the board from the predecessor’s A12. Curbstomps Chromebook and Android tablet specs either way. Front camera improvements; 12MP ultrawide with centre stage feature. As with the last event, centre stage works in third party applications. True tone on this one too. Compatible with previous generation’s accessories and first gen pencil.

    • It’ll ship with iPad OS 15. Widgets like on a phone. Easier multitasking. Quick note.

    • Base model $329 with 64 GB. Edu discount is $299. Two colours, cellular available. Available next week.

    • Back to Tim. Now it’s about iPad mini. It’s used in critical verticals due to its size. Big upgrade to it. This one has the same design as iPad Pro.

    • Katie on stage. It comes in colours. “Liquid Retina” display. Bezelless 8.3” screen in the same footprint. Wide colour, 500 nits, lowest reflectivity, true tone. Touch ID is in the power button. It’s fast like you expect. 40% jump in perf over prev gen CPU, 80% for GPU. It has a NN processor, 2x improvement over prev gen. Translate app improvements. Type C port. Wide array of USB accessories. 5Ggggggggggg. Real-world examples of both. Both front and back cameras improved. Back 12MP, larger aperture, true tone flash, focus pixels. New ISP w/ improved HDR. 4K recording. Front 12MP ultrawide also with centre stage. Stereo in landscape. Even more accessories. Cases that match. Uses the second gen pencil. Demo video.

    • $499 base, cellular option. Available next week. Yes, environment. 100% recycled Al chassis. Now all the iPads have one.

    • Tim Apple. iPads still doing well. Watch. It wants you to touch grass more than you do already.

    • Jeff. Cyclist features. It detects bike activity in fitness, automatic pause and resume. Fall detection on bikes. E-bike support. New Watch. Series 7. Bigger display. 20% more screen area over previous generation. 1.7mm bezel, 40% thinner than before. Dimensions are about the same, but with softer corners. Light refraction at the edges for better looking curvature. 70% brighter indoors. UI tweaked for it.

    • Lauren. Touch targets are bigger to match the new screen. 50% more text on screen. More text input. Full keyboard for tap or slide. ML keyboard swiping. More watch faces, including denser complications. More durable; crack resistant front screen due to materials/geometry. IP6X for dust. WR50 water resist. 18 hour battery life. Charges 33% faster. Fast charger is Type C based. 45 min from 0 to 80%. 8 minutes of charging for 8 hours of sleep tracking.

    • Jeff. 5 Al colours. Other colours too. Nike and Hermes bands if you’re into the #brand. More bands, and compatible with previous bands. 100% recycled metal cases and magnets. Series 3 at $199, SE at $279, Series 7 at $399. Series 7 available later this fall. Demo video. watchOS 8.

    • Jay on the Fitness+ thing they have. 4K fitness videos and it syncs with the watch, if that’s your thing. It’s available in more countries; subtitled fitness videos with English audio??? Walking podcast??? There’s an entire cast of trainers.

    • To Sam to talk more about available workout types. Jessica with more workout types. Pilates, meditation, winter sports, etc. (The meditation and walking stuff looks vaguely parasocial to me, but…) Bakari with more workout types.

    • Back to Jay. Group workouts from iMessage/FaceTime. 32 people at once. Demo video. 6-core CPU (2 high-performance/4 efficiency cores). ML cores. 50% faster than the competition. 4-core GPU.

    • Tim. iPhone. Yes, they’re ahead as we know, and have privacy, so they claim. ANOTHER ONE. Looks similar to 12, but the rear cameras are arranged diagonally?

    • Kaiann to talk about iPhone 13. It’s flat like the 12. More durable front glass. IP68 water resist. Al frame in new colours. Front notch is less wide, 20% smaller. Environment! The aerial lines are made from recycled plastic. Inside the case makes room for a bigger battery. Also a mini. “Super Retina XDR”. 800 nits max, 28% brighter. HDR at 1200 nits. OLED with good contrast. P3 colour. Dolby Vision, HDR10, HLG. SoC is A15 Bionic.

    • Hope to talk about this. Apple widens their lead here. 5nm. 15B transistors. 2 fast/4 efficiency cores. 4-core GPU. Quite a lot faster than competition; 50% at CPU, 30% at GPU. 15.8T ops neural engine. Apps really using that ML dingus! Twice the cache, new video dec/enc. (I missed some of this due to it not saving, grrr.)

    • Kaiann. More camera/ISP. Wide camera mode. 47% more light. Finer pixels. f/1.6. Bigger sensor. 12 Pro Max enhanced OIS will be in the base/mini 13 camera. Better low light performance. Ultra wide mode improvements. Video. Cinematic mode.

    • Johnnie about this. “Rack focus” on iPhone. You can shoot like, real movies on it now. Demo video of the focus changing ability. It can hold focus on moving subjects, and change subject in real time. Anticipates people entering, and changes when people gaze away. It’s based on the study of cinema, fed into the ML woodchipper. Tap to change subject. Again to lock. Shoots in Dolby Vision HDR. Live grading.

    • Kaiann. 5G. Custom aerials and radios for more 5G bands. 200 carriers in 60 countries soon. Battery life improvements despite the things they put into it. Mini gets an hour and a half longer. Normal gets 2.5 hours more than previous. Part optimization, part just putting more battery in there. Shifts to LTE instead of 5G when 5G isn’t needed. Privacy: on-device speech recognition and the private relay/mail stuff. Accessory ecosystem, like the MagSafe (yes, they reused the name) stuff. Cases in a variety of colours. And a wallet with Find My, if that’s your thing.

    • $699 for mini, $799 for normal base price. 128 GB for base model, 512 GB option. Demo video.

    • Tim. iPhone Pro. ANOTHER ONE (DJ KHALED VOICE). Really emphasizing the cameras in the video. Three of them (actually maybe four). 13 Pro, as anyone could have guessed.

    • Joz on stage. It’s more pro-er. Stainless steel with fancy finish. Four colours. The blue one required fancy ceramic finish. Also a smaller notch. Fancy glass. Again, tougher glass. IP68. Internal redesign to fit bigger batteries and cameras. Also the same accessories. And yes, a Pro Max. Same A15 SoC. New ISP and display engine. Pro models get a 5-core GPU. 50% faster than leading competition. “Super Retina XDR”; OLED. 1000 nits peak outdoors. ProMotion 120 Hz display that can go down to 10 Hz, adaptive framerate (and response time). Lowers and raises framerate as needed for battery/smoothness. Third-party apps and games can take advantage. 6.1 or 6.7” panel.

    • Louis on cameras. 77mm telephoto, 3x optical zoom. Ultrawide with f/1.8. Wide f/1.5 with denser pixel arrangement. (Having stream difficulty.) More optical zoom for those. Better low light. Macro mode in ultrawide mode without a special lens. Minimum distance of 2cm. All have night mode.

    • Rebecca on software camera stuff. Improvements based on analysis of existing shots, better recognition of skin tones. Photographic styles for applying tweaks into the real-time processing pipeline.

    • Joz. Also has video features. They got like, real Hollywood people to shoot some video with the Pro model. They say it’s easier with fewer things to buy. You can just use normal commodity equipment to do fancy stuff now. It’s also smaller, which might be good for some different techniques? Recap on the camera stuff. Depth map in the videos, so it can be changed after shooting. Later this year, ProRes video support, ideal for editing. HW accelerated ProRes and faster NVMe make it possible; can be done at 4K 30 fps. Battery life improvements. Optimization and bigger battery, all-day battery. 1.5 hours more than the previous Pro, 2.5 more than the previous Pro Max. Environment! No plastic wrap outside. Recycled magnets and tin solder.

    • Pro starts at $999, Max at $1099. 1 TB storage option. 128 GB base. Preorder Friday, available 24th.

    • Tim. Recap. Bye! More California.

    1. 21

      2 fast/4 efficiency cores

      Took me a second to realize you weren’t making a “Fast and Furious” joke. I’m still disappointed about it.

      1. 19

        You are doing $deity’s work. Thank you.

        1. 2

          thank you for such a great summary !

          There were rumors that Apple Watch 7 would have glucose monitoring, I guess that did not materialize.

          1. -1

            Blood-sugar monitoring being considered as a reasonable mainstream feature in consumer electronics is proper dystopian stuff. What a society we’ve built for ourselves, at least in North America.

        1. 2

          I can’t wait to try Shenandoah and ZGC in production (one of my services does over 1.5K RPS with G1). I moved over to Kotlin a long time ago; seeing the same features land in Java and more JVM improvements gets me excited and confident in the future of the JVM ecosystem.

          1. 12

            It really is an exciting time to be working in a JVM language. I too moved over to Kotlin a while back, but I still closely follow what’s going on in Java.

            My hunch is that a lot of people who currently dismiss Java and the JVM as slow bulky dinosaur tech are going to be shocked when some of the major upcoming changes get released. Loom (virtual threads) in particular should drive a stake through the heart of async/await or reactive-style programming outside a small set of niche use cases, without sacrificing any of the scalability wins. Valhalla (user-defined types with the low overhead of primitive types) and Panama (lightweight interaction between Java and native code) will, I suspect, make JVM languages a competitive option for a lot of applications where people currently turn to Python with C extensions.

            1. 2

              My hunch is that a lot of people who currently dismiss Java and the JVM as slow bulky dinosaur tech are going to be shocked when some of the major upcoming changes get released

              I agree with this re the JVM, but isn’t Java mostly releasing language-level changes that are just catch-up with things that have been commonplace elsewhere for years?

              1. 4

                That’s a fair point, sure.

                Maybe a better way to frame it is that as language changes roll out, it’ll get harder to point to Java and say, “That’s such an obsolete, behind-the-times language. It doesn’t even have thing X like the other 9 of the top 10 languages have had for years.”

                Of course, Java will never (and should never, IMO) be on the bleeding edge of language design; its designers have made a deliberate choice to let other languages prove, or fail to prove, the value of new language features. By design, you’ll pretty much always be able to point to existing precedent for anything new in Java, and it’ll never look as modern as brand-new languages. My point is more that I think the perception will shift from, “Java is obsolete and stagnant” to, “Java is conservative but modern.”

              2. 1

                My Android client app shares code with the backend (both are in Java).
                Android’s Java is at about the JDK 8+ level (https://developer.android.com/studio/write/java8-support-table ). The backend is currently on JDK 11.

                So sharing the code between client and backend is becoming more challenging.

                I think if I move the backend to JDK 17, it will be harder to share code (if I take advantage of JDK 17 features on the backend).

                I guess the solution is to move both backend and frontend to Kotlin… but that’s a lot of work without significant business value.

                1. 2

                  Nothing prevents you from using JDK 17 with the Java 8 language feature level, essentially marking any use of new features as a compile error and making sure your compiler produces Java 1.8-compatible bytecode. That’s what we do in our library that needs to be JDK 8+ (but we use JDK 8 on the CI side to compile the final JARs to be on the safe side). Then you can run that code on JVM 17 on the server and take advantage of the JVM improvements (but not the new language features). We have decided to add Kotlin gradually where it makes sense instead of a full rewrite (e.g. when we’d want to use coroutines).

                  1. 2

                    You could also just stick to 11. It’ll be supported for years.

              1. 2

                 A related thought (but likely outside of the goals for catabase) – it would be really great if there were a method/science that could help identify whether the data set I have at hand forms a Group or a Category.

                 That is: I give it a bunch of JSON objects, where I ‘mark’ which attributes compose the ‘unique ID’ of a given JSON object. In reply, the magic method gives me back:

                 : “under ‘string concatenation’ this object set forms a non-abelian group.”

                 That is, all JSONs in the set I provided have unique IDs, and every unique ID was formed by concatenation of other unique IDs.

                 With that capability working, the next feature would be: take any JSON field in the given objects (as long as it is present in every JSON) – and run a ‘simulation’ to see which of those fields form ‘IDs’ that make a category, and under which operation.

                With the overall goal to find categorical structures in a given set of data.
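
                 A toy sketch of that first ‘magic method’ step, with made-up data (the `id` field plays the role of the marked unique ID). It only checks closure under concatenation, which is necessary but nowhere near sufficient for a group: concatenation is associative for free, but a group also needs an identity and inverses, and in fact no finite set of nonempty strings can be closed under concatenation at all, since lengths only grow:

```python
import json
from itertools import product

# Hypothetical input: each object's "id" field is taken as its unique ID.
docs = [json.loads(s) for s in (
    '{"id": "a"}', '{"id": "b"}', '{"id": "ab"}', '{"id": "ba"}',
)]
ids = {d["id"] for d in docs}

# Concatenations of pairs of IDs that are not themselves IDs in the set.
missing = {x + y for x, y in product(ids, repeat=2)} - ids

if missing:
    print("not closed under concatenation; e.g. missing", sorted(missing)[0])
else:
    print("closed under concatenation")
```

                 So the interesting version of the question is probably about partial structures (which pairs compose, under which candidate operation), which is closer to the category framing than the group one.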

                1. 1

                  In case others are interested in the ETL process to get the data into the state needed by the ACE engine, there is a link within the article

                  “.. Details of the ACE extract-transform-load (ETL) from the OMOP CDM are provided in the Supplementary Materials .. .”

                  The linked doc outline:

                  “… ACE in-memory datastore and ETL Generating an ACE in-memory datastore of patient objects involves an extract-transform-load (ETL) process to convert patient records from a data source into the ACE datastore format. This ETL process consists of the following steps:

                  1.  Download data from a data source (e.g. a SQL database, flat files or Apache Avro files). 
                  2.  Materialize source data dictionaries as well as join operations between different tables. Sort the data to enable construction of individual patient data objects.
                  3.  Use hierarchical information from source data dictionaries (e.g. disease groupings from ICD) to expand source data records (e.g. expand patient records of ICD9 250.00 to include ICD9 250). Generate individual patient data objects.
                  4.  Generate feature indices that map each patient object to the feature instances it contains, to enable fast lookup.
                  5.  Convert patient data objects into compressed in-memory objects that do not require de-serialization.
                  

                  …”

                  1. 6

                    I think for application developers, a big potential use of SMT is checking rule equivalences. If we have a set of business rules A and want to add rule z, what’s an input that satisfies A but not A+z? I’ve toyed a bit with this but haven’t tried it in anger yet, though I know at least one person who successfully did it.
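
                    For what it’s worth, the shape of the query can be illustrated with a toy brute-force search over a small finite domain; the rules here are made up, and a real SMT solver would answer the same “satisfies A but not A+z” question symbolically, over unbounded domains:

```python
from itertools import product

# Hypothetical business rules A (all must hold for an input to pass).
rules_a = [
    lambda age, income: age >= 18,
    lambda age, income: income >= 20_000,
]

# Candidate new rule z we are thinking of adding.
rule_z = lambda age, income: age <= 70

def counterexample(rules, new_rule, domain):
    """Return an input satisfying every rule in `rules` but not `new_rule`."""
    for x in domain:
        if all(r(*x) for r in rules) and not new_rule(*x):
            return x
    return None

domain = product(range(100), range(0, 100_000, 5_000))
print(counterexample(rules_a, rule_z, domain))  # (71, 20000): passes A, fails z
```

                    With a solver you would instead assert A and not-z and ask for a model; if it reports unsat, A already implies z and the new rule changes nothing.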

                    1. 2

                      One similar use case I often come across is whether a given SQL WHERE clause ‘covers’ several others.

                      This often comes up in caching and ETL.

                      In caching – you want to know if a cache that was created with where.clause.1 would contain items for where.clause.2 and where.clause.3.

                      The same with ‘request caches’, where a cache for API.call.1 will contain a superset of entries for API.call.2 or .3.

                      In ETL this comes up in many batch jobs. Batch jobs are usually limited to a subset of data, and the question is often whether the ‘data perimeter’ of batch job 1 contains the subsets for jobs 2 and 3.

                      Properly identifying the subsets of data retrieved by APIs or by SQL reduces the need for ‘duplicates’.
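
                      The caching case can be phrased the same way; here is a sketch with made-up Python predicates standing in for WHERE clauses and a brute-force check over sample rows (a solver would instead prove the implication for all possible rows):

```python
# Hypothetical predicates standing in for SQL WHERE clauses.
clause_1 = lambda row: row["status"] == "active"                      # cache was built with this
clause_2 = lambda row: row["status"] == "active" and row["age"] > 30  # incoming query

def covers(broad, narrow, rows):
    """True if every row selected by `narrow` is also selected by `broad`."""
    return all(broad(r) for r in rows if narrow(r))

rows = [
    {"status": "active", "age": 25},
    {"status": "active", "age": 40},
    {"status": "closed", "age": 40},
]
print(covers(clause_1, clause_2, rows))  # True: the cache can serve clause_2
print(covers(clause_2, clause_1, rows))  # False: the (active, 25) row is a counterexample
```

                      The solver version of `covers` is just checking that clause_2 implies clause_1, i.e. that “clause_2 and not clause_1” is unsatisfiable.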

                    1. 1

                      What’s the tl;dr on this? Is it using jails?

                      1. 6

                        It’s trying to reuse the whole of the OCI infrastructure, rather than reinvent something more FreeBSD-like (in contrast to something like pot, which may be more the ‘FreeBSD way’ but which is fighting an entrenched ecosystem formed by FreeBSD ignoring containers for a decade). This means:

                        • Containerd is responsible for managing containers. It can talk to distributed orchestration frameworks, such as Kubernetes. It is responsible for things like assembling containers out of individual layers (and already supports ZFS, so has a clean path to working easily on FreeBSD).
                        • A containerd shim is responsible for launching containers. On Linux, this is typically runc, which manages the mess of cgroups, namespaces, and seccomp-bpf that gives a vaguely jail-like abstraction. In this stack it will typically be runj, which manages jails. On Linux it can also be something layered on top of KVM, on FreeBSD eventually I hope that there will be something on top of bhyve.
                        • Moby (Docker is the branded version of the Moby open-source project) that is mostly responsible for building containers. This is the thing that reads a Dockerfile and does stuff based on what it says.

                        There are a few things that could be improved in the base system for this to work well:

                        • Currently, ifconfig has to be run from inside the jail to set up networking, which means that runj needs to be extended to support running a command in the jail before it runs the real container entry point. This could be fixed if ifconfig could jail_attach itself and configure networks for jails from the host. There were some patches to do that under review, I don’t remember what happened to them. This is also really annoying for running Linux containers because you need to inject a FreeBSD ifconfig binary into the Linux system.
                        • There’s no base-system abstraction over the multiple different firewalls. This pulls in a dependency on pf, which seems to be what everyone (including me) uses but it’s somewhat unfortunate that people using the others are left out. In my ideal world, the project would pick one and provide compat shims that translated rules in either format into a common in-kernel representation.
                        • The FreeBSD base system is pretty big. There’s a load of stuff like the toolchain and svnlite that are not needed for 99% of containers. Actually, those two are the only things larger than 1 MiB in /usr/bin and if you remove them you remove about 80% of the size of /usr/bin. Removing the debug symbols and static libraries from /usr/lib eliminates 2.3GiB. /rescue (statically linked programs for recovery if rtld is broken) add another 15 MiB and are completely pointless in a container (if the contents of a container is broken, regenerate it). All of these things can be turned off with build flags and in PkgBase are separate packages, so it would be quite easy to provide a minimal container image.
                        1. 5

                          Can you add something to the website on how to send patches to update the website itself? I would like to add MSVC resources for example :)

                          1. 1

                            Yes @fcambus, if there is a way for others to send you patches or update the content like in a wiki – it would be great. Your list is useful as it kind of creates a quick overview of the tooling available.

                            I would add the JDK/Gradle toolchain, but would indicate that for, say, NetBSD 9.x this toolchain unfortunately does not work.

                          1. 12

                            Another nice project with an S3 compatible API is seaweedfs (née weedfs): https://github.com/chrislusf/seaweedfs, inspired by the haystack paper (from FB, when FB had around 1-2K photo uploads per second); we use it in production (albeit not in distributed mode). A lightning talk I did a few months ago: https://github.com/miku/haystack

                            1. 1

                              If you do not mind, a question – did you find any solutions that are based on JDK 11+ (Java/Clojure/Scala, etc.)? I am looking for an open-source file store lib, but I would like it to be compatible with the JVM ecosystem.

                              1. 1

                                Interesting, I’d assume a JVM ecosystem would permit non-JVM code. Is it a JVM client library you want?

                                1. 1

                                  Not the OP, but I’ve heard that some banks will refuse to deploy any code that doesn’t run on the JVM.

                                  1. 1

                                    Wow, do you have perhaps an example, or country of a possible example?

                                    I know crux db is on the JVM, and they can use and even encourage their object store to be on Kafka (famously JVM)

                                    1. 1

                                      Unfortunately, no. This was just word-of-mouth from people in adjacent businesses so feel free to take it with a grain of salt.

                                      The general contour of reasoning was that with security being a top concern, they prefer to deploy code in ways that are familiar.

                                  2. 1

                                    Thank you for the follow-ups. I would like the whole service to be packageable and accessible as a JAR that I can incorporate in our ‘uber’ JAR.

                                    The backend I am working on has, as one of its ‘features’, a simple deployment. In translation, that means a single JAR + PostgreSQL.

                                    The single JAR contains about 20 ‘microservices’, essentially. So a user can start up just one JAR and have everything on ‘one host’, or start the JAR with config params telling it which microservices to start on which host. That configuration file is like a ‘static’ service orchestrator. It is the same for all the hosts, but there are sections for each host in the deployment.

                                    One of the microservices (or, easier, just ‘services’) I am going to be enhancing is a ‘content server’. Today the content service basically needs a ‘backend-accessible-directory’.

                                    That service does all the other things: administering the content, acting as an IAM Policy Enforcement Point, caching content in a memory-mapped db (if a particular content is determined to be needed ‘often’), a non-trivial hierarchical directory management to ensure that too many files do not end up in ‘one folder’, and so on.

                                    I need to support features where files are in a ‘remote content server’ (rather than in a locally accessible directory), where the content server is S3 (or some other standards-compatible system). So I would like the ‘content server’ to be included as a 21st service in that single JAR.

                                    Which is why I am not just looking for a client, but for the actual server to be compatible with JVMs. Hope the explanation gives more color to the question I asked.


                                    With regards to the other comment where folks mention that some organizations like banks prefer JVM-only code: that’s true to a degree; it is a preference, though, not an absolute requirement.

                                    That’s because some of these organizations have built their own ‘pre-Docker’ deployment infrastructures, where it is easy to request ‘production capacity’ as long as the deployed backend is a JAR (because those deployment infrastructures are basically JVM clusters that support migrations, software-defined networks, load balancing, password vaults, monitoring, etc.).

                                    So when a vendor (or even an internal team) comes in and says “our solution runs on Docker”, it is OK, but these organizations have invested millions… and now want to continue to get benefits (internal payment, basically) from their self-built infrastructure management tools… Which is why there is a preference for JVM-only solutions and, perhaps, will be for some time.

                                    And to be honest, the JVM (and JVM-based languages) and their tools ecosystem continue to evolve (security, code analysis, performance, etc.) – it seems that the decisions back then to invest in managed infrastructure around the JVM were good ones.

                              1. 2

                                I was curious to see if this was built on top of GTK, but it seems like desktop Linux/BSD is not a target at all.

                                1. 3

                                  So much for “Write cross-platform apps in XAML and C#, from a single shared code-base”.

                                  1. 2

                                     https://docs.microsoft.com/en-us/dotnet/maui/supported-platforms mentions that Linux support is provided by the community. Maybe somebody else can comment on the quality. Honestly, it feels like a lost opportunity/oversight if I’m generous and, if not, it just goes to highlight that not much has actually changed about M$ w.r.t. Linux, i.e. do the bare minimum to make money off/with it but not more.

                                    1. 7

                                      It’s difficult to support ‘Linux’ because there’s no ‘Linux’ GUI toolkit. It’s possible to support Android. It’s possible to support GTK. It’s possible to support Qt. If you support GTK, the KDE folks will hate you. If you support Qt, the GNOME folks will hate you. In both cases, you’re taking dependencies on LGPL’d things and everyone shipping the code needs to understand the legal obligations that this comes with (the cross-platform bits are all MIT licensed so the legal obligations are trivial to comply with).

                                      If the community maintains GTK or Qt (or both) integrations then anyone wanting to use them is picking up an extra dependency (the community-maintained version) and needs to do the same due diligence on licensing that they’d need to for any other dependency.

                                      Personally, I’d love to see something based on Skia added for a completely portable MIT-licensed GUI toolkit but it’s not something I have time to work on.

                                      1. 1

                                        see my comment above, it seems that they are moving in that direction. Microsoft.Maui.Graphics supports Linux via GTK and states that it implements SkiaSharp APIs.

                                         Although, since you work there, you probably have more of a preview than just my searches.

                                        1. 3

                                          see my comment above, it seems that they are moving in that direction. Microsoft.Maui.Graphics supports Linux via GTK and states that it implements SkiaSharp APIs

                                           The links you found make it look as if they’re moving towards a pure .NET widget set. Swing did that and it was awful for a couple of reasons. The first is that Java was painfully slow 20 years ago. The CLR is a lot better than a 20-year-old JVM and computers are a lot faster, so that’s not a problem. The second is common with things like Qt that do the same: your apps don’t feel native unless you put in a lot of work. For example, on macOS it took 15 years for Qt to use the same keyboard shortcuts for navigation in a text field as every other application on the system. NSTextView automatically integrates with text services so I can, for example, hit a shortcut key in any NSTextView with rich text enabled and have a LaTeX math expression replaced with a PDF rendering of it (which embeds the source in the PDF metadata so I can hit another key combination to get back the original). There’s a huge amount of plumbing required to get that kind of thing right and you have to redo a lot of it if the system APIs change.

                                           For X11/Wayland, the UIs are so inconsistent that it probably doesn’t matter. For iOS / macOS, it’s really noticeable when things bring their own widget set (or even don’t properly port their UI; for example, command-shift-z is redo on every Mac application, except on Office where it’s command-y). For Windows / Android it’s mildly annoying but there’s already quite a bit of inconsistency.

                                           Although, since you work there, you probably have more of a preview than just my searches

                                          I try to meet up with some of the .NET folks when I’m in Redmond to get their perspective on Verona, but I haven’t managed to visit for two years because of the pandemic. I’m not working on anything in the .NET space so I don’t know anything that isn’t public.

                                        2. 1

                                          Personally, I’d love to see something based on Skia added for a completely portable MIT-licensed GUI toolkit but it’s not something I have time to work on.

                                          I haven’t looked but I’d assume that’s what Flutter uses across the board, incl. the recently announced Linux implementation.

                                          1. 1

                                            Flutter, unfortunately, is tightly coupled with Dart. This is great if you want to use Dart, but it means that you don’t have a language-agnostic toolkit and so doesn’t look like something that MAUI could use.

                                        3. 2

                                          There is

                                          https://github.com/dotnet/Microsoft.Maui.Graphics

                                          “…. Microsoft.Maui.Graphics is a cross-platform graphics library for iOS, Android, Windows, macOS, Tizen and Linux completely in C#. With this library you can use a common API to target multiple abstractions allowing you to share your drawing code between platforms, or mix and match graphics implementations within a singular application.

                                          …”

                                          Even now (de-prioritized ?) Tizen is there.

                                           Linux seems to be supported via GTK.

                                           This is a graphics-only library (meaning canvas + fonts + PDF), not a library of GUI controls. The graphics library implements Mono’s SkiaSharp APIs. (Notably, Mono’s SkiaSharp itself does not appear to support Linux, at least from its readme.) So MAUI’s support for Linux graphics primitives is moving in the right direction, compared to the previous SkiaSharp.

                                          There is an experimental project on top of Microsoft.Maui.Graphics, that aims to build Controls for all the operating systems:

                                          https://github.com/dotnet/Microsoft.Maui.Graphics.Controls

                                          However, this project does not have anything in its https://github.com/dotnet/Microsoft.Maui.Graphics.Controls/tree/main/src/GraphicsControls/Platform for Linux, yet.

I certainly would hope that the .NET ecosystem would support Linux and the three BSDs as first-class citizens. MAUI’s summary emphasizes support for various sensors, and I think Linux and a couple of the BSDs have a reasonably noticeable presence in the device-and-sensors space.

Also, I wish MS would have a consolidated, easy-to-consume strategy for all of their cross-platform UI efforts (graphics, controls, etc.) – right now it is really difficult to grok.

                                    1. 1

                                      Thank you for sharing.

What kind of layout engine does it have? Can I change the layout (not the code) based on screen size (e.g. when the user resizes the window, on screen rotation, etc.)? Is there a notion of horizontal/vertical and left/center/end layout primitives?

                                      1. 3

                                        The library is built on top of racket’s built-in racket/gui lib so it shares the same layout management. It is possible to control things like alignment, stretch and size and any view that supports setting those properties also supports having them passed in as observables (so, for example, you could make an observable that tracks the current window size then derive values to control the layout of things based on it).
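
The observable-driven layout idea described above can be sketched generically. This is a hypothetical Python analogue, not the actual racket/gui-easy API: a tiny Observable class plus a derived value that tracks window size and controls layout.

```python
# Minimal sketch of the observable/derived-value idea described above.
# This is NOT the racket/gui-easy API, just a hypothetical Python analogue.

class Observable:
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    @property
    def value(self):
        return self._value

    def set(self, value):
        self._value = value
        for fn in self._subscribers:
            fn(value)

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def derive(self, fn):
        # A derived observable that recomputes whenever the source changes.
        derived = Observable(fn(self._value))
        self.subscribe(lambda v: derived.set(fn(v)))
        return derived

# Track window width; derive a layout orientation from it.
window_width = Observable(1200)
orientation = window_width.derive(
    lambda w: "horizontal" if w >= 800 else "vertical"
)

window_width.set(500)     # simulate a resize
print(orientation.value)  # -> vertical
```

A view that accepts its alignment or stretch as an observable would subscribe exactly like the derived value does here.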

                                        1. 1

ah, thank you. I constrained my search to your docs and did not realize the above. Will study this.

                                      1. 10

I think an important direction for future programming language development is better support for writing single programs that span multiple nodes. It’s been done, e.g. in Erlang, but it would be nice to see tighter integration of network protocols into programming languages, or languages that can readily accommodate libraries that do this without a lot of fuss.

                                        There’s still some utility in IDLs like protobufs/capnproto in that realistically the whole world isn’t going to converge on one language any time soon, so having a nice way of describing an interface in a way that’s portable across languages is important for some use cases. But today we write a lot of plumbing code that we probably shouldn’t need to.
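
The plumbing an IDL removes can be sketched in miniature: describe the message once, then derive both serialization and validation from that single description instead of hand-writing them per type. All names here are hypothetical, and JSON stands in for a real wire format.

```python
# Hedged sketch of what an IDL like protobuf/capnproto buys you: one
# schema, from which the wire handling is derived. Hypothetical names;
# not any real library's API.
import json
from dataclasses import dataclass, asdict, fields

@dataclass
class AddRequest:
    a: int
    b: int

def encode(msg) -> bytes:
    # Derive the wire format from the dataclass definition itself,
    # instead of hand-writing serialization per message type.
    return json.dumps(asdict(msg)).encode()

def decode(cls, data: bytes):
    raw = json.loads(data.decode())
    # Only accept the fields the schema declares.
    return cls(**{f.name: raw[f.name] for f in fields(cls)})

wire = encode(AddRequest(a=2, b=3))
req = decode(AddRequest, wire)
print(req.a + req.b)  # -> 5
```

A real IDL additionally generates this for every language in the fleet, which is exactly the cross-language portability the comment describes.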

                                        1. 3

I couldn’t agree more. Some sort of language feature or DSL would let you have your services architecture without paying quite so many of the costs for it.

A compiler with visibility across service boundaries could do a lot:

• Type-checking cross-node calls.
• Service fusion, i.e. co-locating services that communicate with each other on the same node to eliminate network traffic where possible.
• RPC inlining: at my company we have RPC calls that amount to pure CPU work, but they run on different machines and live in different repos because they’re written by different teams; if the compiler had access to that information it could eliminate that boundary.
• Something like a query planner for complex RPCs that decay into many other backend RPC calls: we pass object IDs between services, but many of those services need the data behind the same underlying objects, so they all go out to the data access layer to look up the same objects.

Some of that could be done by ops teams with implementation knowledge, but in our case those implementations change all the time, so any manual optimization would be out of date by the time the ops team had figured out what’s going on under the hood. There’s a lot that a Sufficiently Smart Compiler(tm) could do given all of that information.
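
The service fusion / RPC inlining idea can be sketched as a dispatcher that makes a plain in-process call when the target service is co-located and only pays the network cost otherwise. All names here are hypothetical.

```python
# Sketch of "service fusion / RPC inlining": call in-process when the
# service is co-located, fall back to the network when it is not.

class Node:
    """One machine; services deployed here can call each other directly."""
    def __init__(self):
        self.local_services = {}
        self.network_calls = 0

    def deploy(self, name, fn):
        self.local_services[name] = fn

    def call(self, name, *args):
        if name in self.local_services:
            return self.local_services[name](*args)   # inlined: plain call
        self.network_calls += 1
        return remote_call(name, *args)               # fall back to RPC

def remote_call(name, *args):
    # Stand-in for a real network RPC layer.
    registry = {"pricing.quote": lambda sku: 42}
    return registry[name](*args)

node = Node()
node.deploy("cart.total", lambda prices: sum(prices))

print(node.call("cart.total", [1, 2, 3]))   # local: no network hop
print(node.call("pricing.quote", "sku-1"))  # remote: goes over the wire
print(node.network_calls)                   # -> 1
```

The compiler-driven version would make this placement decision from whole-system knowledge rather than a runtime registry lookup.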

                                          1. 3

There is also a view that it is a function of the underlying OS (not a particular programming language) to seamlessly provide ‘resources’ (e.g. memory, CPU, scheduling) across networked nodes.

This view is sometimes called a Single System Image OS (I briefly discussed that angle in that thread as well).

Overall, I agree, of course, that creating safe, efficient and horizontally scalable programs should be much easier.

Hardware is going to continue to drive horizontal scalability (whether via multiple cores, multiple nodes, or multiple video/network cards).

                                            1. 2

I was tempted to add some specifics about projects/ideas I thought were promising, but I’m kinda glad I didn’t, since everybody’s chimed in with stuff they’re excited about and there’s a pretty wide range. Some of these I knew about, others I didn’t, and this turned out to be way more interesting than if it had been about one thing!

                                              1. 2

                                                Yes, but: you need to avoid the mistakes of earlier attempts to do this, like CORBA, Java RMI, DistributedObjects, etc. A remote call is not the same as an in-process call, for all the reasons called out in the famous Fallacies Of Distributed Computing list. Earlier systems tried to shove that inconvenient truth under the rug, with the result that ugly things happened at runtime.

                                                On the other hand, Erlang has of course been doing this well for a while.

I think we’re in better shape to deal with this now, thanks to all the recent work languages have been doing to provide async calls, Erlang-style channels, actors, and better error handling through effect systems. (Shout out to Rust, Swift and Pony!)

                                                1. 2

                                                  Yep! I’m encouraged by signs that we as a field have learned our lesson. See also: https://capnproto.org/rpc.html#distributed-objects

                                                  1. 1

                                                    Cap’nProto is already on my long list of stuff to get into…

                                                2. 2

                                                  Great comment, yes, I completely agree.

This is linked from the article, but just in case you didn’t see it, http://catern.com/list_singledist.html lists a few attempts at exactly that. Including my own http://catern.com/caternetes.html

                                                  1. 2

This is what work like Spritely Goblins is hoping to push forward.

                                                    1. 1

“I think an important direction for future programming language development is better support for writing single programs that span multiple nodes.”

                                                      Yes!

I think the model with the most potential is something close to tuple spaces. That is, leaning in to the constraints, rather than trying to paper over them or prop up anachronistic models of computation.
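
The tuple-space model (in the spirit of Linda) can be sketched in a few lines: processes coordinate by putting tuples into a shared space and taking out ones that match a pattern, with no direct addressing of other nodes. This is a toy single-process version for illustration only.

```python
# Minimal tuple-space sketch: None in a pattern acts as a wildcard.
# A real implementation would make take() block and span the network.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def put(self, tup):
        self.tuples.append(tup)

    def _matches(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup)
        )

    def take(self, pattern):
        # Remove and return the first matching tuple, or None.
        for i, tup in enumerate(self.tuples):
            if self._matches(pattern, tup):
                return self.tuples.pop(i)
        return None

space = TupleSpace()
space.put(("job", 1, "render"))
space.put(("job", 2, "encode"))

# A worker asks for any "job" tuple, regardless of id or kind.
job = space.take(("job", None, None))
print(job)  # -> ('job', 1, 'render')
```

The appeal for distributed programs is that producers and consumers never name each other, so placement and failure handling stay outside the application logic.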

                                                      1. 1

“better support for writing single programs that span multiple nodes.”

                                                        That’s one of the goals of Objective-S. Well, not really a specific goal, but more a result of the overall goal of generalising to components and connectors. And components can certainly be whole programs, and connectors can certainly be various forms of IPC.

                                                        Having support for node-spanning programs also illustrates the need to go beyond the current call/return focus in our programming languages. As long as the only linguistically supported way for two components to talk to each other is a procedure call, the only way to do IPC is transparent RPCs. And we all know how well that turned out.

                                                        1. 1

                                                          indeed! Stuff like https://www.unisonweb.org/ looks promising.

                                                        1. 4

                                                          It hurts me to see how much C++ neglects the problem of compile time reflection, even though it’s absolutely crucial in:

                                                          • Entity component systems (ECS), e.g. for video games and 3D engines
                                                          • Marshalling/unmarshalling of data packets
                                                          • Serialization/deserialization of JSON, XML, etc.
                                                          • Storing and loading objects to and from databases
                                                          • Printing or dumping object contents, e.g. for debugging purposes

And every nontrivial application will grow to do at least one of these things. It’s just such a pain… I ended up writing a set of C++98-compliant macros (I need to support ancient compilers) that are minimally obscure and give me a powerful reflection API. You have to wrap the class/struct member declarations in a macro, but it’s rather painless. Since then I haven’t used anything else. But since I’ve learned Rust, the ugly kludges required all day every day to implement ordinary C++ applications give me headaches, and I’ve started pivoting away from C++ wherever possible.
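
What reflection buys can be shown in Python, which has it built in at runtime: one generic function serializes or dumps any annotated record type, instead of one hand-written function per class. The per-class macros described above are essentially the C++98 workaround for lacking this. Names here are illustrative only.

```python
# Reflection-driven serialization and debug dumping: fields() enumerates
# members without knowing the concrete type in advance, which is the
# capability the C++ macros above have to emulate by hand.
import json
from dataclasses import dataclass, fields

@dataclass
class Player:
    name: str
    score: int

def to_json(obj) -> str:
    # Works for ANY dataclass, not just Player.
    return json.dumps({f.name: getattr(obj, f.name) for f in fields(obj)})

def debug_dump(obj) -> str:
    return ", ".join(f"{f.name}={getattr(obj, f.name)!r}" for f in fields(obj))

p = Player(name="ada", score=3)
print(to_json(p))     # -> {"name": "ada", "score": 3}
print(debug_dump(p))  # -> name='ada', score=3
```

The same two generic functions cover most of the bullet list above: marshalling, DB persistence, and debug printing all reduce to "enumerate the members".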

                                                          1. 3

C++ hasn’t neglected compile-time reflection; it’s just really hard to get right, and once you’ve defined the interfaces for it you’re stuck with them basically forever. It’s likely to be part of C++23; the reflection TS has been gradually approaching convergence for the last few years.

                                                            1. 1

                                                              The first C++ standard was released in 1998. We now speculate that compile time reflection will be part of the standard in 2023. That’s a quarter century later than it should have been made part of the language. During this time, the C++ committee introduced a myriad of less useful language features. Yes, that is neglect. Do you know how long it will take for this feature to be supported on embedded devices whose compilers notoriously lag behind? Maybe in 2030 or 2040 I will be able to use compile time reflection in a real product.

                                                              1. 1

                                                                Looking forward to it. I love the kind of stuff you can do in Zig — like writing a “for” loop over the fields of a struct. So intuitive!

                                                                Are there any articles that describe the C++ reflection proposals in layman’s terms?

                                                                1. 1

There is a 4-year-old article titled ‘Reflections on C++ reflection proposals’.

                                                                  I like the title quite a bit :-)

                                                              2. 1

“You have to wrap the class/struct member declarations into a macro”

Yep, I have been doing the same. I am thinking of doing ‘an upgrade’ (for my ’09 code) to use constexpr now (although I also like the approach proposed in [1] with CMake).

I would also add to your list of critical ‘capabilities’ of a modern programming language where reflection is needed: support for declarative and reactive GUIs.

Without a reflection feature, every GUI library developer brings their own.

Mature languages like C++ need something like an --error-on-undefined-behavior-features compile-time switch, such that when it is used, the compiler will error out on usage of library calls, idioms, or language features that do not have well-defined standard behavior.

This switch should also determine whether a given precompiled ‘module’ can be used or not.

This way, maybe we could get the language standard and its implementations to evolve a little faster. And community usage of new features would grow faster.

                                                                [1] https://onqtam.com/programming/2017-09-02-simple-cpp-reflection-with-cmake/

                                                              1. 3

Out of the list of ‘good ideas’ in the article, I did not see one describing ‘functional buffers’.

Meaning that reads and writes to/from the central database happen ‘occasionally’, not every time clients move the data. Writes are batched and then ‘synced up’ in micro-batches (at intervals of several seconds). Reads are cached in local data stores (something like LMDB) or other ‘smart disk caching’ alternatives, and most data is read from there. In-memory caches on the backend servers cache within limits as well.

This particular approach works when read data can be ‘eventually consistent’ for some operations, and writes do not need to be immediate.

Perhaps this idea is ‘somewhat’ reflected in the ‘active/active’ paragraph in the article. But my preference in designs/architectures is building a ‘custom’ buffering layer, rather than asking an active/active distributed system to sync up the data, because not all ‘business functions’ fit into that model.

I tend to employ the above approach in the mobile apps world, where not all user actions are ‘sent immediately’ to the backend, but are instead stored locally and then synced up periodically. This also helps if a particular user action cancels an earlier one.
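
The buffering pattern described above can be sketched as a small write buffer: writes accumulate locally with last-write-wins semantics, a later action can cancel an earlier buffered one before it ever reaches the backend, and flushes happen in micro-batches. All names are hypothetical.

```python
# Sketch of a 'functional buffer': batch writes locally, flush in
# micro-batches, allow cancellation of not-yet-synced actions.
import time

class WriteBuffer:
    def __init__(self, flush_fn, interval_s=5.0):
        self.flush_fn = flush_fn      # e.g. a bulk upsert against the DB
        self.interval_s = interval_s
        self.pending = {}             # key -> latest value wins
        self.last_flush = time.monotonic()

    def write(self, key, value):
        self.pending[key] = value     # a newer write supersedes the older one
        self.maybe_flush()

    def cancel(self, key):
        self.pending.pop(key, None)   # never reaches the backend at all

    def maybe_flush(self, force=False):
        if force or time.monotonic() - self.last_flush >= self.interval_s:
            if self.pending:
                self.flush_fn(dict(self.pending))
                self.pending.clear()
            self.last_flush = time.monotonic()

batches = []
buf = WriteBuffer(flush_fn=batches.append, interval_s=5.0)
buf.write("cart:1", {"qty": 2})
buf.write("cart:1", {"qty": 3})   # supersedes the first write
buf.write("cart:2", {"qty": 1})
buf.cancel("cart:2")              # user undid the action before any sync
buf.maybe_flush(force=True)
print(batches)  # -> [{'cart:1': {'qty': 3}}]
```

Note how three user actions collapse into a single one-row batch: that collapse is where both the bandwidth savings and the 'cancel before sync' behavior come from.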


                                                                Furthermore,

To optimize ‘hardware affinity to the database engine’, I gravitate (for large systems) toward never using database triggers, ‘percolations’, or anything like that. With that, if possible, I prefer a design where at any particular point in time only one particular type of database access is happening across all the clients (although this is really hard or impossible to sustain across a whole 24-hour period).


Finally, I tend to avoid designing a system where a write-heavy database is deployed in any form of container (Docker/BSD jails/etc.). My understanding of the I/O scheduling and hardware-sharing challenges could be outdated, though, so I could be wrong on this.

                                                                1. 4

Yeah, I like this in-memory trick and the other stuff you said. :) A lot of people are chiming in that this info is out of date. I’m really interested in “modern tooling”, but then the suggestions have trade-offs (eventual consistency, complexity, systems of systems, NoSQL). They aren’t wrong. The active-active Sun horror story is from 2004, and that was peak context for me before switching to dev. So that’s good insight from those who are saying this is old.

But I think the CAP theorem survives forever. Alternatives have trade-offs. You can’t have it all. Traditional RDBMSs are everywhere and have many benefits. This is probably a good default for juniors, interns, and early projects.

It’s hard to have a pithy title that isn’t clickbait but still represents my chunking/motto. I’m not dunking on databases. I’m just saying it’s really weird. This is a great way to explain this to interns and juniors. We have to be CAREFUL because of state, etc.

                                                                  1. 5

You are doing the hard part: writing a cohesive article, opening it up for criticism, collecting feedback and asking follow-up questions. This type of process is an essential part of advancement in any field.

You are right about trade-offs. There are databases (I forget which) that recognize the different trade-offs and offer developers the ability to specify trade-off choices on a per-connection basis.

You are also right that decisions around data consistency, change propagation velocity, etc. have a significant (often deciding) impact on the overall architecture of a solution. Splitting database solution engineering out into a separate ‘isolated’ task rarely works (it works for cookie-cutter projects with little to no variability in data volume and access patterns).

                                                                1. 3

                                                                  Only in COBOL

                                                                  Ada VLAs store their own bounds, are bounds checked and can be constrained in size by the index type size. If you overflow, you’ll get a Storage_Error. That seems safe to me.
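
The well-defined-failure idea has an analogue in most memory-safe languages: out-of-range access raises a defined exception (like Ada's Storage_Error or Constraint_Error) rather than corrupting memory. A trivial Python sketch of handling that defined condition:

```python
# Out-of-range access in Python raises IndexError, a defined condition
# we can handle, never undefined behavior. (Sketch assumes non-negative
# indices; Python's negative indices wrap rather than raise.)

def element_at(xs, i):
    try:
        return xs[i]
    except IndexError:
        # The language guarantees we land here on overflow of the bounds.
        return None

xs = [10, 20, 30]
print(element_at(xs, 1))   # -> 20
print(element_at(xs, 99))  # -> None
```

The point is the same as with Ada's checked VLAs: the failure mode is part of the language contract, so callers can program against it.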

                                                                  1. 1

Yes, I also think this is reasonable. A well-defined failure mode is a way for a runtime to announce an unsolvable condition.

I believe, but may be mistaken, that Pascal and its derivatives like Modula-2 and maybe Delphi did something similar.

                                                                    I wonder though if more could be done by a compiler or by a programmer.

For example, an executable or a shared object could communicate to the OS, at program load time, the maximum stack size it will need, or other ‘capabilities’ (e.g. whether it uses less secure runtime calls, etc.).

This way an OS could examine it, compare it to, say, ‘ulimits’ and other constraints, and declare right at load time that a given executable or DLL will not work.

In general, moving potential run-time errors closer to execution start/load time, in my view, gives a better user experience and would allow the OS subsystems that deal with security, scheduling and memory management to be more ‘aware’ of the program’s intentions.
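
The load-time check described above can be sketched as a pure policy function: the executable declares its needs in a manifest, and the loader compares them against the current limits before running anything. The manifest format and all names are hypothetical; a real design might embed this in an ELF note or similar.

```python
# Sketch of a load-time capability check: refuse to load up front
# instead of failing mid-execution later. Hypothetical manifest format.

MANIFEST = {
    "max_stack_bytes": 8 * 1024 * 1024,        # program declares 8 MiB
    "needs_insecure_runtime_calls": False,
}

def can_load(manifest, stack_limit_bytes, allow_insecure=False):
    """Return (ok, reason) so the loader can report WHY it refused."""
    if manifest["max_stack_bytes"] > stack_limit_bytes:
        return False, "declared stack need exceeds the current limit"
    if manifest["needs_insecure_runtime_calls"] and not allow_insecure:
        return False, "program uses runtime calls this policy forbids"
    return True, "ok"

ok, reason = can_load(MANIFEST, stack_limit_bytes=1 * 1024 * 1024)
print(ok, reason)  # -> False declared stack need exceeds the current limit
```

On a Unix system the loader side could fill in `stack_limit_bytes` from the process rlimits; the interesting part is that the refusal happens before any program code runs.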

                                                                  1. 1

very nice. A good number of use cases for this come to mind:

I tend to use different compilers/databases/OS targets to clean up and sharpen interfaces, installation procedures, dependencies, etc., even if the particular commercial deliverables target just one OS/compiler/DB combination.

This is also an interesting platform for ‘holistic’ benchmarking. Differences in benchmarks of the same program on the same hardware for Chimera vs., say, Linux may be at least directionally indicative of where the particular bottlenecks are.

Another tangential comparison: Android is a type of ‘chimera’ OS (Linux core + a very different userland). So the work that goes into Leah’s effort fundamentally enables these kinds of separations, and could some time in the future enable projects that create a ‘device-centric’ userland while keeping the core rooted in a particular kernel.

                                                                    1. 3

Not directly related to this module, but it has a dependency on immutable.js.

At one point in time immutable.js was no longer actively maintained [1] (and I took some pains to switch away, as I was using it very lightly and it was quite big for our app’s kilobyte budget).

But it looks like somebody has picked up maintenance of this big but otherwise wonderful library.

                                                                      [1] https://github.com/immutable-js/immutable-js/issues/1689

                                                                      1. 1

                                                                        List of tasks for 2021: https://ioi2021.sg/ioi-2021-tasks/

                                                                        Lists of tasks (problems) for each of the previous 32 years: https://ioinformatics.org/page/contests/10

                                                                        Hall of fame: https://stats.ioinformatics.org/halloffame/

                                                                        1. 1

                                                                          Thanks for sharing.

Would be interested to know why Redux vs. React.Context (in my case, by the time I needed app-scope state sharing, React.Context was already available, so I never got a chance to leverage Redux).

I also use ReactRouter, primarily because it has integration with React Native. It is elegant, in my view. For more complex styling I use React-Bootstrap.

But overall, I also tend to pick components that can work with React Native and React Native Web, so that I can reduce the ‘cognitive load’ required to maintain web and mobile end-user apps for the same system.

Maybe this is outside your scope, but it would be interesting to understand whether the proposed directory structure can be extended to handle code sharing between React Native (mobile) and web apps (e.g. sharing some navigation logic and forms, while having different styling). In my experience this is somewhat ‘non-obvious’ (but I also concede that I might not have solved it in the best way).

                                                                          1. 3

                                                                            This is the state of the art.

                                                                            I find it amazing how well abstracted the capability idea is and how it fits into the system, visible in the system composer GUI.

                                                                            1. 2

Indeed, it seems like they are getting ready to offer ‘ROD’ (Rapid OS Development; I derived it from the old term RAD, Rapid Application Development) :-).

                                                                              I can imagine a ‘wizard’ like:

                                                                              • Do you want a Mobile OS ? - Yes
                                                                              • Pick mobile base band: LTE
                                                                              • Do you want an AppStore in your OS: Yes
                                                                              • Do you want Bluetooth support: Yes
                                                                              • Do you want ScreenShare with Windows: Yes
                                                                              • Touch Screen or Keyboard/Mouse? : Touch Screen

                                                                              and so on…

Once you finish selecting options, you get the OS you ordered!

                                                                            1. 22

I used to be a Gemini enthusiast before it hit me: if this is really a content-focused markup language, where the client gets to decide how to present the content, then why must I use ASCII art (or worse, a separate CSV file) to display an inline table? ASCII tables are terrible for screens that are either short or wide, don’t have line wrapping, don’t have column alignment, and have absolutely abysmal accessibility.

                                                                              Not to mention that typography styles that have existed for positively centuries (italics, bolding) aren’t included in the .gmi format.

Is this really the price we have to pay for a simpler, document-focused version of the web? Did the Gemini creators really have to leave features out just to ensure that a “basic but usable” client could be written in less than 50 lines of code? 50 lines of code? Really?

                                                                              tl;dr Gemini took its goal of “simplicity” (for some definition of “simplicity”) way, way too far.

                                                                              1. 7

There are quite a few things that interest me and that I write about; let’s take three: programming, math, music. Only the first one is possible on Gemini: I can post raw, un-highlighted code snippets. I cannot post formulas, sound, or score snippets. So I’m not using Gemini. ;)

What if we could re-introduce the plugin system in a safe way? Could a hypothetical FireChrome browser VM automatically download a MusicXML snippet-rendering plugin from my site when a user visits it, and ask the user if they want to run it (license: GPL, signed by dmbaturin’s key, wants access to: current page)? That’s something I’d be happy to use, something that gives me and visitors more opportunities, not fewer.

                                                                                1. 1

Safely downloading rendering plugins associated with the content is certainly a possible approach, but perhaps somewhat challenging to build given the variety of devices.

What if those rendering plugins you mentioned above became, in a way, part of the protocol, helping to ‘convert’ the source content into something best suited for the target device?

Given that any viewing device that hosts a browser already runs a sophisticated OS with windowing systems, layout managers, etc., why not leverage that instead of building those into a browser?

So the transpiler in the protocol (at the source) would know how to convert the ‘markup’, ‘layout’ and ‘accessibility’ definitions of the source into the destination rendering subsystem (which, underneath, leverages the target OS’s UI subsystem).

The transpiler, underneath, would leverage a number of plugins that understand the source and know how to transform it into a representation consumable by the target device.

I realize that a web browser does this, in a way. But maybe separating the markup/layout/accessibility converters out of the browser and into a ‘protocol’ would simplify creating web browsers (since the approach would leverage the OS-native UI framework), while maintaining the functional capabilities of richer rendering models.

With the same approach, the ‘active’ part of the content (that is, JS) could perhaps be transpiled into the ‘actionable’ language of the target OS (rather than into WebAssembly, unless the target OS asks for WebAssembly).

Audio streaming works this way: the source content gets translated into a representation best suited for a given client device before being sent over.

                                                                                  1. 3

I remembered that about half a year ago I made a rough draft of a spec for a “post-web” system that adopts good ideas from Gopher and forgotten web technologies and extends them. In particular, it has machine-readable resource maps, and the markup language is extensible with plugins that resemble “user scripts on steroids”: element-tree transformers.

                                                                                    If anyone is interested to read it: https://gist.github.com/dmbaturin/211e1a8a7e69ea1899f98f3b2010c7c3

                                                                                    1. 1

Is this inspired by the “indieweb” inclusion of specific (X)HTML snippets to denote things like addresses, friend relationships, etc.? Edit: they’re called microformats.

                                                                                      I first came in contact with this (well, RDF) in the early 2000s (c.f. https://en.wikipedia.org/wiki/FOAF_(ontology)) and I’ve seen it flare up now and again since then.

                                                                                      1. 1

That, but also the idea that XHTML would provide extensible DTDs, which was never used in practice.

                                                                                      2. 1

                                                                                        I like the “foreign transports” section, clever. I’d like to see this developed more.

                                                                                        1. 1

                                                                                          I can make an actual repository for that, if you want to make pull requests or have an extended discussion.

                                                                                          1. 1

                                                                                            Please do!

                                                                                              1. 1

                                                                                                Awesome thanks, I’ll check it out in the AM!