1.  

    OK, OK, great, nice, really helpful.

    But all of those tutorials about desktopping on *BSD lack a single convincing point, one which I don’t need (I use OpenBSD on the desktop more or less actively) but others would appreciate:

    How would such a BSD desktop solution appeal to a casual Ubuntu user who just clicks the “OK” button and gets on with things? I don’t want to disparage it or make it sound worse in any way; I’m just looking for points or features that would be attractive to people who use a mainstream Linux (Ubuntu, RHEL, CentOS, Fedora) on their work/private machines just to click things.

    The only thing like that I’ve seen was the “OpenBSD is not for you if…” paragraph in an OpenBSD desktop practices howto. But it’s actually the opposite of what I’m looking for :)

    1. 5

      How would such a BSD desktop solution appeal to a casual Ubuntu user who just clicks the “OK” button and gets on with things?

      I think we need to distinguish between what ‘desktop’ means for regular (non-IT) people and what it means for technical IT people.

      My guide is definitely for the second group; such a FreeBSD desktop is not suited to a regular user. NomadBSD may be suited that way, TrueOS Desktop may be suited that way, but definitely not such a ‘custom’ setup.

      I am sharing this knowledge because I have used FreeBSD on the ‘desktop’ for 15 years. When I first wanted a FreeBSD desktop it was not as easy a task as it is now, but it still requires some configuration, and that is what I wanted to share.

      Is CentOS/RHEL better suited for the ‘desktop’ than FreeBSD? It depends. Linux has the advantage that a lot of software supports these distributions out of the box, yet when you compare the freshness and count of packages between these system families, FreeBSD wins - https://repology.org/statistics/newest - you have to configure many additional repositories with CentOS/RHEL, like EPEL, while on FreeBSD you just type pkg install, so it’s more friendly here.

      CentOS/RHEL has a graphical installer in which you can select an X11 desktop to install, which is easier for less advanced users; that is the CentOS/RHEL advantage over FreeBSD. But if we compare things that way, OpenIndiana, an Illumos-based distribution, is even easier to install than CentOS/RHEL, as its installer is simpler than the CentOS/RHEL one ;)

      So it’s a long discussion without end, really :>

      1.  

        How would such a BSD desktop solution appeal to a casual Ubuntu user who just clicks the “OK” button and gets on with things?

        The real selling point is “fearless upgrades”. Pushing the upgrade button in Ubuntu feels like Russian roulette; you never know what’s going to break this time.

        ZFS is nice - RAID-like resilience, LVM-like convenience, and filesystem snapshotting for history/“undo” for the same amount of admin effort it would take to set up one of those things on Linux - but the biggest feature of BSD for me is more of an anti-feature: they just don’t keep randomly breaking everything.

        1.  

          The real selling point is “fearless upgrades”. Pushing the upgrade button in Ubuntu feels like Russian roulette; you never know what’s going to break this time.

          A somewhat relevant data point: the Fedora folks have been working for a while on Atomic Workstation, now Team Silverblue. It uses OSTree for atomic updates/downgrades. You pretty much boot into an OS version, similar to FreeBSD boot environments (of course, the implementation is very different). The idea is to use Flatpak for installing applications, though you can still layer RPMs with rpm-ostree.

          It is probably not a solution for a tech user’s desktop, but it seems interesting for the ‘average’ user in that it provides updates that don’t fail when you yank out the plug in the middle of an update, and it offers rollbacks. The OS itself is immutable (which protects against certain kinds of malware) and applications are sandboxed by Flatpak.

          ZFS is nice - RAID-like resilience, LVM-like convenience, and filesystem snapshotting for history/“undo” for the same amount of admin effort it would take to set up one of those things on Linux

          Ubuntu also supports ZFS out of the box. With some work, you can also do ZFS on root.

          but the biggest feature of BSD for me is more of an anti-feature: they just don’t keep randomly breaking everything.

          I think this is the biggest selling point for BSD. I gave up on Ubuntu for my personal machines a long time ago. Stuff breaks all the time, and Ubuntu/Debian/etc. are so opaque that it takes a long time to get to the bottom of a problem. Arch Linux is a reasonable compromise: stuff breaks sometimes due to it being a rolling release, but at least it’s fairly clear where to look. Moreover, the turnaround time for reports/patches submitted upstream to trickle down to Arch is pretty short.

          But I would switch back to BSD in a heartbeat if there were good out-of-the-box support for amdgpu, Intel MKL, CUDA, etc. But apparently (I haven’t verified this) the Linux amdgpu tree has more lines of code than the OpenBSD kernel.

          1.  

            In order to be able to easily undelete files, I’ve set up zrepl to snapshot my system every 15 minutes. These snapshots expire after a while. In combination with boot environments, this means I can mess with my system without having to worry about breaking it: I can simply reset it quickly and easily. This is very convenient.

        1. 5

          Yawn. The answers are predictable (Linux tries to emulate Windows, Linux driver quality is bad), often incorrect (FreeBSD on mainframes… right) and not very insightful. FreeBSD currently has virtually no desktop market share compared to Windows, macOS, and Linux, because:

          • They have better support from software and hardware vendors.
          • They are easier to install and configure for the average person.
          • They are easier to use for the average person.
          • Most people don’t even know FreeBSD and don’t care about their OS.
          • Inertia.

          Of course, the more interesting question is why Linux became more popular than FreeBSD, despite FreeBSD having a more friendly license for commercial/proprietary use.

          1. 8

            Of course, the more interesting question is why Linux became more popular than FreeBSD, despite FreeBSD having a more friendly license for commercial/proprietary use.

            I think there’s a better answer to that on Server Fault. UC Berkeley was fighting off a lawsuit from AT&T over BSD, and by the time all of that was resolved Linux had already gotten off the ground and achieved sufficient popularity that the SCO lawsuit couldn’t stop its momentum.

            1. 2

              This is often used as one of the explanations. I am sure that it is one of the factors, but the lawsuit was already settled in 1994. I remember buying a FreeBSD 2.1.5 CD set in 1996, long after the lawsuit was settled. In 1996, Linux was still very primitive and a hobbyist thing. Slackware still reigned, SuSE had just moved from Slackware to Jurix as its base, and RPM did not even exist yet. I was surprised at the time how much better FreeBSD was - technically, its ports collection, the documentation, etc. Also, FreeBSD and BSD/OS were still much more popular on ‘serious’ servers at the time.

              I think there are other important (internal) factors. E.g., the development model (outside OpenBSD) favored long-running stable branches and only branching from -current every 2-4 years, whereas Linux distributions were always pushing the latest (except uneven kernel versions), allowing Linux to surpass the BSDs in driver support, etc. Also, the Linux distributions at the time already focused on a wider user base; e.g., Caldera and others had graphical installers near the end of the nineties. And since many distributions were commercial, they had more incentive to push Linux boxes into stores and to do marketing; e.g., local book stores in The Netherlands would carry Red Hat, SUSE, etc.

              1. 1

                Good points here.

                1. 1

                  the development model (outside OpenBSD) favored long-running stable branches and only branching from -current every 2-4 years, whereas Linux distributions were always pushing the latest (except uneven kernel versions), allowing Linux to surpass the BSDs in driver support, etc.

                  Both Richard Gabriel’s Worse is Better and entrepreneurs’ emphasis on execution over ideas/quality suggest that this strategy by itself could account for a lot of Linux’s momentum. Also, Caldera was the first distribution I used, since I could buy a CD with a graphical installer at Best Buy for $20.

              2. 1

                Citation needed. ;)

                While I see where you are going with this, I think those are at least partly myths. I have yet to see a person who can use Linux on the desktop on their own but cannot use FreeBSD or OpenBSD.

                While I hear these arguments over and over, I just don’t see them mapping to real life. When people start using OpenBSD or FreeBSD, they usually conclude that they had expected it to be a lot harder, because of these myths.

                Now I don’t want to say they are easy to install, but if you want to use one of these systems in day-to-day life, they are certainly more friendly than Debian or Arch Linux, for example. One might argue about the others, but really, the first half of a Windows install was about as hard as installing either Linux (aside from Gentoo, Arch, etc.) until fairly recently. I really do think that the effect of the initial installation is overrated.

                A bigger problem, of course, is support for recent hardware. How far FreeBSD lags behind with Intel graphics (OpenBSD and DragonFly do way better here) and its sometimes desktop-unfriendly defaults (changing a sysctl to make Chrome run correctly) are bigger issues. The first can be mitigated by using a recent Apple laptop or an older-generation ThinkPad, the second by using a “distribution”.

                I think a big reason is that all the commercial interest in BSD for end users was around PlayStations. There is no SuSE, no Red Hat, no Ubuntu - companies with at least some money and leverage to push their systems onto the desktop, even if just to produce future sysadmins who would sell their products for them.

                Right now I think the comparison that makes more sense is Arch Linux vs FreeBSD, simply because these are a lot more similar than Ubuntu, which won through a huge initial investment in tech, branding and marketing more than anything.

                Arch Linux and FreeBSD have a lot more in common - speaking purely about the desktop. They have somewhat technical users in mind, they are not backed by some big organization, they value certain forms of simplicity (not exactly the one OpenBSD has in mind, but still), they both have huge repositories of easy-to-install and very up-to-date software that can be installed from packages or from source, their users like to tune, configure and optimize, they keep packages close to upstream, etc.

                My best guess here would be things like Steam and other software that became available as Linux blobs, “made possible” by investments from various other companies which started out in the B2B field. Now one might ask why the BSDs don’t have strong companies in that sector, but at least to me it seems that the idea of using BSD outside of network infrastructure (routers, servers, etc.), and the need for something GPL-free (gaming consoles, etc.), just never occurred to people until Linux lifted off.

                The reasons to use BSD were often a lot more pragmatic, and there weren’t really people with that dream of one day replacing Windows. Linux has so far succeeded at that on the phone, though more in the sense that one could say it’s Linux plus lots of BSD code - and macOS and iOS are BSD after all.

                Even though BSD people often don’t want to hear it, the license might also play a part, especially on the hardware support side: with the GPL, code simply flows in, for mostly legal reasons, and one can at least look at it.

                I am sure that’s not the only reason, and of course it will be a mixture, but a person running Linux of their own free will likely won’t decide against FreeBSD because it looks text-based, especially not your average Arch Linux, Gentoo, Debian or Slackware user. They might even find it more convenient.

                Awareness, especially historically, is a huge factor. I hadn’t heard about BSD at all before the day I first installed it in 2005, which I think happened because I read that Gentoo’s Portage was inspired by it.

                Extremely subjective, but Linux seems to have a lot more missionary activity going on. I have met more than one person who was about to duck and cover when I mentioned Linux, fearing a speech about the moral and technical reasons why they should switch. This used to be worse, though. I think with the growth of the Linux community, people feel a lot less like they have to defend their decision. There are barely any flame wars about Windows vs Linux vs macOS these days.

                So I think marketing, network and social effects in general, as well as hype and a nice story with quite a bit of ideological undertone, make up a large portion of the history leading to the status quo. I know a few people who tried BSD, liked it, and only switched back for ideological reasons.

                While the BSDs are certainly not easy to use compared to macOS or Windows, I think that argument does not hold as a big driving factor at all when comparing against Linux in general and longer-term desktop usage. Even holding on to Ubuntu for an extended period of time (upgrading from one release to another) requires a similar level of interest.

                1. 1

                  I think a lot of it may also have had to do with GCC being so popular, and the push from the GNU folks towards Linux (at least “until Hurd is ready”). Combine that with Linux often being positioned as anti-Windows by users (I remember a /lot/ of zealous propaganda back in the day), and it certainly started to pick up mindshare quickly on college campuses in the mid 90s.

              1. 1

                As I said elsewhere, to turn AI into a serious field we should start by fixing its parlance.

                How could non-engineers understand what we are really talking about, if we use terms like “learning rate” and “training” that are strongly bound to their own personal experience?

                We are fooling them (and often ourselves) with such anthropomorphic language.

                1. 6

                  I am not sure how this is related to the article. The article seems to criticize (though very vaguely):

                  • That the large number of parameters and non-linearities make the models uninterpretable (black boxes). Techniques like ablation can make models more interpretable.
                  • That the optimization problem is not convex/concave, with a lot of local optima, saddle points, etc., and that the field has not come up with a better optimization method than variations of SGD.
                  • That a subset of machine learning practitioners do not understand the underlying theory and as a result cargo-cult everything from non-linearities, learning rate schedules, to attention mechanisms, etc.

                  It is also strange that the article suggests that machine learning is alchemy, since many machine learning algorithms are well-understood (e.g. logistic regression, linear SVMs, KNN classification).

                  The issue that you raise - that we use terms like “learning rate” and “training” that are strongly bound to people’s personal experience - is more related to explaining machine learning to people outside the field and to newcomers. For anyone who has taken an introductory course in machine learning, the learning rate has a precise mathematical definition.
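
                  For instance, in plain stochastic gradient descent the learning rate is nothing more than the scalar step size in the parameter update (textbook notation, added here for illustration, not part of the original comment):

                      \theta_{t+1} = \theta_t - \eta \, \nabla_\theta \mathcal{L}(\theta_t)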

                  Also, anthropomorphic language is used in a lot of technical fields. We are talking about mothers/daughters or parent/children when talking about trees, sink/swim for restoring the heap property in heaps, collisions in hash tables, semaphores in concurrency, etc. We do treat data structures and algorithms research as serious fields.

                  1. 6

                    I am not sure how this is related to the article.

                    The point of my talk at the seminar was that the language we use is “Good for literature and thus Bad for Science” (slide 39). After the talk, one of the listeners told me: “Yeah, the current AI hype makes it seem like we are alchemists, playing with something we do not really understand and without caring much about ethics”.
                    Note that this was before the Uber-driven and the Tesla-driven cars killed anyone.

                    The language we use forges our understanding of the world: we think with that language; our brain establishes relations through it even when we do not consciously express them.

                    Researchers who call an ANN intelligent fool themselves.

                    Also, anthropomorphic language is used in a lot of technical fields. We are talking about mothers/daughters or parent/children when talking about trees, sink/swim for restoring the heap property in heaps, collisions in hash tables, semaphores in concurrency, etc. We do treat data structures and algorithms research as serious fields.

                    The difference is that there is a clear structural relation between the parent/children metaphor in, say, processes and the experience people have of it, or of collisions, or of semaphores…

                    On the contrary, there is no evident structural relation between the computation of an ANN and an intelligence. It’s just an approximation of a function. The fact that it can approximate functions that we do not know just means that we cannot trust its outputs.

                    In other words, if you cannot establish whether a piece of software is correct, it’s not even broken! It’s… a toy? :-D

                    Selling such unexplainable software as “intelligent” is one of the most stupid errors we are making this decade.

                    1. 3

                      The difference is that there is a clear structural relation between the parent/children metaphor in, say, processes and the experience people have of it, or of collisions, or of semaphores…

                      On the contrary, there is no evident structural relation between the computation of an ANN and an intelligence. It’s just an approximation of a function. The fact that it can approximate functions that we do not know just means that we cannot trust its outputs.

                      You start from a hyperbolic description of machine learning (intelligence) that most practitioners in the field would never use or agree with.

                      I work in computational linguistics, and deep learning has taken over our field in the last 5-10 years. However, neither I nor any of my colleagues would call their models ‘intelligent’. It is abundantly clear what happens in most networks - e.g. in classification the last layer is typically a softmax, which is more-or-less off-the-shelf logistic regression. The non-linear layers are used to transform the input space such that a problem that is not linearly separable becomes linearly separable. Nobody would dare to call that intelligence.
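
                      For the record, that last layer computes nothing more mysterious than the standard softmax (a textbook definition, included here only for illustration):

                          \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}}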

                      Outside the hyperbole, a lot of anthropomorphic terms are quite intuitive. An attention layer divides the network’s attention over inputs. Adversarial training uses examples specifically crafted to fool the model. A forget gate determines how much information the model should retain from a previous time step.

                      The outlandish hyperbolic claims, such as ‘an ANN is intelligent’, come from a small subset of people who want VC money, more funding, or whatever. Or they are Google, Facebook, etc. and do it for publicity. Unfortunately, some well-known practitioners in the field have made such claims, but I suspect that in many cases there are ulterior motives. You are making a caricature of the field in general; I would recommend attending some conferences on computational linguistics (ACL, EMNLP), information retrieval, machine vision, etc. You will see that most practitioners actually live outside the hype bubble with its outlandish claims and would definitely not compare machine learning to human learning.

                      1. 3

                        I upvoted your answer because, while I don’t think you understood what I mean (my fault), I can understand your point of view.

                        Still, let me use your response to explain it better, with some examples.

                        You start from a hyperbolic description of machine learning

                        Learning is itself a term strongly tied to people’s experience. People learn themselves.
                        They tend to appreciate whoever is able to learn and consider her smart.
                        Anything that can learn (a cat, a dog…) qualifies as intelligent.

                        But machines do not learn.
                        You are just approximating a function by calibrating the weights in a long chain of logistic regressions (or whatever).

                        You, just like most researchers and people, get the impression that the network is learning just because with a better calibration you get a better approximation of the target function. As I said at the University of Milan, this is pretty similar to what happened to people facing “L’Arrivée d’un train en gare de La Ciotat” for the first time.

                        And the term “learning” helps this illusion. If you use “calibration”, the illusion instantly disappears.

                        An attention layer divides the network’s attention over inputs.

                        This says nothing practical about what the “attention layer” does.

                        If I say “the parent process has killed the child process”, anyone understands that the child process was somehow generated by the parent process and that the child process had something in common with the parent. They also understand that the child cannot react to input anymore, that it is dead.

                        Your phrase can at most describe the goal of the attention layer, but what does it actually do to the input?
                        Also, you need to apply your insight of “intelligence” to get an insight into what the “network’s attention” is.

                        Adversarial training uses examples specifically crafted to fool the model.

                        I’d say that examples designed (more properly, approximated) to maximize the error of the approximation computed by the ANN are used to minimize that error.
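
                        In symbols, that de-anthropomorphized description is just the usual min-max objective of adversarial training (standard notation, not the commenter’s own):

                            \min_\theta \; \mathbb{E}_{(x,y)} \; \max_{\|\delta\| \le \varepsilon} \; \mathcal{L}\bigl(f_\theta(x + \delta),\, y\bigr)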

                        Again, to turn this into an insight along the lines of “Adversarial training uses examples specifically crafted to fool the model”, you need:

                        1. to apply your insight of an intelligence that can be fooled
                        2. to apply your insight of adversary
                        3. to apply your insight of training

                        None of 1, 2 or 3 provides useful information about the mechanics of what is happening.

                        A forget gate determines how much information the model should retain from a previous time step.

                        Again, “forget” assumes someone who remembers.
                        A set of weights in a graph is not a memory. It’s just a set of weights.
                        Memory is your interpretation of those weights.
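
                        Indeed, in a standard LSTM the “forget gate” is nothing but a learned elementwise scaling of the previous cell state (textbook equations, included here for illustration):

                            f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f), \qquad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t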

                        I guess you could easily find more descriptive and less anthropomorphic names for all these things.

                        But they have forged your mind so much that even if you know that there is no intelligence in an ANN, even if you know that it’s not intelligent at all, any external observer (including your peers) will hear your talk in terms that assume an intelligence. Those who listen to you will get an insight into what you are talking about only if they assume an intelligence in the machine.

                        Indeed at the very beginning, you speak about “Artificial Intelligence”. About “Machine Learning”.

                        I agree: it’s a very hyperbolic description of the status of the field. ;-)

                1. 45

                  Your original 600MB tracking pixel was a pain in the ass; I’m sure everybody who accessed the website using limited cellular data is very fond of you.

                  1. 16

                    Especially during a weekend when many people (at least Europeans) were traveling.

                    Also keep in mind that a lot of people outside the US (e.g. in Europe) are on 500MB/1GB plans, so you burn through that with LTE very quickly. And not everyone can afford to buy extra data. This is extremely rude.

                    1. 4

                      Well, even as an American, I accessed this site and that page over my cellular connection. Sigh, even with a 10GiB monthly cap, that probably used up a ton of data pointlessly.

                    2. 3

                      … I was wondering how I ran out of data 4 days early this month.

                      1. 3

                        I, well, believe it should count as part of the experience.

                        1. 2

                          You don’t RSS your mobile feeds?

                          1. 3

                            I haven’t used RSS since like 2009.

                            1. 6

                              But it was 2002, at least according to the post dates!

                              1. 4

                                We have https, too, to make sure those dates weren’t tampered with in transit.

                        1. 12

                          Google has appealed again, but if the court decides in favor of Oracle, basically any open source software that mimics the API of something closed-source will be in trouble. Linux, various GNU libraries, and so on all implement numerous classic Unix interfaces. It would render essentially the entire Linux ecosystem (and likely most other open source operating systems) infringing.

                          1. 10

                            Not necessarily. Remember that the EU has specifically written into their copyright law that APIs (and anything needed for interoperability) are not subject to any copyright protection (This, incidentally, is why EU law generally considers GPL and LGPL identical, as the linking restriction would violate this rule, and this is also why the EUPL 1.2 is compatible with MPL, LGPL, GPL, and more).

                            So projects such as VLC (and with that, ffmpeg) which are legally based in the EU, or the entire KDE suite, aren’t affected.

                            This was also used by LibreOffice/OpenOffice/StarOffice when it was originally created by reverse engineering the Microsoft Office formats, because StarOffice was developed in Germany, and also didn’t have to worry about US copyright law.

                            If I recall correctly (but don’t quote me on this), some parts of the WINE project were also based on reverse engineered Windows code that was reversed in the EU for similar reasons, but for compliance with US law, these parts were later replaced by untainted code.

                            1. 1

                              Are you a lawyer? I am not seeing a precedent that the GPL’s strong copyleft is untenable in the EU. For example, the VMWare case seems to rely on the copyleft of the GPL in a way that the LGPL would not have, right?

                              The strong copyleft of the GPL in the US does not rely on copyrightability of APIs, I have heard people say.

                              This FOSDEM talk also suggests that the GPL is quite strong.

                              Clean-room reverse engineering is known to be legally safe in the US. We rely on this for GNU Octave, since we are implementing the Matlab language.

                              (I get a general air of snobbery from Europeans about how stupid US law is and how enlightened and less onerous European law is, but I don’t think this is generally true.)

                              1. 2

                                Clean-room reverse engineering is known to be legally safe in the US.

                                Making a compatible product through reverse engineering is exactly the kind of thing the API ruling makes me worry about. It seems like it was intended to bolster profitable lock-in in the long-term.

                                1. 1

                                  I am not a lawyer, but the authors of the EUPL (who are lawyers) have written about this situation, and declared that they believe it is the case.

                                  1. 1

                                    I was looking for this interpretation from the EUPL authors, but couldn’t find it. Can you show it to me?

                                    1. 1

                                      See https://joinup.ec.europa.eu/page/eupl-compatible-open-source-licences#section-3 and https://joinup.ec.europa.eu/page/eupl-compatible-open-source-licences#section-4

                                      In Europe the recitals 10 and 15 of the Directive 2009/24/EC on the protection of computer programs seems to invalidate the idea of “strong copyleft”: any portion of code that is strictly necessary for implementing interoperability can be reproduced without copyright infringement. This means that linking cannot be submitted to conditions or restricted by a so-called “strong copyleft licence”. As a consequence, linking two programs does not produce a single derivative of both (each program stay covered by its primary licence). Therefore the EUPL v1.1 and v1.2 are copyleft (or share-alike) for protecting the covered software only from exclusive appropriation, but it is without pretention for any “viral extension” to other software in case of linking.

                                      1. 1

                                        Thanks! They seem to be arguing that interoperability allows you to use an API, but they seem cautious to assert this unequivocally (“it seems that”). It looks like the actual case is still untested in courts, both in the EU and the US, and I don’t know of anyone who has tried to challenge it by linking to GPLed programs in Europe and not releasing their own code under a compatible license.

                                        I would be cautious, therefore, to assert that this means the LGPL and the GPL are equivalent in Europe (I am also not a lawyer), unless it becomes a well-established legal consensus there, which doesn’t seem to be the case so far.

                              2. 4

                                It will definitely be messy, but (IANAL) Caldera (SCO) released ancient UNIX under a BSD license [1]. Since the BSD license and the GPL are compatible, wouldn’t it simply be a matter of adding the Caldera copyright notice/license?

                                Also, there is the deal between USL and BSDi/the University of California that basically cleared BSD-Lite [2]. So there is a similar option there to make APIs BSD-derived?

                                Of course, System V-based APIs may be a different story altogether.

                                [1] http://www.lemis.com/grog/UNIX/

                                [2] https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc._v._Berkeley_Software_Design,_Inc.#Terms_of_the_settlement

                                1. 2

                                  This case is one reason I’ve promoted Wirth’s languages for mobile a lot. Could be one of the only things that’s memory- and legally-safe. ;)

                                  1. 9

                                    A Wirth language is not very useful if it can’t use any system APIs, because they are all in copyright violation ;).

                                    1. 1

                                      I meant with their own APIs. You’d have done this at the start of Android.

                                  2. 1

                                    Aren’t most if not all of those ‘classic unix interfaces’ defined somewhere in the POSIX standard?

                                  1. 24

                                      MISRA (the automotive applications standard) specifically requires single-exit-point functions. While refactoring some code to satisfy this requirement, I found a couple of bugs where resources were not released before returning on some rarely taken code paths. With a single return point, we moved the resource release to just before the return. https://spin.atomicobject.com/2011/07/26/in-defence-of-misra/ provides another counterpoint, though it wasn’t convincing when I read it the first time.
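
                                      A minimal sketch of the shape that refactoring takes (hypothetical code with made-up resource handling, not the code from that project):

                                      #include <cstdio>
                                      #include <cstdlib>

                                      // Every path falls through to the one cleanup-and-return block at
                                      // the bottom, so a rarely taken branch cannot skip a release.
                                      int process_file(const char* path) {
                                          int status = -1;
                                          char* buf = nullptr;
                                          std::FILE* f = std::fopen(path, "rb");

                                          if (f != nullptr) {
                                              buf = static_cast<char*>(std::malloc(4096));
                                              if (buf != nullptr && std::fread(buf, 1, 4096, f) == 4096) {
                                                  status = 0;  // success
                                              }
                                          }

                                          // single exit point: resources are released here on every path
                                          std::free(buf);      // free(nullptr) is a no-op
                                          if (f != nullptr) {
                                              std::fclose(f);
                                          }
                                          return status;
                                      }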

                                    1. 8

                                        This is probably more relevant for non-GC languages. There, using labels and goto can work even better!
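
                                        For instance, a sketch of the label-based cleanup style (again a hypothetical function with made-up names, not from the project discussed above):

                                        #include <cstdio>
                                        #include <cstdlib>

                                        // Error paths jump to a label that unwinds exactly the
                                        // resources acquired so far.
                                        int process_file(const char* path) {
                                            int status = -1;
                                            std::FILE* f = nullptr;
                                            char* buf = nullptr;

                                            f = std::fopen(path, "rb");
                                            if (f == nullptr)
                                                goto out;

                                            buf = static_cast<char*>(std::malloc(4096));
                                            if (buf == nullptr)
                                                goto out_close;

                                            if (std::fread(buf, 1, 4096, f) == 4096)
                                                status = 0;

                                            std::free(buf);   // success and short-read paths land here
                                        out_close:
                                            std::fclose(f);
                                        out:
                                            return status;
                                        }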

                                      1. 2

                                          Maybe even for assembly, where before returning you must manually ensure the stack pointer is in the right place and registers are restored. In that case, there are more chances to introduce bugs if there are multiple returns (and it might make the disassembly harder to follow when debugging embedded code).

                                        1. 1

                                          In some sense this is really just playing games with semantics. You still have multiple points of return in your function… just not multiple literal RET instructions. Semantically the upshot is that you have multiple points of return but also a convention for a user-defined function postamble. Which makes sense, of course.

                                        2. 2

                                          Sure, but we do still see labels and gotos working quite well under certain circumstances. :)

                                          For me, I like single-exit-point functions because they’re a bit easier to instrument for debugging, and because I’ve had many times where a missed return caused some other code to execute unexpectedly; with this style, you’re already in a tracing mindset.

                                          Maybe the biggest complaint I have is that if you properly factor these then you tend towards a bunch of nested functions checking conditions.

                                          1. 2

                                            Remember the big picture when focusing on a small, specific issue. The use of labels and goto might help for this problem, but it also might throw off automated analysis tools looking for other problems. These mismatches between what humans and machines understand are why I wanted real, analyzable macros for systems languages. I had one for error handling a long time ago that looked clean in code but generated the tedious, boring form that machines handle well.

                                            I’m sure there’s more to be gleaned using that method. Even the formal methodists are trying it now with “natural” theorem provers that hide the mechanical stuff a bit.

                                            1. 2

                                              Yes, definitely. I think that, in general, if we could create abstractions within the language itself to denote these specific patterns (in this case, early exits), we would gain on all levels: clarity, efficiency, and the ability to update the tools to support them. Macros and meta-programming are definitely much better options, or maybe something like the ability to easily script compiler passes and include the scripts as part of the build process, which would push the idea of meta-programming one step further.

                                          2. 5

                                            I have mixed feelings about this. I think it makes sense in an embedded environment because cleaning up resources is so important. But the example presented in that article is awful. The “simpler” example isn’t actually simpler (and it’s actually different).

                                            Overall, I’ve found that forcing a single return in a function often makes the code harder to read. You end up setting and checking state all of the time. Those who say (and I don’t think you’re doing this here) that you should use a single return because MISRA C does it seem to ignore the fact that there are specific restrictions in the world MISRA is targeting.

                                            1. 4

                                              Golang gets around this with defer, though that can incur some overhead.

                                              1. 8

                                                C++, Rust, etc. have destructors, which do the work for you automatically (the destructor/drop gets called when a value goes out of scope).
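
                                                A minimal C++ sketch of the destructor approach (a hypothetical wrapper class, just to illustrate the mechanism):

                                                #include <cstdio>

                                                // The destructor runs whenever the object goes out of scope,
                                                // so the file is closed on every return path, early returns included.
                                                class File {
                                                public:
                                                    explicit File(const char* path) : f_(std::fopen(path, "rb")) {}
                                                    ~File() { if (f_ != nullptr) std::fclose(f_); }
                                                    File(const File&) = delete;             // single owner, non-copyable
                                                    File& operator=(const File&) = delete;
                                                    std::FILE* get() const { return f_; }
                                                private:
                                                    std::FILE* f_;
                                                };

                                                int read_header(const char* path) {
                                                    File f(path);
                                                    if (f.get() == nullptr)
                                                        return -1;  // early return: destructor runs (and does nothing, f_ is null)
                                                    char buf[64];
                                                    if (std::fread(buf, 1, sizeof buf, f.get()) != sizeof buf)
                                                        return -1;  // early return: file closed automatically
                                                    return 0;       // normal return: file closed automatically
                                                }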

                                                1. 1

                                                  Destructors tie you to using objects instead of just calling a function. They also make cleanup implicit, whereas defer is more explicit.

                                                  The Golang authors could have implemented constructors and destructors, but generally the philosophy is to make the zero value useful and not to add to the runtime where you could just call a function.

                                                2. 4

                                                  defer can be accidentally forgotten, while working around RAII / scoped resource usage in Rust or C++ is harder.

                                                3. 2

                                                  Firstly, he doesn’t address early returns from error conditions at all.

                                                  And secondly, his example of a single return…

                                                  int singleRet(int a, int b, int c) {
                                                      int rt = 0;
                                                      if (a) {
                                                          if (b && c) {
                                                              rt = 2;
                                                          } else {
                                                              rt = 1;
                                                          }
                                                      }
                                                      return rt;
                                                  }
                                                  

                                                  Should be simplified to…

                                                  a ? (b && c ? 2 : 1) : 0
                                                  
                                                  1. 1

                                                    Are you sure that wasn’t a result of having closely examined the control flow while refactoring, rather than a benefit of the specific form you normalised the control flow into? Plausibly you might have spotted the same bugs if you’d been changing it all into any other specific control-flow format that involved not-quite-local changes?

                                                  1. 11

                                                     What I miss from those times: keyboard layout switching worked reliably and happened instantly. Now it causes loss of focus in the current window, long delays (after pressing the “switch layout” keybinding, a few typed characters still come out in the old layout), and few keybinding choices (only things like Ctrl+Shift and Alt+Shift). It’s painful now in all distros, and I’m not sure whether the old functionality (built into the X server, I think) can still be used.

                                                     Also, both the latest GNOME and KDE have terrible UIs. GNOME tries to treat the desktop as a tablet computer, and KDE has a Vista-era shiny plastic look.

                                                    1. 8

                                                       This is why I’ve stuck with XFCE for so long. I am a bit concerned now that XFCE is moving to GTK3; I fear it will end up more like GNOME 3 than XFCE.

                                                      1. 1

                                                        I think that’s unlikely. Most things are already ported to Gtk3 and they look exactly like they did on Gtk2.

                                                      2. 5

                                                       I don’t understand the hate for GNOME. When you critique GNOME’s UI, are you comparing it to the high-water mark, the best desktop UI you’ve ever experienced, or to the latest iterations of macOS and Windows? GNOME isn’t developed in a vacuum; it is competing with the mainstream commercial desktop environments, which means compromises that negatively affect highly technical users, but results in a product that in some dimensions of UI may still be better than Windows and macOS, which is impressive IMO.

                                                        1. 6

                                                         I only dislike its desktop elements, mostly the top bar, which consists of a strange menu item in the left corner and a clock in the center. There is too much unused space around the clock. This is a bad UI decision originally implemented on the iPad, which has a standard mobile phone status bar on top (it originates from “feature phones”, not even the iPhone).

                                                         GTK3, however, is great (at least on Linux), and I like the settings dialogs and GNOME apps.

                                                          1. 3

                                                           I appreciate that GNOME is at least trying to do something other than the “yet another Windows 95 clone” that the X11 world is fixated on. (Unless it’s a tiling WM… I wonder what a UX-oriented desktop built around tiling would be like…)

                                                            1. 2

                                                              I’m comparing it to Gnome 2 and XFCE, and it fails terribly in this regard.

                                                              XFCE is well-liked because it doesn’t try to abandon its user-base in favor of chasing some mass-adoption unicorns.

                                                             If mass adoption of Linux on the desktop ever happens, it will not be caused by GNOME 3 displaying fewer options in its GUI.

                                                              1. 1

                                                               Ah, I definitely felt similarly when moving from GNOME 2 to GNOME 3. Once I got used to GNOME 3, though, I forgave GNOME. The Spotlight-like search is better than macOS’s. The built-in tiling is good enough. The default Debian themes are classy. The animations are classy and smooth even with integrated graphics. It never freezes or crashes. The best part is that all these batteries are included, so GNOME requires very few user choices or customization.

                                                                1. 2

                                                                 I use GNOME 3 on my Linux machine, but I can’t say that I am happy. How do you live without menus? Or system tray icons (for e.g. Dropbox, Keybase)?

                                                                 I know that there are some extensions that bring these things back, but they tend to reduce the stability of GNOME. And with Wayland, bugs tend to crash gnome-shell/mutter and log you out of the session completely.

                                                                 I wonder whom they are targeting when they remove features that have been part of the WIMP paradigm for more than three decades. No one wants big innovation on the desktop; just provide a robust, predictable desktop environment that is up to date with the latest standards (Wayland, Vulkan rendering, etc.).

                                                                  (Of course, it’s their project and they can do whatever they want to do with it, I just don’t understand the philosophy.)

                                                                  1. 1

                                                                    I use spotlight for everything, it’s brilliant :/

                                                            2. 5

                                                         KDE has a Vista-era shiny plastic look.

                                                         That’s much easier to solve (through the thousands of available themes) than “GNOME tries to treat the desktop as a tablet computer”. For example, my own KDE setup looks like this: https://i.imgur.com/8eAze8v.png

                                                              1. 1

                                                           Does it crash often? The last time I used it, it crashed periodically (but that was a few years ago).

                                                                1. 2

                                                             I haven’t actually had a crash in over a year. It has become much more stable in the past few months; no more flickering when adding/removing monitors quickly, either.

                                                            1. 16

                                                              I fucking hate reCaptcha, partly because the problems seem to be getting harder over time. Sometimes I literally can’t spot the cars in all the tiles.

                                                              1. 19

                                                                 It’s also very effective at keeping Tor users out. reCAPTCHA will, more often than not, refuse to even serve a CAPTCHA (or serve an unsolvable one) to Tor users. Then remember that a lot of websites are behind Cloudflare, and Cloudflare uses reCAPTCHA to check users.

                                                                Oops.

                                                                1. 2

                                                                   For the Cloudflare issue, you can install Cloudflare’s Privacy Pass extension, which maintains anonymity but still greatly reduces or eliminates the reCAPTCHAs Cloudflare shows you if you’re coming from an IP with a bad reputation, such as many of the Tor exit nodes.

                                                                  (Disclaimer: I work at Cloudflare but in an unrelated department)

                                                                  1. 2

                                                                    Luckily, CloudFlare makes it easy for site owners to whitelist Tor so Tor users don’t get checked.

                                                                    1. 9

                                                                      Realistically, how many site owners do that, though?

                                                                  2. 16

                                                                    I don’t hate it because it’s hard. I hate it because I think Google lost its moral compass. So, the last thing that I want to do is to be a free annotator for their ML efforts. Unfortunately, I have to be a free annotator anyway, because some non-Google sites use reCaptcha.

                                                                    1. 7

                                                                       Indeed. Also annoying is that you have to guess what the stupid thing considers a “car”. Is it a full image of the car or not? Does the “car” span multiple tiles? Is it obscured in one tile and not in another? Which of those “count” if so? Should I include all the tiles if, say, the front bumper is in one tile or not? (My experiments have indicated not.)

                                                                       Or the storefronts: some don’t have any signage; they could be storefronts, or not - literally unknowable by a human or an AI with information that limited.

                                                                       I’m sick of being used as a training set for AI data. This is even more annoying than trying to guess whether the text in question was set in Fraktur and whether the ligature in question is what Google thinks is an f or an s. I love getting told I’m wrong by a majority of people who cannot read Fraktur and distinguish an f from an s from, say, an italic i or l. Now I get to be told I can’t distinguish a “car” by an image-training algorithm.

                                                                      1. 4

                                                                        At some point, only machines will be able to spot the cars.

                                                                      1. 4

                                                                        If I move off of OS X, it will be to Windows. For what I use my machine for, the applications simply aren’t there on any Unix other than OS X.

                                                                        1. 4

                                                                    I’ve been using Linux as my main desktop for about 3 years and have used all of the major desktop environments. KDE Plasma looks good, but either its file indexer (baloo) holds one CPU core hostage or the desktop crashes if you type specific words, or too fast, into the launcher - in short, a horrible experience.

                                                                    I used GNOME for about a year and it was not much better: the plugins/extensions are often buggy, and especially under Wayland it crashes often and can’t restart like on X11, i.e. you lose all of your session state. Additionally, it feels laggy even on a beefed-up machine (6 cores, latest-generation AMD GPU) because the compositor is single-threaded. GDM, GNOME’s display manager, is also sluggish, runs a process for each settings menu since GNOME 3.26, and starts a PulseAudio session which breaks Bluetooth headset connections. Also unusable for a productive environment, in my opinion.

                                                                    Eventually I switched back to the desktop environment I started my Linux journey with, namely XFCE, with lightdm as the display manager. With compton as the compositor it looks quite okay, is rock solid (relative to the other DEs I used), and everything feels snappy. As a note, I ran all of the DEs on Arch Linux, and I haven’t even talked about display scaling and multi-monitor usage, still a horror story.

                                                                          TL;DR The year of the Linux desktop is still far away in the future.

                                                                          1. 5

                                                                      I wouldn’t really know where to go. I have an Arch desktop at home (quad Xeon, 24 GB RAM, 2 SSDs), and while that machine is much faster than my MacBook Pro, I usually end up using the MacBook Pro at home (and always at work), simply because there are no equivalents for me for applications like OmniGraffle, Pixelmator/Acorn, Microsoft Office (project proposals are usually floated in Word/Excel format with track changes), Beamer, etc. Also, both at work and at home, AirPlay is the standard way to get things on large screens.

                                                                            Also, despite what people are saying. The Linux desktop is still very buggy. E.g. I use GNOME on Wayland with the open amdgpu drivers on Arch (on X I can’t drive two screens with different DPIs). And half of the time GNOME does not even recover from simple things like switching the screen on/off (the display server crashes, HiDPI applications become blurry, or application windows simply disappear).

                                                                            Windows would probably have more useful applications for me than Linux or BSD (since many open source applications run fine on WSL). But my brain is just fundamentally incompatible with any non-unix.

                                                                            1. 8

                                                                        Linux has been my main desktop for 20 years or so, although I am a software developer and either do not need the applications you mentioned or use alternatives.

                                                                        Anyway, what I actually wanted to say: on the hardware side I’ve had few issues with Linux, certainly not more than with Windows or OS X, and at least with Linux (if I put the time into it) the issues can generally be fixed. I’ve been running multiple monitors for years, and hibernation, which used to be a pain in the ass in the early 2000s, has been good for me on a wide array of hardware for years (definitely better than on Windows and OS X, which run on supported hardware!). Granted, I can’t blindly grab hardware off the shelf and have to do some research up front on supported hardware. But that’s what you get if hardware vendors do not officially support your OS, and it does come with many upsides as well.

                                                                        I run pretty bare systems though, and try to avoid Windows-isms that bring short-term convenience but also additional complexity - so no systemd, PulseAudio, or desktop environments like GNOME for me. Still, I’m running Linux because I want to be able to run Dropbox (actually pCloud in my case), Steam, etc.

                                                                              1. 4

                                                                          Linux has been my main desktop for 20 years or so

                                                                                Different people, different requirements. I have used Linux and BSD on the desktop from 1994-2007. I work in a group where almost everybody uses Macs. I work in a university where most of the paperwork is done in Word (or PDF for some forms). I have a fair teaching load, so I could mess around for two hours to get a figure right in TikZ (which I sometimes do if I think it is worth the investment and have the time) or I could do it in five minutes in OmniGraffle and have more time to do research.

                                                                          It’s a set of trade-offs. Using a Mac saves a lot of time and reduces friction in my environment. In addition, one can run pretty much the same open source applications as on Linux via Homebrew.

                                                                                I do use Linux remotely every day, for deep learning and data processing, since it’s not possible to get a reasonable Mac to do that work.

                                                                                Anyway, what I actually wanted to say: on the hardware side I’ve had little issues with Linux, certainly not more than with Windows or OS X and at least with Linux (if I put the time into it) the issues can generally be fixed.

                                                                          The following anecdote is not data, but as a lecturer I see a lot of student presentations. Relatively frequently, students who run Linux on their laptops have problems getting projectors working, often ending up borrowing a laptop from one of their colleagues. The Mac-wielding students, on the other hand, often forget their {Mini DisplayPort, USB-C} -> VGA adapters, but have no problems otherwise.

                                                                            2. 2

                                                                              Same. I don’t use them every day, but I do need Adobe CS. I also want (from my desktop) solid support for many, many pixels of display output. Across multiple panels. And for this, Windows tends to be better than Mac these days.

                                                                              1. 1

                                                                                The Windows Subsystem for Linux is also surprisingly good. I would say that it offers just enough for most OS X users to be happy. Linux users, maybe not.

                                                                              2. 1

                                                                                One thing I’m finding is that a lot of Mac apps I rely on have increasingly capable iOS counterparts (Things, OmniOutliner, Reeder, etc.) so I could potentially get away with not having desktop versions of those. That gets me closer to cutting my dependency on macOS, though there’s still a few apps that keep me around (Sketch, Pixelmator) and the ever-present requirement of having access to Xcode for iOS development.

                                                                              1. 22

                                                                                Ironically, the biggest thing that stops people from joining a Mastodon instance is the paradox of choice. If you want a Twitter account, there’s exactly one place to go and a newcomer has zero things to figure out before joining.

                                                                                If you want to join a Mastodon instance, you have to grok the distributed nature, figure out why some instances block other instances, and decide which code of conduct you endorse (or pick an instance without one). All those choices create a higher barrier that new users have to overcome to “get in”.

                                                                                1. 2

                                                                                  Ironically, the biggest thing that stops people from joining a Mastodon instance is the paradox of choice.

                                                                                  And network effects. I am not very active on Mastodon, since most friends and colleagues (computational linguistics, ML) are not on Mastodon.

                                                                                  I also think that the default user interface, even though it is nice for advanced users, is not simple enough.

                                                                                  1. 2

                                                                                    I think it largely depends on how your interests match the instance you join.

                                                                                    I was invited to join mastodon.social, but I now realize that I mainly follow people from other instances.

                                                                                    Probably the fact that I’m mostly interested in software-related matters (even if from a wide range of perspectives, including law and ethics) is what makes the local timeline pretty boring to me…

                                                                                    Finding the right instance might not be simple.

                                                                                    Maybe a tag cloud representing the topics treated on the instance could help with the decision (together with the code of conduct, obviously).

                                                                                  2. 2

                                                                                    I can see why you’d think this, but my experience has been that it really doesn’t matter, other than obvious stuff like not picking a fascist-friendly place. If you’re on a small instance your local timeline will be quieter, but personally I found the majority of people to follow through IRC and threads like this, so the local timeline didn’t really come into it.

                                                                                    1. 1

                                                                                      I have never depended heavily on the local or federated timeline for discoverability, but I joined during a wave of signups where lots of people I already knew on Twitter were already joining.

                                                                                      I imagine that, if the one person you know on the fediverse is also the person who told you about it, and that person is also a newbie or has mostly different interests, the local timeline matters a lot more. (And, if you’re reasonably ‘normie’ – if your strong interests aren’t geared toward witchcraft, furry fandoms, communism, and the internal politics of the FSF – you might have a really hard time finding an instance geared toward you anyway.)

                                                                                      I couldn’t be the one to do it, but I wonder if it would make sense to make a handful of sports-centric nodes. It would probably attract a lot of users.

                                                                                    2. 1

                                                                                      And so instead of taking the time to make informed choices, these users would rather delegate that responsibility to a corporation, which then makes all sorts of important choices for them…

                                                                                      1. 12

                                                                                        I think it’s a bit flippant to say that they don’t make an informed choice. Some people really do prioritize their time over other things.

                                                                                        1. 3

                                                                                          or they have no idea what advantages a decentralised system would provide, and completely overlook its existence

                                                                                          1. 5

                                                                                            or they don’t value the benefits the decentralised system provides, and consider the centralisation a pro.

                                                                                    1. 6

                                                                                      Debian certainly has some serious NIH optics (in my eyes anyway).

                                                                                      1. 2

                                                                                        Indeed. This is an advantage of shipping packages as closely to upstream as possible. Either upstream is active and fixes vulnerabilities or you drop the package. Also, if some package has a problem, there is a stronger incentive to work it out with upstream. And upstream is not badgered with distribution-specific bugs.

                                                                                      1. 9

                                                                                        This is a bold statement; I do quite a bit of ssh -X work, even thousands of miles away from the server. I very much wish ssh -X could forward sound somehow, but I certainly couldn’t live without X’s network transparency.

                                                                                        1. 6

                                                                                          Curious, what do you use it for? Every time I tried it, the experience was painfully slow.

                                                                                          1. 7

                                                                                            I find it okay for running things that aren’t fully interactive applications. For example I mainly run the terminal version of R on a remote server, but it’s nice that X’s network transparency means I can still do plot() and have a plot pop up.

                                                                                            1. 5

                                                                                              Have you tried SSH compression? I normally use ssh -YC.
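                                                                                              For anyone who wants this by default, roughly the same thing can go into ~/.ssh/config. A minimal sketch with a hypothetical host alias (these are standard ssh_config options; -Y corresponds to trusted forwarding, -C to compression):

                                                                                                  Host myserver
                                                                                                      ForwardX11 yes
                                                                                                      ForwardX11Trusted yes   # what -Y enables; plain -X is untrusted forwarding
                                                                                                      Compression yes         # what -C enables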

                                                                                              1. 4

                                                                                                Compression can’t do anything about latency, and latency impacts X11 a lot since it’s an extremely chatty protocol.

                                                                                                1. 4

                                                                                                  There are some attempts to stick a caching proxy in the path to reduce the chattiness, since X11 is often chatty in pretty naive ways that ought to be fixable with a sufficiently protocol-aware caching server. I’ve heard good things about NX, but last time I tried to use it, the installation was messy.

                                                                                                  1. 1

                                                                                                    There’s a difference between latency (what you are talking about) and speed (what I replied to). X11 mainly transfers an obscene number of bitmaps.

                                                                                                    1. 1

                                                                                                      Both latency and bandwidth impact perceived speed.

                                                                                              2. 6

                                                                                                Seconded. Decades after, it’s still the best “remote desktop” experience out there.

                                                                                                1. 3

                                                                                                  I regularly use it when I am on a Mac and want to use some Linux-only software (primarily scientific software). Since the machines that I run it on are a few floors up or down, it works magnificently well. Of course, I could run a Linux desktop in a VM, but it is nicer having the applications directly on the Mac desktop.

                                                                                                  Unfortunately, Apple does not seem to care at all about XQuartz anymore (can’t sell it to the animoji crowd) and XQuartz on HiDPI is just a PITA. Moreover, there is a bug in Sierra/High Sierra where the location menu (you can’t make this up) steals the focus of XQuartz all the time:

                                                                                                  https://discussions.apple.com/thread/7964085

                                                                                                  So regretfully, X11 is out for me soon.

                                                                                                  1. 3

                                                                                                    Seconded. I have a fibre connection at home. I’ve found X11 forwarding works great for a lot of simple GTK applications (EasyTag), file managers, etc.

                                                                                                    Running my IntelliJ IDE or Firefox over X11/OpenVPN was painfully slow, and IntelliJ became buggy, but that might have just been OpenVPN. Locally, within the same building, X11 forwarding worked fine.

                                                                                                    I’ve given Wayland/Weston a shot on my home theater PC, with the XWayland module for backward compatibility. It works… all right. Almost all my games (Humble/Steam) work, thankfully, but I have very few native Wayland applications. Kodi is still glitchy, and I know Weston is meant to be just a reference implementation, but it’s still kinda garbage. There also don’t appear to be any Wayland display managers on Void Linux, so if I want to display a login screen, it has to start X and then switch to Wayland.

                                                                                                    I’ve seen the Wayland/X talk and I agree, X has a lot of garbage in it and we should move forward. At the same time, it’s still not ready for prime time. You can’t say, “Well you can implement RDP” or some other type of remote composition and then hand wave it away.

                                                                                                    I’ll probably give Wayland/Sway a try when I get my new laptop to see if it works better on Gentoo.

                                                                                                    1. 2

                                                                                                      No hand waving necessary, Weston does implement RDP :)
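                                                                                                        Last time I looked, enabling it was just a matter of starting Weston with the RDP backend, roughly like the line below (backend and option names have shifted between Weston versions, so treat this as a sketch rather than the exact invocation):

                                                                                                            weston --backend=rdp-backend.so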

                                                                                                  1. 5

                                                                                                    Work:

                                                                                                    I wrote and submitted some patches to the Rust Tensorflow bindings 1-2 weeks ago to make tensors, graphs, and ops Send + Sync. In the latter two cases this was trivial, but for tensors it required a bit of work, since tensors of types where C and Rust do not have the same representation are lazily unpacked. I didn’t want to replace Cell for interior mutability with RwLock, because that pollutes the API with lock guards. So I opted for separating the representations of types where the C/Rust types do and don’t have the same representation, so that tensors are at least Send + Sync for types where the representations match.
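                                                                                                    To illustrate the idea, a rough, hypothetical sketch (the names and the actual layout of the tensorflow bindings differ; this only shows the type-level split):

                                                                                                        // Hypothetical sketch: tensors over element types whose C and Rust
                                                                                                        // representations match use a plain buffer and get Send + Sync for
                                                                                                        // free; the mismatched case keeps the lazily-unpacked Cell and thus
                                                                                                        // stays !Sync.
                                                                                                        use std::cell::Cell;

                                                                                                        /// Marker for element types with identical C and Rust representations.
                                                                                                        trait MatchingLayout {}
                                                                                                        impl MatchingLayout for f32 {}
                                                                                                        impl MatchingLayout for i64 {}

                                                                                                        /// Representation when the layouts match: just the raw buffer.
                                                                                                        /// Vec<T> is Send + Sync whenever T is, so no unsafe impls needed.
                                                                                                        struct MatchingTensor<T: MatchingLayout> {
                                                                                                            data: Vec<T>,
                                                                                                        }

                                                                                                        /// Representation when the layouts differ: the C-side buffer plus a
                                                                                                        /// Rust-side copy that is unpacked lazily behind a Cell, which is
                                                                                                        /// interior mutability and therefore not Sync.
                                                                                                        struct UnpackedTensor<Raw, T> {
                                                                                                            raw: Vec<Raw>,
                                                                                                            unpacked: Cell<Option<Vec<T>>>,
                                                                                                        }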

                                                                                                    Since the patches were accepted, I am now implementing simple servers for two (Tensorflow-using) natural language processing tools (after some colleagues requested that I make them available in that way ;)).

                                                                                                    Besides that, since it’s exam week I am writing an exam for Wednesday and there’s a lot of correction work to do after that.

                                                                                                    Semi-work/semi-home:

                                                                                                    I have been packaging one application (a treebank search tool) as a Flatpak. Thus far, I had been rolling Ubuntu and Arch packages; building the Flatpak was far less work than I expected, and far less work than the Ubuntu packages. Also, the application seems to work well with portals, since most file opening/saving goes through QFileDialog. I guess I am also benefitting from having rewritten some sandbox-unfriendly code when sandboxing the macOS build.
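                                                                                                    In case anyone wants to try it, the whole build loop is pleasantly short, roughly the two commands below (the app ID and file names are made up for illustration):

                                                                                                        flatpak-builder --repo=repo --force-clean build-dir org.example.TreebankTool.json
                                                                                                        flatpak build-bundle repo treebank-tool.flatpak org.example.TreebankTool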

                                                                                                    1. 2

                                                                                                      I thought some of you might never have heard of this concept. Here’s the PCI version I’m really focusing on; I thought it better to submit where it started, then add that. The problem was that the UNIX workstations, thought to be better in many ways, couldn’t run the PC software people were getting locked into. Instead of regular virtualization, Sun had the clever idea to straight-up put PC hardware in their UNIX boxes to run MS-DOS and, I think, Windows apps. Early PS3s did something similar, keeping a PS2 Emotion Engine in them for emulation. I can’t recall others off the top of my head, though.

                                                                                                      The reason I’m posting this is that we’re currently trying to escape x86 hardware in favor of RISC-V and other stuff. My previous solution was two boxes in one chassis with a KVM switch. That might be too much for some users. I figured this submission might give people ideas about using modern card computers… which have usable specs… with FOSS workstations to run the legacy apps off the cards. The FOSS SoC, especially its IOMMU, might isolate them in shared RAM from the rest of the system. It would also run apps from the shared hard drive in a mediated way. There should also be software that can seamlessly move things into or out of the system, but with the trusted part running on the FOSS SoC. It could even be a hardware component managed by trusted software, if one wants a speed boost and a possible reduction in attack surface.

                                                                                                      1. 3

                                                                                                        I think you might be misreading - the 386 in the Sun386i is the only CPU - there’s no SPARC, it runs an x86 Solaris with DOS VDMs provided by V86 mode on the 386.

                                                                                                        PC-on-cards were somewhat popular with Macs before software emulation in the late 90s got good enough.

                                                                                                        1. 1

                                                                                                          To be extra clear, this is what my comment is about. It’s a PCI card that runs x86 software alongside Solaris/SPARC. I found the other one while searching for it. If they’re unrelated, then my bad, thanks for the tip, and let’s explore the PCI card concept for x86 and RISC-V.

                                                                                                        2. 1

                                                                                                          There should also be software that can seamlessly move things into or out of the system, but with the trusted part running on the FOSS SoC.

                                                                                                          I know it is proprietary, but wouldn’t the Apple T1 in the MacBook Pro Retina and T2 in the iMac Pro be good examples as well?

                                                                                                          https://en.wikipedia.org/wiki/Apple-designed_processors#Apple_T1

                                                                                                          tl;dr: the T1/T2 are separate ARM SoCs that act as a secure enclave and gatekeeper for the mic and FaceTime camera.

                                                                                                        1. 5

                                                                                                          Home: finish up a small treebank viewer in Rust and gtk-rs. It is primarily for my own use, to quickly browse through CoNLL-X dependency treebanks and dump trees in the Graphviz dot and TikZ-dependency formats for teaching, papers, etc. Since this was my first project with gtk-rs, I was surprised at how complete the gtk-rs and related bindings are. What I mildly dislike is that I had to litter Rc<RefCell>s everywhere, which is not commonly the case in my daily Rust code. Communicating between a worker thread and the GTK main thread is also kinda ugly [1]. And since Rust does not support inheritance, defining your own widgets is a bit cumbersome: typically, you wrap a Gtk+ widget and implement the Deref trait to provide the inner widget.
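                                                                                                          The wrap-and-Deref pattern looks roughly like this; a sketch with made-up names, assuming the gtk crate from gtk-rs (the real viewer carries more state):

                                                                                                              use std::ops::Deref;

                                                                                                              // Wrap a Gtk+ widget and hand out the inner widget via Deref: the
                                                                                                              // closest thing to widget inheritance that Rust gives you.
                                                                                                              struct TreebankWidget {
                                                                                                                  inner: gtk::DrawingArea, // the wrapped Gtk+ widget
                                                                                                                  // ... viewer-specific state (current tree, layout cache, ...)
                                                                                                              }

                                                                                                              impl Deref for TreebankWidget {
                                                                                                                  type Target = gtk::DrawingArea;

                                                                                                                  fn deref(&self) -> &gtk::DrawingArea {
                                                                                                                      &self.inner
                                                                                                                  }
                                                                                                              }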

                                                                                                          Anyway, given all the constraints, I think the gtk-rs people have really done a nice job!

                                                                                                          Work: teaching (this week’s program: the remaining parsing choices for dependency parsing and the implementation of CKY). Hopefully, I will have some time to work on a sequence-to-sequence learning project that I have started.

                                                                                                          [1] https://github.com/gtk-rs/examples/blob/30549d056f21eafba755f8084f3cb8a85e4f6736/src/bin/multithreading_context.rs#L50

                                                                                                          1. 1

                                                                                                            :D I filter the ‘javascript’ tag, but due to lucky circumstance I was not logged in. Thanks for the good laugh.

                                                                                                            1. 3

                                                                                                              Stability is often undervalued, especially when it comes to Desktop computers.

                                                                                                              I think stability is overvalued. I use FreeBSD -CURRENT, LineageOS snapshots, Firefox Nightly, LibreOffice beta, Weston master… and nothing is “buggy as hell”. Seems like developers just don’t write that many bugs these days :)

                                                                                                              1. 4

                                                                                                                In that case you must have been lucky. I remember using Arch some years ago, and then it “suddenly” gave up on me. Part of the reason was, of course, that there were incoherent contradictions between configuration files; some of them were my own fault, but others were due to updates. And I really liked updating Arch every day, inspecting what was new, what was better. It’s good for a while, in my experience, but if you don’t know what you’re doing, are too lazy to be minimalist, or just don’t have the time and energy to properly make sure everything is OK, it breaks. And it only gets worse the more edge cases you have, the more non-standard setups you need, and the more esoteric (or just unsupported/unpopular) hardware you have.

                                                                                                                I have similar experiences with Fedora and Debian unstable/testing, albeit to a lesser degree. Debian stable, with a few packages from testing, was a real “shock” in comparison, and while it was “boring”, it was, at least for me, less stressful and removed a certain tension. I would certainly have learned less about fixing broken X11 setups or configuring init systems if I had chosen it from the beginning, but eventually it is nice to be able to relax.

                                                                                                                1. 3

                                                                                                                  I agree. My Linux desktop history from 1994 onwards was roughly Slackware -> Debian -> Ubuntu -> CentOS -> Debian -> Fedora -> Arch. I didn’t find more cutting-edge distributions such as Fedora or a rolling distribution such as Arch to be less stable than the conservative (Slackware) and stable distributions (Ubuntu LTS, CentOS, Debian).

                                                                                                                  Moreover, I found that Fedora and Arch receive fixes for bugs far more quickly. When you report bugs upstream, the fixes typically land in Fedora/Arch within a few days or weeks, while in some conservative distributions it can take months or years. Besides that, the hardware support is generally better. E.g., the amdgpu driver works much better on my AMD FirePro than the older radeon driver, but it might literally take years for amdgpu support for Southern/Sea Islands cards to land in stable distributions.

                                                                                                                1. 3

                                                                                                                  On Photos:

                                                                                                                  But in terms of acting like a good Mac app, it does not.

                                                                                                                  Not just Photos. Apple also replaced iWork on the Mac with a codebase that is shared with the iOS version. Years later, it is still missing many features and is a poor imitation of its former self. I still regularly look for a feature I was sure was there, but that hasn’t survived the iOSification of iWork. For example, Pages was a great poor man’s DTP program. Unfortunately, the ‘upgrade’ removed linked text boxes, and it took them four years to bring back this functionality.

                                                                                                                  If Marzipan really materializes, I think macOS apps will definitely become an afterthought; the iOS market is many times larger, and it is economically simply too attractive to dump an iOS copy on macOS. Electron apps have shown that many companies go for the lowest common denominator for economic benefit.

                                                                                                                  1. 11

                                                                                                                    … seriously? They told us Wayland would learn from the past and avoid the shitty hacks that made X11 “unmaintainable.” Yet it’s 2017, and GNOME under Wayland performs worse than under X, has more stuttering and tearing, sometimes randomly crashes when I connect an external monitor, and we are celebrating a new hack. This is insane.

                                                                                                                    1. 4

                                                                                                                      Interesting, what video card and driver? I dread every time I have to use X.org [1], because Wayland is so much smoother and has visibly less stuttering. I agree that there are/were too many random crashes; the initial versions of mutter/gnome-shell 3.26 would often fail assertions etc. on monitor-related events. They patched many of the issues, and it works fine for me now with the latest mutter in Arch (3.26.2+31+gbf91e2b4c-1).

                                                                                                                      Looking forward to the day I can switch to Sway though ;). Currently it scales up XWayland apps on HiDPI, which makes them very blurry (GNOME doesn’t).

                                                                                                                      [1] E.g. the Parallels VM doesn’t emulate a GPU with KMS support and only has an X.org driver.

                                                                                                                      1. 3

                                                                                                                        Oh yeah, blurry XWayland apps are also an issue in Weston. I’ve discussed this with the Weston devs, and it’s a hard problem. Dealing with X11 clients on HiDPI is a massive pain, especially in a multi-monitor world.

                                                                                                                        Thankfully, more and more apps can run natively, including complex ones like Inkscape, LibreOffice and Darktable.

                                                                                                                        Firefox though… Wayland support is being developed here and it’s finally getting upstreamed. It’s almost usable… almost. GL does not work yet (only software rendering) and on HiDPI it’s pretty screwed up (screen does not refresh correctly when you type/click).

                                                                                                                      2. 4

                                                                                                                        From what I’ve heard, GNOME’s mutter is uhhh not a very good compositor. But even mutter should NOT have any tearing or stuttering. Something is going very wrong on your machine.

                                                                                                                        I use Weston git master on FreeBSD 12-CURRENT (so much supposed “unstable” stuff, huh). It does not randomly crash and it’s incredibly fast and smooth. Heck, GTK3 and Qt5 applications have perfectly smooth resizing (which is something I’ve only seen with Cocoa apps on macOS before).

                                                                                                                        Less off-topic: this is NOT a “shitty hack”. This is a rather elegant fix for a non-trivial mistake in the reference protocol implementation library, libwayland. Asynchronous protocols are hard (but worth it).

                                                                                                                      1. 6

                                                                                                                        It’s nice to see that this vulnerability is fully mitigated in HardenedBSD with:

                                                                                                                        1. PaX ASLR
                                                                                                                        2. PaX NOEXEC
                                                                                                                        3. PIE
                                                                                                                        4. RELRO + BIND_NOW
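                                                                                                                        For anyone who wants to verify the userland side of this on a binary of their choice, readelf shows most of it (the path is just an example):

                                                                                                                            readelf -h /usr/local/bin/example | grep 'Type:'      # DYN suggests a PIE executable
                                                                                                                            readelf -l /usr/local/bin/example | grep GNU_RELRO    # RELRO program header present?
                                                                                                                            readelf -d /usr/local/bin/example | grep BIND_NOW     # together with RELRO: full RELRO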
                                                                                                                        1. 4

                                                                                                                          Asking as someone who does not actively follow FreeBSD anymore: why doesn’t FreeBSD have ASLR or use these changes from HardenedBSD?

                                                                                                                          1. 2

                                                                                                                            That’s a tough question, but I think it boils down to different priorities among FreeBSD developers and clashing personalities. I’m sure @lattera can speak about that.

                                                                                                                            1. 2

                                                                                                                              You’ll need to ask FreeBSD that question. I cannot and do not speak on their behalf.