1. 3

    Same CPU as in the Atomic Pi, selling for $35 (+PSU).

    1. 1

      I gather the Atomic Pi is a surplus board with a finite supply, whereas this seems like a product that is being actively produced.

      1. 1

        If you can actually purchase one for $39; I didn’t see anywhere you can get it. The Atomic Pi has the worst IO and power supply. I wish there were something cheaper and more available. The only model they have is $59 without a heat sink; check the Atomic Pi for that gigantic heat sink. Running x86 with passive cooling is either nearly impossible or requires a huge chunk of aluminum.

    1. 1

      I doubt TLS is mandatory in MQTT; I’ve seen implementations that fit in a microcontroller. If you don’t need encryption, don’t use it.

      1. 1

        They said it was the only way to have encryption, which is wrong. Since MQTT just passes around bytes, you can use whatever encryption you want on the messages you publish. TLS is one of the easiest options for key negotiation, and most MQTT implementations offer it, probably because TLS is so common everywhere else.
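
        A minimal sketch of that approach, assuming the paho-mqtt and cryptography packages; the broker host, topic, and key handling are placeholders. The transport stays plain MQTT, and only the published payload is encrypted:

        # pip install paho-mqtt cryptography
        import paho.mqtt.client as mqtt
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()  # in practice, a pre-shared key distributed out of band
        box = Fernet(key)

        client = mqtt.Client()  # paho-mqtt 1.x style client
        client.connect("broker.example.com", 1883)  # no TLS on the wire
        client.publish("sensors/temp", box.encrypt(b"21.5"))

        # a subscriber holding the same key recovers the plaintext with:
        #   box.decrypt(msg.payload)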

        1. 1

          To be fair, AWS IoT forces TLS.

          1. 1

            That might be true but I’ve been using MQTT on and off for years and had never even heard of AWS IoT.

        1. 5

          What software actually uses Intel AMT? Like, what’s the management server software that controls corporate devices? I don’t do IT so I’ve never had a reason to know.

          Also, how do you choose to use the “consumer” version of Intel ME? That sounds like something I’d want to do. I’m aware of microcode updates but my intuition tells me those aren’t related.

          1. 3

            There are multiple ways to enable and configure AMT, but apparently you can just do it from the firmware setup screen, and then it hosts VNC and HTTPS access. A popular/recommended management system seems to be MeshCommander.
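
            For a sense of what that HTTPS access looks like, a hedged sketch of poking the AMT web UI, which conventionally listens on ports 16992 (HTTP) and 16993 (TLS) and uses HTTP digest auth; the host and credentials below are placeholders:

            import requests
            from requests.auth import HTTPDigestAuth

            # AMT's embedded web server answers on 16992 (HTTP) or 16993 (TLS)
            resp = requests.get(
                "http://192.0.2.10:16992/",
                auth=HTTPDigestAuth("admin", "your-mebx-password"),
            )
            print(resp.status_code)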

            1. 2

              That’s the right question. Very few computers can be remotely controlled via AMT, yet the firmware is active on almost all of them.

              A notable exception I’ve seen is HP Z-series workstations, which are IN THEORY remotely controllable via the AMT WebGUI.

              It feels like Intel is charging vendors for fully enabling AMT; I have no other explanation for why it’s so uncommon.

              1. 2

                Also, how do you choose to use the “consumer” version of Intel ME?

                With most manufacturers, all you can do as a buying customer is choose a device where the Intel CPU does not have vPro.

                vPro and AMT (Active Management Technology) mean the same thing and are the part of the Intel ME (Management Engine) that makes remote administration possible. There are many other parts of the ME that are present in every Intel CPU, regardless of whether it has vPro or not.

                Example: the Lenovo X1 Carbon 2017 is/was available with an i5 CPU with vPro and without vPro. This is also reflected in the CPU model number: an Intel i5-8265U is without vPro, and an Intel i5-8365U is with vPro.

                If you want to go further, there are ways to disable almost all of the ME’s functionality, or even remove the relevant binary code from the firmware before flashing it back into your device’s firmware storage chip. Unfortunately this procedure is no easy task: the necessary steps are highly dependent on your specific device, and you could maneuver yourself into a situation where you don’t know how to recover your bricked device anymore. But it’s much more doable than it was a few years ago.

                Lastly, there are a handful of manufacturers that disable as much of the ME’s functionality as possible by default (basically with the same procedure as mentioned above) and replace some of the necessary functionality with open source firmware (usually Coreboot). Purism and System76 are two examples of such manufacturers.

                See my other comment about an overview talk regarding the Intel ME for further research.

              1. 1

                If they are vulnerable to plain, basic, well-known XSS like this, I wonder what they are using to render the web pages, because every modern language/framework covers this by default.

                Is this PHP, with security practices ignored? CGI? Nothing else even comes to mind.
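
                For reference, the class of fix every modern framework applies by default, sketched in plain Python:

                from html import escape

                user_input = '<script>alert(1)</script>'
                unsafe = f"<p>Hello {user_input}</p>"          # reflected XSS
                safe   = f"<p>Hello {escape(user_input)}</p>"  # &lt;script&gt;... is inert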

                1. 2

                  Note that this was found and fixed a number of years ago.

                  1. 1

                    Every team has their own bit IIRC.

                    Of course, that means that if any one team messes up, the whole thing is vulnerable.

                  1. 2

                    The curse of VLIW/explicit parallel architecture is that “smart enough” compilers never appear. Itanium taught a valuable lesson.

                    1. 3

                      I wonder how true that is now - LLVM packs an insane amount of intelligence into a compiler.

                      The problem, I thought, was more trying to extract parallelism blood from mostly serial general purpose computing stones.

                      1. 5

                        LLVM has one VLIW target: Hexagon. It’s a DSP, so most of the code that targets it is written by people who are happy to tweak the source to get good performance.

                        VLIW architectures are difficult for compilers. Compilers work on basic blocks, which are sequences of branchless instructions. It’s fairly trivial for a compiler to create VLIW instructions that don’t have data dependencies within a basic block. There are only two problems:

                        • Coming from C, the average length of a basic block is 7 instructions.
                        • There are often data dependencies between them.

                        To get really good performance out of VLIW, you need to do the same kind of things that you need for autovectorisation. In particular, you need to do good predication so that you can move instructions between basic blocks and execute them speculatively and discard the results if they’re not needed. That, unfortunately, removes a lot of the power advantages of VLIW.

                        VLIW architectures do work reasonably well as JIT targets, where you can build traces of common-path instruction streams and optimise those, running more slowly on cold paths. The most widespread general-purpose VLIW chips are nVidia’s ARM cores, which have a VLIW pipeline, a state machine that translates ARM instructions to inefficient VLIW instructions, and a small JIT compiler that takes hot code paths and generates efficient VLIW sequences (with side exits if you leave the hot path). The nVidia VLIW design is quite unusual because the long instructions are slightly offset, so each instruction can accept output of the earlier ones in the same bundle as its input without going via register renaming. That’s quite similar to an EDGE architecture in some ways.

                        1. 1

                          the average length of a basic block is 7 instructions.

                          Interesting. Is there a (publicly available) source for this?

                          1. 2

                            I came across this heuristic in one of the early Berkeley RISC papers (as one of the motivations behind the RISC design). I wondered if it had changed, so I set the first assignment in the compilers course that I used to teach to test it. Students had to find a bit of source code that they thought was interesting and then modify the compiler to collect these statistics and present a histogram of basic block sizes (just to get them comfortable with hacking on LLVM, before they did anything difficult - this filtered out the students who would not be able to hack on a large existing C++ codebase). They pretty much all found that, whatever codebase they picked, the numbers were about the same.
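
                            (A rough sketch of the same idea outside LLVM proper: approximating basic-block sizes straight from textual IR. A real pass walks llvm::BasicBlock; this merely counts instruction lines between labels, so treat the numbers as ballpark.)

                            import sys
                            from collections import Counter

                            sizes = Counter()
                            count = 0
                            for line in open(sys.argv[1]):  # a .ll file from clang -S -emit-llvm
                                s = line.strip()
                                if s.startswith("define") or s.endswith(":"):
                                    if count:
                                        sizes[count] += 1  # a label closes the previous block
                                    count = 0
                                elif s == "}":
                                    if count:
                                        sizes[count] += 1  # end of function closes the last block
                                    count = 0
                                elif s and not s.startswith((";", "declare", "attributes", "!")):
                                    count += 1  # roughly: one instruction per remaining line

                            for size in sorted(sizes):
                                print(f"{size:3d} {'#' * sizes[size]}")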

                        2. 4

                          LLVM overfits to C semantics and current architectures.

                          Rust can’t even tell LLVM its complete pointer-aliasing information (beyond the minimum C has), because those codepaths in LLVM aren’t battle-tested and are too buggy to use. And aliasing information is a basic requirement for VLIW.

                          It’s a chicken-egg problem. I’m sure that if there was a big push for VLIW support, LLVM could be made more suitable for it, but it’s nowhere near there yet. Autovectorization barely works.

                      1. 9

                        GUIs are superior sometimes. One aspect: better feature discovery, and less learning about how changing one thing affects the others.

                        1. 9

                          UI discoverability is orthogonal to {G,C}UI IME. I’ve seen great documentation come out of --help and man pages, and I’ve seen baroque and opaque GUIs.

                          1. 16

                            UI discoverability is orthogonal to {G,C}UI IME.

                            I don’t think this is true. The default for GUI is a bunch of buttons you can click to see what happens. The default for CLI is a blank screen. Using a manual (--help) as a counter example isn’t arguing for CLI, but arguing for good manuals.

                            When you are in a French bakery, pointing at a croissant is using a GUI, and trying to say things randomly hoping to make sense is using a CLI. Clearly one is superior to the other, both in immediate usability and in discoverability.

                            1. 4

                              The default for GUI is a bunch of buttons you can click to see what happens.

                              Not necessarily, for three reasons:

                              1. Menus. GUIs tend to have menus, and menus are not discoverable unless you and the developers spontaneously develop the same ontology and, therefore, classification scheme for the program’s functionality. Conventions help, but conventions apply to CLI programs, too.
                              2. Buttons need to be labeled somehow, and iconic labels tend to verge on the pictographic. Even textual labels can be hard to decipher, but the trend is towards images, which are smaller and (supposedly) don’t need to be translated for different audiences.
                              3. The rise of mobile UIs as the new default for GUIs means fewer buttons and more gestures, and gestures aren’t signposted. Can I pinch something here? Do I need to swipe? Is this operable now? Do I need to long-touch it or caress it in some other fashion? It might be good for you, but it’s remarkably inscrutable as opposed to discoverable.

                              1. 12

                                and menus are not discoverable

                                I disagree here. Going through the menus to see what options you have is a pretty common thing to do.

                                The increase of mobile UIs as being the new default for GUIs

                                But mobile GUIs are not discoverable, in the same way CLIs are not discoverable. There are no buttons, and you are supposed to know what to do (the right incantation or the right gesture). But even then, swipe and pinch are more intuitive than shlib -m -t 500

                                1. 2

                                  I disagree here. Going through the menus to see what options you have is a pretty common thing to do.

                                  Every user I have interacted with never looks through the menus. I can tell you for certain that nine out of every ten people I work with are unaware the menu bar exists for the majority of programs they use, and if they are aware of its existence, it’s only to click something they have been shown how to use. They rarely venture far from that well-trodden path, and it’s frustrating, as a little bit of curiosity would have answered their question before they asked it.

                                2. 2

                                  #3 is the big one for me. I have no idea how to use most mobile apps. Whereas with a well-designed CLI I can mash TAB and get completion at its repl or through its shell completion support, and work out what the thing can do.

                                  1. 4

                                    Leave out “well-designed” and you’re back in pinch-bash-swipe land on your CLI as well.

                                3. 2

                                  A GUI is immediately usable: you can see buttons and other widgets, and with the mouse you can point and click. However, for GUIs as they are mostly implemented in today’s world, the advantage stops there. There is no next step where you can improve your mastery, except perhaps your mastery of mouse precision and keyboard shortcuts. With CLIs as they are today, there is a steep curve to getting started; however, because of certain properties of the CLI, as the article mentions, once you have internalized those key principles, the sky is the limit.

                                  GUIs are excellent for basic usage but don’t allow any kind of mastery. CLIs are tough to get started with but allow all kinds of mastery. That’s a key distinction. However, I don’t think those properties are inherent to “GUIs” and “CLIs”; it’s just inherent in the way they have been implemented in our current software landscape.

                            1. 6

                              BazQux; it’s not free, though.

                              1. 1

                                I will probably move to them soon for their FB support.

                              1. 2

                                at uber… we’ve been preaching microservices since 2018.

                                1. 11

                                  It’s been since 2015, and I would hesitate to call it “preaching”: it’s been sharing over the eng blog what worked, and what did not work, for Uber at the time.

                                  Uber went from monolith to SOA in 2015. This SOA followed a microservice-based architecture. Different engineers have been sharing what they’ve learned along the way: the steps it usually takes to build a microservice, addressing testing problems with a multi-tenancy approach, or how and why teams use distributed tracing. We also open sourced some of our tools, like Jaeger, which is one of the Cloud Native Computing Foundation’s graduated projects, alongside Kubernetes and Prometheus.

                                  I’ve not seen anyone preaching, meaning anyone wanting to convince or convert anyone else. I personally tell people “here’s what we do, but your mileage will very likely vary”. I’ve always found it interesting to understand how other companies address their challenges and what worked and why.

                                  Also, you don’t need to look far to hear all sorts of different things that work for other companies - some that might seem unconventional for companies of their size or traffic. Shopify shared how they are still a monolith, albeit a modularized one. Stack Overflow shared how in 2013 they ran on a lean hardware stack, one that had scaled up by 2016 but still included zero usage of the cloud. And the list goes on.

                                  All of these can serve as inspiration, but at the end of the day you need to make decisions in your environment that you think will work best. Anyone copying the likes of Google, Uber, Shopify, Stack Overflow or others when they’re not even similar in setup will be disappointed.

                                  1. 3

                                    Anyone copying the likes of Google, Uber, Shopify, Stack Overflow or others when they’re not even similar in setup will be disappointed.

                                    That is exactly what everybody around me is doing. Nobody knows the way to success, so they are copying the behaviors of famous successful companies.

                                    Since I cannot edit my original comment, I want to walk back my tone. It’s not like I have anything against people at Uber. It’s just that in too many conference talks I see engineers describe how microservices solve problems while forgetting to mention the new problems that appear, and never retroactively admitting that microservices were a mistake.

                                    1. 2

                                      What problems did moving from a monolith solve for y’all? Were they more people problems or technical ones?

                                    2. 2

                                      Exactly. :-)

                                      For context, I was sweeping a set of around 150 processes running on a good dozen machines into a single JAR back in 2003, with the huge performance increases and reliability/deployment improvements you might expect.

                                    1. 1

                                      MS really dropped the ball with “Virtualization Based Security”. First and foremost, it’s enabled silently and is HARD to disable (editing the BCD, really?). Second is what this article mentions: it disables all non-Hyper-V virtualization on the host. You don’t even need to run Hyper-V; all other virtualization solutions simply stop working on recent Windows 10 out of the box. More annoyingly, some third-party projects (docker-for-windows) require Hyper-V.
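
                                      (For reference, the BCD edit in question is roughly “bcdedit /set hypervisorlaunchtype off” from an elevated prompt plus a reboot; the exact steps vary by Windows build, and VBS can additionally be locked on by UEFI or Group Policy.)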

                                      1. 1

                                        What gets me is the lack of nested virtualisation.

                                        OK, Hyper-V is their solution to a heap of problems. If I can run a Linux VM in VirtualBox in a Windows VM in Parallels on a Mac (or similarly run Hyper-V in the Windows VM), why can’t Microsoft make it work for Hyper-V?

                                        I don’t have to deal with windows much these days but occasionally I need to provide advice/support/tooling that covers some windows using developer.

                                        Stuff like this is why I’m skeptical of how realistic the “I use Windows + WSL and it’s all sunshine and lollipops” claims are.

                                      1. 1

                                        To undercut IBM Model M snobs and to point out that you don’t need a fancy keyboard for comfortable typing: the Lenovo 73p5220. No, it’s not mechanical, yet it’s similar to what I’ve been typing on all my life.

                                        TBH, at work I type on a Das Keyboard 4, but that’s only because my predecessor left it. I don’t feel any comfort in mechanical switches, nor do my colleagues, but I kinda appreciate the volume dial.

                                        1. 3

                                          Hm, ‘Model M snob’ sounds a bit odd given the wave of expensive suggestions people are coming up with here. Meanwhile, I’ve been using my Model M for about 27 years now - going from plugging it in with the full-size DIN plug, to a PS/2 adapter, to a PS/2 adapter combined with a USB adapter - without looking back. Whether any of the fancy brands suggested elsewhere in this thread will survive that long remains to be seen.

                                          1. 1

                                            Agreed; I have two Model Ms, an SSK I use at home and a full-size at work. My coworkers got used to the clicking because it sounds amazing! I have been using both these keyboards for decades and they are still working like the day I first received them. It is not snobbery; I enjoy a piece of hardware that feels like an industrial workhorse and lasts. I have never sat down at one of my Model Ms and asked what is wrong with this keyboard. The same cannot be said for just about every other keyboard I have used for any amount of time.

                                        1. 2

                                          What is your favorite database-embedded language? I am quite displeased with the syntax of PL/SQL in every DBMS. What’s your opinion on PL/Python? Is it easy to debug and version PL/* scripts?

                                          1. 5

                                            I’ve used Postgres with PL/* a lot in the past. Depending on the exact implementation of the extension and how the language works, it can be cumbersome and inefficient to use anything other than PL/pgSQL.

                                            For this reason I would start out with PL/pgSQL. For simple functions it will be okay; for more complex functions you really should read up on the limitations of the particular language. Be extremely careful: with a database like PostgreSQL one is used to extreme reliability, with data and access to data being very safe. This of course changes when you are interfacing with a complete programming language.

                                            I agree, PL/SQL is cumbersome, but overall it is still a good starting point until you really have a reason to switch. It’s also good advice not to switch away just because you feel more confident in another language: you will usually end up using a very basic subset of it, and most of the experience that counts is in handling the interface to the database in a secure (as in not messing up data) and efficient (so you don’t block too long) way.

                                            Depending on what exactly you are doing this for, and especially if you want to dig deeper, it can make sense to look into creating your own extensions.

                                            Regarding debugging: from my experience, things don’t get easier if you switch languages, again because a lot of the work is around interfacing with the language. Compared to normal operation (plain SQL), it’s easy to completely lock up your database, especially with bigger or less widely used PL/* languages.

                                            One last thing worth mentioning: do have a look at what a database like PostgreSQL offers you on the SQL side. I’ve seen so many cases where people add complexity and sacrifice performance and data integrity by relying only on ORMs and the most basic queries that work well with them. As soon as things become a bit more complex, it’s really handy to know what your DBMS offers you. Views, WITH queries (Common Table Expressions), dealing with and indexing JSON, and virtual and computed columns can really make your life easier without many of those sacrifices. Sometimes these are better options than moving computations out of the database, creating complex functions, or adding columns that are actually redundant as a workaround.
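
                                            To make that concrete, a hedged sketch of the kind of SQL meant here; the table and column names are invented, and generated columns need PostgreSQL 12+:

                                            -- index JSONB so containment queries don't seq-scan
                                            CREATE INDEX idx_events_payload
                                                ON events USING gin (payload jsonb_path_ops);

                                            -- a computed column instead of a redundant, manually maintained one
                                            ALTER TABLE events ADD COLUMN kind text
                                                GENERATED ALWAYS AS (payload->>'kind') STORED;

                                            -- a CTE keeps the more complex query readable
                                            WITH recent AS (
                                                SELECT * FROM events
                                                WHERE created_at > now() - interval '1 day'
                                            )
                                            SELECT kind, count(*) FROM recent GROUP BY kind;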

                                            1. 4

                                              If you’re asking me as the author of the piece, unfortunately I haven’t had the opportunity to dive into embedded database languages. The idea that the database is a dumb store is quite deeply embedded in the companies I have worked for. Part of the reason I wrote this was to sway people.

                                              Looks like sivers is the person you want to ask! :)

                                              1. 3

                                                I basically agree with @reezer. I’ve used PL/Python quite extensively, for over a decade, and in most respects it’s AWESOME to have Python run against queries. But you have to be careful: putting Python (or any language) there gives you total access to both the language and the database. It’s entirely possible to have a table query run out to Python, have Python call some random HTTPS service, write a bunch of data out to disk, or call some other code somewhere, all willy-nilly; if you do those things, it will probably end up becoming very, very painful. Just because you can do something doesn’t mean you should.

                                                We haven’t had any issues with debugging, really. You can test all of the logic portions outside of the DB (using your normal testing practices), and then it’s just the interface, which is quite stable, so once you understand it, it’s pretty hard to screw up.

                                                PL/pgSQL is definitely a great choice, as it doesn’t include the entire kitchen sink, so it’s harder to screw up. That said, almost all of our stuff is PL/Python: since our main product using the DB is written in Python, it’s just easier to keep it all together in Python. Overall it’s been great.
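
                                                For flavor, a minimal sketch of what a PL/Python function looks like, assuming the plpython3u extension is available; the function itself is made up:

                                                CREATE EXTENSION IF NOT EXISTS plpython3u;

                                                CREATE FUNCTION normalize_email(addr text) RETURNS text AS $$
                                                    # plain Python runs here; keeping it pure makes it
                                                    # testable outside the database too
                                                    return addr.strip().lower()
                                                $$ LANGUAGE plpython3u;

                                                SELECT normalize_email('  Foo@Example.COM ');  -- foo@example.com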

                                                Definitely use a schema version-control system; for us that’s Liquibase, but there are many other solutions out there. Liquibase has been fine for us, with no reason to move off it in over a decade.

                                              1. 37

                                                At my former employer, for a time I was in charge of upgrading our self-managed Kubernetes cluster in-place to new versions and found this to eventually be an insurmountable task for a single person to handle without causing significant downtime.

                                                We can argue about whether upgrading in-place was a good idea or not (spoiler: it’s not), but it’s what we did at the time for financial reasons (read: we were cheap) and because the nodes we ran on (r4.2xl if I remember correctly) would often not exist in a quantity significant enough to be able to stand up a whole new cluster and migrate over to it.

                                                My memory of steps to maybe successfully upgrade your cluster in-place, all sussed out by repeated dramatic failure:

                                                1. Never upgrade more than a single point release at a time; otherwise there are too many moving pieces to handle
                                                2. Read change log comprehensively, and have someone else read it as well to make sure you didn’t miss anything important. Also read the issue tracker, and do some searching to see if anyone has had significant problems.
                                                3. Determine how much, if any, of the change log applies to your cluster
                                                4. If there are breaking changes, have a plan for how to handle the transition
                                                5. Replace a single master node and let it “bake” as part of the cluster for a sufficient amount of time not less than a single day. This gave time to watch the logs and determine if there was an undocumented bug in the release that would break the cluster.
                                                6. Upgrade the rest of the master nodes and monitor, similar to above
                                                7. Make sure the above process(es) didn’t cause etcd to break
                                                8. Add a single new node to the cluster, monitoring to make sure it takes load correctly and doesn’t encounter an undocumented breaking change or bug. Bake for some day(s).
                                                9. Drain and replace the remaining nodes, one at a time, over a period of days (a rough sketch of this loop follows after this list), allowing the cluster to handle the changes in load over this time. Hope that all the services you have running (DNS, deployments, etc.) can gracefully handle these node changes. Also hope that you don’t end up in a situation where 9/10 of the nodes’ services are broken but the one remaining original service is silently picking up the slack, so nothing fails until the last node gets replaced, at which point everything fails at once, catastrophically.
                                                10. Watch all your monitoring like a hawk and hope that you don’t encounter any more undocumented breaking changes, deprecations, removals, and/or service disruptions, and/or intermittent failures caused by the interaction of the enormous number of moving parts in any cluster.
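
                                                As referenced in step 9, a hypothetical sketch of that drain-and-replace loop, driving plain kubectl from Python via subprocess; the actual instance replacement happens out of band through your cloud APIs, and kubectl flags varied across the versions in question:

                                                import subprocess, time

                                                BAKE_SECONDS = 24 * 3600  # let each replacement soak for at least a day

                                                nodes = [n.split("/")[-1] for n in subprocess.check_output(
                                                    ["kubectl", "get", "nodes", "-o", "name"], text=True).split()]

                                                for node in nodes:
                                                    subprocess.run(
                                                        ["kubectl", "drain", node, "--ignore-daemonsets"],
                                                        check=True)
                                                    # ...replace the underlying instance out of band here...
                                                    subprocess.run(["kubectl", "uncordon", node], check=True)
                                                    time.sleep(BAKE_SECONDS)  # watch monitoring during the bake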

                                                There were times that a single point release upgrade would take weeks, if not months, interspersed by us finding Kubernetes bugs that maybe one other person on the internet had encountered and that had no documented solution.

                                                After being chastised for “breaking production” so many times despite meticulous effort, I decided that being the “Kubernetes upgrader” wasn’t worth the trouble. After I left, it seems that nobody else was able to upgrade successfully either, and they gave up doing so entirely.

                                                This was in the 1.2-1.9 days, for reference, so things may be much better now, though I’d be very surprised.

                                                1. 33

                                                  tl;dr: if you can’t afford 6+ full-time people to babysit k8s, you shouldn’t be using it.

                                                  1. 13

                                                    Or, at least, not running it on-prem.

                                                    1. 6

                                                      True, if you outsource the management of k8s, you can avoid the full-time team of babysitters, but that’s true of anything. You then have the outsourcing headache(s) instead, not to mention the cost (you still need someone responsible for the contract and for interacting with the outsourced team).

                                                      Outsourcing just gives you different, and if you selected wisely, less, problems.

                                                      1. 5

                                                        True dat. But every solution to a given problem has trade-offs. Not using Kubernetes in favour of a different orchestration system will also have different problems. Not using orchestration for your containers at all will give you different problems (unless you’re still too small to need orchestration, in which case yes you should not be using k8s). Not using containers at all will give you different problems. ad infinitum :)

                                                        1. 6

                                                          Most companies are too small to really need orchestration.

                                                          1. 2

                                                            Totally!

                                                    2. 2

                                                      I keep having flashbacks to when virtualization was new and everyone was freaking out over Xen vs. KVM vs. VMware and how to run their own hypervisors. Now we just push the Amazon or Google button and let them deal with it. I’ll bet in 5 years we’ll laugh about trying to run our own k8s clusters in the same way.

                                                      1. 7

                                                        Yeah, this is the kind of non-value-added activity that just begs to be outsourced to specialists.

                                                        I have a friend who works in a bakery. I learned the other day that they outsourced a crucial activity to a contractor: handling their cleaning cloths. Every day, a guy comes to pick up a couple of garbage bags full of dirty cleaning cloths, then drops off the same number of bags full of clean ones. This is crucial: one day the guy was late, and the bakery staff had trouble keeping the bakery clean. The owner lived upstairs and used his own washing machine as a backup, but it could not handle the load.

                                                        But the thing is: while the bakery needs this service, it does not need it to differentiate itself. As long as the cloths are there, it can keep on running. If the guy stops cleaning cloths, he can be trivially replaced with another provider, with minimal impact on the bakery. After all, people don’t buy bread because of how the dirty cloths are handled. They buy bread because the bread is good. The bakery should never outsource its bread making. But the cleaning of dirty cloths? Yes, absolutely.

                                                        To get back to Kubernetes and virtualization: what does anyone hope to gain by doing it themselves? Maybe regulation requires it. Maybe there is some special need. I am not saying it is never useful. But for many people, the answer is often: not much. Most customers will not care. They are here for their tasty bread, a.k.a. getting their problem solved.

                                                        I would be tempted to go as far as saying that maybe you should outsource one level higher and not even worry about Kubernetes at all: services like Heroku or Amazon’s Elastic Beanstalk handle the scaling and a lot of other concerns for you with a much simpler model. But at that point you are tying yourself to a provider, and that comes with its own set of problems… I guess it depends.

                                                        1. 2

                                                          This is a really great analogy, thank you!

                                                          1. 2

                                                            It really depends on what the business is about: tangible objects or information. The baker’s cloths, given away to a third party, do not include all the personal information of those buying bread. Nor do they include business-critical information such as who bought bread, what type, and when. That would be bad in general, and potentially a disaster if the laundry company were also in the bread business.

                                                            1. -7

                                                              Gosh. So many words to say “outsource, but not your core competency”.

                                                              1. 1

                                                                Nope. :) Despite my verbosity, we haven’t managed to communicate. The article says: do not use things you don’t need (k8s). If you don’t need it, there’s no outsourcing to do. Outsourcing has strategic disadvantages when it comes to your users’ data, entirely unrelated to whether running an infra is your core business or not. I would now add: avoid metaphors comparing tech and the tangible world, because you end up trivializing the discussion and missing the point.

                                                        2. 3

                                                          As a counterpoint to the DIY k8s pain: We’ve been using GKE with auto-upgrading nodes for a while now without seeing issues. Admittedly, we aren’t k8s “power users”, mainly just running a bunch of compute-with-ingress services. The main disruption is when API versions get deprecated and we have to upgrade our app configs.

                                                          1. 2

                                                            I had the same problems with OpenStack :P If it works, it’s kinda nice. If your actual job is not “keeping the infra for your infra running”, don’t do it.

                                                          1. 5

                                                            Meanwhile: https://news.ycombinator.com/item?id=22479178 SETI@home shuts down after 21 years

                                                            In an announcement posted yesterday, the project stated that they will no longer send data to SETI@home clients starting on March 31st, 2020 as they have reached a “point of diminishing returns” and have analyzed all the data that they need for now.

                                                            1. 2

                                                              This is really sad news, but I’m glad it inspired lots of people. I know it inspired me.

                                                            1. 2

                                                              I’m staying at home because I’ve got something. Maybe a cold, maybe the flu. Or maybe something I don’t have any budget to diagnose. The remedy is the same for all of these viruses.

                                                              1. 3

                                                                If you’re sure it’s a virus, that might be reasonable, but bacterial infections can be very dangerous without antibiotics.

                                                              1. 1

                                                                How is it nonblocking? Bounded queues, as always?

                                                                1. 3

                                                                  Looks like it. On a quick glance, with the same question in mind, it seems to use a channel for each log level and then merges them out to the destination.
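
                                                                  In the same spirit, a minimal sketch of that pattern in Python; the names are made up, not the library’s API:

                                                                  import queue, sys, threading

                                                                  # one bounded queue per level, merged by a single writer
                                                                  qs = {lvl: queue.Queue(maxsize=1024)
                                                                        for lvl in ("debug", "info", "error")}

                                                                  def log(level, msg):
                                                                      try:
                                                                          qs[level].put_nowait(msg)  # never blocks the caller
                                                                      except queue.Full:
                                                                          pass  # drop instead of blocking when saturated

                                                                  def writer():
                                                                      # busy merge loop; a real implementation would block
                                                                      # on a combined channel instead of spinning
                                                                      while True:
                                                                          for lvl, q in qs.items():
                                                                              try:
                                                                                  sys.stderr.write(f"[{lvl}] {q.get_nowait()}\n")
                                                                              except queue.Empty:
                                                                                  pass

                                                                  threading.Thread(target=writer, daemon=True).start()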

                                                                  1. 2

                                                                    There is minikube, if you are into the k8s type of thing on a single host.

                                                                    Special shout-out: this article details the steps taken, rather than “run this single shell script” to get the cluster running. There’s too much (auto)magic in the other k8s how-tos.

                                                                    1. 1

                                                                      Thanks! We feel the same way! Minikube was too much magic for us, and that’s why we did this write-up.

                                                                      IMO kubeadm is very close to too much magic too.

                                                                      1. 1

                                                                        Arguably the “no magic” approach is simply Kubernetes the Hard Way from Kelsey Hightower.

                                                                        1. 1

                                                                          Yep! That’s why we wrote https://github.com/alta3/kubernetes-the-alta3-way - the same approach, but using Ansible and not tied to Google Cloud.

                                                                    1. 5

                                                                      What kind of GUI? What controls do you need? Where will it run?

                                                                      1. 4

                                                                        Notebook/REPL/IDE-type thing. Desktop platforms (linux primarily, but should work with all three).

                                                                        1. 5

                                                                          If you don’t mind non-native widgets, I’d suggest looking at ReveryUI. Check out OniVim for an example application built with it.

                                                                          1. 2

                                                                            I thought (without having any experience with Revery) that a large part of the appeal was that it compiled to native code? Or by “non-native widgets”, do you mean something else?

                                                                            I’ve been looking at Revery for a project I’ve been sketching out; it seems like a very nice option. If you have any experience building things with it, I’d be interested to hear what you think - one small downside to adopting the framework now is that it seems not much other than Oni2 has been built with it yet (and so there’s more concern about documentation, stability, etc.)

                                                                            1. 2

                                                                              I thought (without having any experience with Revery) that a large part of the appeal was that it compiled to native code? Or by “non-native widgets”, do you mean something else?

                                                                              You’re right that it does compile to native code, and that’s one of its great benefits.

                                                                              Some people prefer their apps to have a “native” feel, by which I mean buttons, inputs, windows, and other UI elements follow the operating system’s design. This is not what you get with Revery; each app will mostly look the same on every platform.

                                                                              I’ve been looking at Revery for a project I’ve been sketching out; it seems like a very nice option. If you have any experience building things with it, I’d be interested to hear what you think.

                                                                              At the moment I’ve mostly been playing around with tiny projects while I learn ReasonML. I do really like it, but I don’t yet have the experience with a large project, releasing, or long-term maintenance to give all the disadvantages.

                                                                              One negative I can give is that I feel I’ve had to learn React to learn ReasonReact and Revery. As someone who mostly used Vue before rather than React, this has added an extra hurdle.

                                                                            2. 1

                                                                              ReasonML is an absolute pain to set up on a new machine. I wish you luck if you follow that path.

                                                                              1. 3

                                                                                What setup are you using? It seems to be as easy as doing:

                                                                                sudo apk add reason

                                                                                on alpine

                                                                                1. 3

                                                                                  I didn’t come across any issues when I did it a few months ago. Granted, I’m on MacOS so I couldn’t say if this experience is different on Linux.

                                                                                  1. 2

                                                                                    Maybe I should try again. Last time I tried to install anything with ML on the bottom, it left junk all over my computer.

                                                                                  2. 1

                                                                                    Can you say more? I’ve been playing around with ReasonML and haven’t had any trouble getting going (at least, on Linux - I can’t speak to other OSes).

                                                                            1. 4

                                                                          Don’t implement AMP. Don’t link to AMP. Don’t provide AMP integrations and libs.

                                                                              1. 4

                                                                          Tape is very sensitive to humidity and temperature.

                                                                          Temperature: 16 °C to 25 °C (61 to 77 °F). Relative humidity: 20% to 50%.

                                                                                Data loss is quite possible, and real when something goes wrong with climate control in your data center.

                                                                          It’s not exactly fair to say that tape is more resilient than HDD storage unless the reliability of climate control is factored in.

                                                                          Also, I still remember the LTO-8 patent dispute, when LTO-8 tapes were impossible to obtain for years because the two vendors were suing each other and there was no competition to fill the gap in the market.