1. 3

    Beautiful! I did Prolog and Jess/CLIPS in the late 90s and IIRC I ran into exactly this issue as well. It also made me search for available online systems just now, and I stumbled upon this wonderful piece: https://swish.swi-prolog.org

    1. 1

      I read through this and I don’t feel like the article told me anything about K8s Application Operators. I expected something about custom controllers or meta pods that manage other pods, but that wasn’t in the article.

      1. 1

        Maybe you have a different view on the role of an application operator? I’ve linked to the definition I’m subscribing to (stemming from Helm) in the first paragraph. It could be that you’re thinking of the term “operator” (roughly: a CRD + custom controller) that CoreOS established. If so, then indeed, the article has nothing to do with it. ;)

      1. 1

        Really like the idea, but I think it’s still too complex. People want something Heroku-like, with the power of k8s.

        PS: I think it would be better to link to the homepage of the project than to the video (which is embedded on the homepage).

        1. 1

          Thanks a lot, and agreed re: complexity, but it’s a start ;)

          Yeah, homepage would have been better, but it seems I can’t change it anymore, I’m afraid :(

        1. 2

          Very nice! FWIW, I put a quick walkthrough together, in case anyone wants to check it out …

          1. 1

            Why mock if you can have the real thing? Containers are here to help you test the real thing across environments.

            Because mocking is WAY faster. It totally reminds me of this talk https://www.youtube.com/watch?v=bNn6M2vqxHE on Ruby on Rails testing.

            1. 1

              Yes, I mentioned this as a drawback.

              1. 1

                I don’t think it’s fair to call this a “characteristic” with “a little overhead”.

                Two reasons why it might not be realistic:

                • microservices (having 100+ Docker images to fetch/launch/… each time you want to test things)
                • it’s damn slow. When I test I expect really quick feedback. A mocked test runs in milliseconds, a container-based one in seconds… Multiply this by hundreds of tests and it’s a huge load on CI servers, and something impossible on my laptop…

                I like the idea of not having to mock, but alas, I’ll keep doing it.

                1. 1

                  I think you’re raising a good point. I’ll update the post with this concern, or, if you prefer and don’t mind, you could comment on the Medium post yourself? In any case I’d like to thank you and credit you for it …

              2. 1

                Mocking also allows me to test in isolation, which is the whole point of a unit test: I want to test a single, specific unit, that way if it doesn’t behave as expected, I know why (or at least, I’ve narrowed it down). Testing interactions with etcd with a real instance of etcd introduces all sorts of complexities that make it difficult to reason about the unit test.

                Integration tests, sure, whatever, have at. The closer your integration tests get to a real-world environment, the better.
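
                To make this concrete, here’s roughly the shape I have in mind: a minimal Go sketch where the unit depends on a narrow interface instead of a live etcd. (The KV interface, fakeKV, and RenameKey are invented for illustration; they’re not from the post.)

                ```go
                package config

                import "testing"

                // KV is the narrow boundary the unit depends on. In production it's
                // backed by a real etcd client; in unit tests, by an in-memory fake.
                type KV interface {
                    Get(key string) (string, error)
                    Put(key, value string) error
                }

                // fakeKV is an in-memory stand-in for etcd.
                type fakeKV map[string]string

                func (f fakeKV) Get(key string) (string, error) { return f[key], nil }
                func (f fakeKV) Put(key, value string) error    { f[key] = value; return nil }

                // RenameKey is the unit: copy a value to a new key. Its logic is
                // what we want to verify, independent of etcd's behavior.
                func RenameKey(kv KV, from, to string) error {
                    v, err := kv.Get(from)
                    if err != nil {
                        return err
                    }
                    return kv.Put(to, v)
                }

                func TestRenameKey(t *testing.T) {
                    kv := fakeKV{"old": "42"}
                    if err := RenameKey(kv, "old", "new"); err != nil {
                        t.Fatal(err)
                    }
                    if got := kv["new"]; got != "42" {
                        t.Fatalf("new = %q, want %q", got, "42")
                    }
                }
                ```

                If RenameKey misbehaves, the fault is in RenameKey itself, not in etcd, the network, or a container runtime.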

                1. 1

                  Hmmm. OK, here’s a question: does the fact that in my example the unit is all about directly reading from and writing to etcd change anything concerning your statement? I suppose I’m not sure I understand where you draw the line …

                  1. 1

                    I mean, if you’re crossing a boundary, it’s not a unit. Modularize better, because you’re apparently not doing it.

              1. 3

                So my unit tests assume I have Docker installed and running? Err…

                1. 1

                  Yup, that would be the idea, I’m afraid …

                1. 1

                  Nice, thanks for sharing. Why would I want to use it over spf13/viper?

                  1. 1

                    Thanks for sharing. Bit off-topic but I honestly don’t know where else to ask: how are tags created/proposed?

                    1. 2

                      People have suggested these in new threads they write. With enough support they might be added.

                      1. 2

                        See this post for an example. I don’t think there’s any canonical way of doing it (the meta tag is a good starting point though).

                        1. 1

                          Thanks a lot, both!

                      1. 2

                        Please don’t submit digests or collections of articles, especially when at least one of them has already been submitted by itself. Thanks!

                        1. 1

                          OK, wasn’t aware of this policy.

                        1. 1

                          That’s interesting, indeed! How does it compare to Flocker and what are the implications around ClusterHQ’s recent shutdown?

                          1. 6

                            Note that restoring infrastructure services from storage targets is NOT YET implemented.

                            That note seems like it should be more prominently displayed? I’m sure the restore is much, much harder (and more interesting) than the backup, but without it, why back up at all?

                            1. 1

                              Agreed that restore is hard, and believe me, this is my top priority for now.

                              1. 1

                                FWIW, a first implementation of restore is available now: http://burry.sh/#restores

                              1. 3

                                This article seems to imply that a core component of docker was not OSS until now; is that true, or is it just shoddy reporting?

                                1. 5

                                  From my novice investigation, it looks like containerd has been open source for a while, since it’s gotten PRs from non-Docker people:

                                  https://github.com/docker/containerd/pulls?q=is%3Apr+is%3Aclosed

                                  The article mentions moving containerd out of the Docker organization and into a neutral foundation, which the Docker blog reinforces, but TechCrunch definitely used “open sourcing”/“open sources” wrong =/

                                  https://blog.docker.com/2016/12/containerd-core-runtime-component/

                                  1. 2

                                    Before containerd, there was no single module that one could use to execute Docker images as containers (except rkt, which always seemed more like a proof of concept than a reliable tool). The capability was always there in the Docker code, just buried. Over time, they factored it out into a separate module and announced this back in April: https://blog.docker.com/2016/04/docker-containerd-integration/. I remember Michael Crosby talked about it at DockerCon back in June and was very excited, because we don’t like the Docker daemon, but we do want to run containers. Not exactly sure what the current “new release” is about, other than moving it out from under Docker and into the Open Container foundation?

                                    1. 2

                                      I suppose https://twitter.com/mreferre/status/809074712290689025 captures it best. Especially relevant in the context of Kubernetes (vs. rkt/AppC ;)

                                      1. 1

                                        I don’t understand that tweet – containerd was in the works well before the dustup around the Docker “fork”, which is what I presume the tweet was referencing.

                                        1. 1

                                          Hmmm, maybe it’s about the politics around CRI?

                                    1. 2

                                      Thanks for sharing! Serendipity I guess: just read http://cacm.acm.org/magazines/2016/11/209133-learning-securely/fulltext this morning. Love it!

                                      1. 4

                                        Genius! I really enjoyed watching it remotely. And, Bryan, if you ever wanna do that in Europe, happy to help ;)

                                        1. 11

                                          So, we’re trying to figure that out. Clearly, we’ve tapped into something here – but I imagine I don’t have to tell you that running a conference isn’t exactly a low-stress or low time-commitment endeavor. We also want to figure out what, exactly, constitutes the Systems We Love “brand” so we can franchise it out for others to run their own Systems We Love conferences. We also need to figure out sponsorships for future conferences; we at Joyent have no interest in making money off this, but losing less money would probably be nice. ;)

                                          Definitely welcome ideas on any/all of this! The one thing we feel we know is that there is clearly demand for this content – which itself is inspiring!

                                          1. 4

                                            Right. Yeah, as a co-chair of the European Data Forum 2013 and countless user groups I think I have a bit of an idea what it means. Suggest you have a look at the super successful Devoxx franchise here in Europe as well, as a reference. I’d immediately know two locations in Europe (incl. partner and logistics): CodeNode in London (with SkillsMatter) and Software Circus in Amsterdam (with Container Solutions) + I can certainly ask our friends at MSFT if they wanna support this as well.

                                            This is such an awesome format and it would be a really great thing to make this accessible in person for Europe. Happy to do whatever it takes to make it happen!

                                        1. 6

                                          I really like the idea of taking unix pipes and making them a distributed communication substrate.

                                          A pull does not remove a message from a dnpipes, it merely delivers its content to the consumer.

                                          How does one resume consumption after a failure and restart? It is a distributed system, after all.

                                          There also doesn’t seem to be a pushback mechanism, which I personally believe is really fundamental to getting pipes right. But I’m open to being shown I’m wrong. In distributed systems, though, a common failure case is overloading the other side with more work than it can keep up with.

                                          The reference implementation has some oddities too, like how do you send a message with the content RESET? Usually reference implementations are complete and correct but maybe not production ready.
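
                                          For instance, the RESET collision could be avoided by keeping control signals out of band, wrapping every record in an envelope. A sketch in Go (Envelope, Kind, and the names here are made up, not from the spec):

                                          ```go
                                          package dnpipes

                                          // Kind distinguishes control signals from ordinary data.
                                          type Kind int

                                          const (
                                              Data  Kind = iota // ordinary payload
                                              Reset             // control signal: reset the dnpipe
                                          )

                                          // Envelope wraps every record so that a payload that happens
                                          // to spell "RESET" can never be mistaken for the signal.
                                          type Envelope struct {
                                              Kind    Kind
                                              Payload []byte // opaque user bytes
                                          }

                                          // IsReset checks the out-of-band flag, never the payload bytes.
                                          func IsReset(e Envelope) bool { return e.Kind == Reset }
                                          ```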

                                          1. 2

                                            Both points you’re raising (failure, and a message with the content RESET) are very valid. Looks like I’ve got some more work to do ;)

                                            Ah, and @apy could you elaborate on ‘pushback mechanism’ please?

                                            1. 5

                                              Presumably he means backpressure.

                                              1. 4

                                                Yes, backpressure. In Unix pipes, if you write data faster than the other side can process it, you’ll eventually be stuck in a write call, or a non-blocking write will tell you to try again later. Your two options in a distributed system are to either queue data and hope you don’t run out of memory, or to have backpressure. The latter is generally considered the more robust solution. If you plan on making a fundamental dist sys component I’d suggest reading through some of the classic literature. Check out @aphyr’s reading list or any of the others that show up in a Google search.
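
                                                To show what that pipe behavior looks like, here’s a tiny Go sketch (purely a local demonstration, nothing distributed): a fast writer against a slow reader eventually stalls inside Write, and that stall is the backpressure.

                                                ```go
                                                package main

                                                import (
                                                    "fmt"
                                                    "os"
                                                    "time"
                                                )

                                                func main() {
                                                    // A kernel pipe has a fixed-size buffer (64 KiB on Linux).
                                                    r, w, err := os.Pipe()
                                                    if err != nil {
                                                        panic(err)
                                                    }

                                                    // Deliberately slow consumer: 1 KiB every 100 ms.
                                                    go func() {
                                                        buf := make([]byte, 1024)
                                                        for {
                                                            time.Sleep(100 * time.Millisecond)
                                                            if _, err := r.Read(buf); err != nil {
                                                                return
                                                            }
                                                        }
                                                    }()

                                                    chunk := make([]byte, 4096)
                                                    for i := 0; i < 30; i++ {
                                                        start := time.Now()
                                                        if _, err := w.Write(chunk); err != nil {
                                                            panic(err)
                                                        }
                                                        // Once the buffer fills, Write blocks until the reader
                                                        // drains it: the writer is throttled to the reader's pace.
                                                        if d := time.Since(start); d > 10*time.Millisecond {
                                                            fmt.Printf("write %d blocked for %v\n", i, d)
                                                        }
                                                    }
                                                }
                                                ```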

                                                1. 2

                                                  Thanks for the clarification and yes that’s exactly what Kafka is taking care of in my ref impl., maybe I should make this more explicit. Thanks also for pointing out the literature; it happens to be what I’ve been teaching in academic and industrial setups and training courses so I think I’ve got that part covered ;)

                                                  No, the goal is not to build a component, the goal is to write a spec, to document a pattern I’ve seen being used in practice, potentially resulting in an RFC.

                                                  1. 4

                                                    By component I do not mean a software artifact.

                                                    Kafka semantics don’t really match Unix pipe semantics, so I’m not sure binding yourself to it makes a lot of sense. Data in Unix pipes is generally transient. Have you looked at what it’s like to implement message queues with pipes on 9p?

                                                    It’s fine if you just want a simpler Kafka client, but I’d be wary of calling it “distributed pipes”, as that doesn’t really convey the reality to people familiar with conventional pipes.

                                            1. 4

                                              While it has taken far too long, I am grateful that IPv6 is finally being supported on EC2. Now just waiting for Google Cloud…

                                              Interesting to note that Amazon are limiting IPv6 addresses to 8 per instance. To me IPv6 is a better approach for container networking, as opposed to the raft of SDN solutions and NATing that has evolved around containers, but this would limit me to 8 containers per host.

                                              1. 1

                                                  Wow, and there I thought Google (having rather decent networking infra) would support IPv6, but it seems you’re right, looking at GCE ISSUE-8. On the other hand, Azure seems to support IPv6 alright.

                                              1. 12

                                                That was not a particularly good article–my impression is basically “shill for datacenter container company…shills containers for the datacenter”.

                                                1. 3

                                                  “Cloud Advocate” did it for me

                                                  1. 2

                                                    Point taken. I’ve removed the ‘Cloud’ bit from all my social media profiles. Dunno why I used it in the first place.

                                                  2. 2

                                                    Author of said article. I’m sorry if “shill for xxx” is the only thing that came out of it. Certainly not my intention; it was just a brain dump, something that came to mind during a Saturday afternoon walk, and I thought it might be worth sharing this observation. Clearly, I fudged up ;)

                                                    1. 3

                                                      My problems with this piece:

                                                      1) You use different definitions of POSIX throughout: the bits about DCOS refer to the idea of a general standard, while the bits about distributed filesystems talk about the difficulties of implementing actual POSIX, the real standard.

                                                      2) Lots of these things aren’t linked. The post about “Don’t build private clouds” and the posts about running kubernetes have basically nothing in common with one another. They’re just articles vaguely about cloudy things.

                                                      This bills itself as “This is a brain dump of thoughts about emergence of standards for running distributed systems”, which would be a good & interesting article. In practice it seems to be more “This is some things I read recently about containery/cloudy stuff”. Which could potentially be interesting (but in this case, looks a bit shoehorned around a very poorly fitting theme).

                                                      1. 1

                                                        All fair points. They certainly are/were linked in my brain. And I take it I did a poor job communicating them ;)

                                                        What alternative moniker (re POSIX) would you suggest?

                                                        1. 3

                                                          I’m not sure, because I’m not 100% sure what you’re actually trying to talk about. If it’s “emerging standards for containerised scheduling”, then maybe just call it that?

                                                          1. 2

                                                            So, the problem with using POSIX is that it immediately triggers in the reader (at least for me!) the standardization of a runtime environment, by defining system headers, system facilities (signals, shared memory, etc.), and system utilities (guaranteed shells and programs).

                                                            It’s basically saying “Hey, look, programmer–this is what you’ll have available to talk to. Hey sysadmin, this is what you can script with”, and then stopping.

                                                            When I read something called “dPOSIX”, I expect to see some kind of document describing:

                                                            • Standard network topologies or messaging systems programs can rely on
                                                            • Standard mechanisms for handling tim
                                                            • Standard interfaces for accessing stuff like block storage or whatever
                                                            • Standard RPC mechanisms (yuck) and protocols for communicating across POSIX systems

                                                            …and all of that quite aside from containerization, and quite aside from any particular language prescription.

                                                            Your link to the modern stack anatomy shows something that is indicative of the whole problem: it’s basically a catalog of different vendors to pick from, without really explaining the problems and what motivates them. Most of the modern writing on this topic suffers from that: it supposes that microservices and containers and everything are what are needed, without really explaining under what circumstances they’re desirable.

                                                            Even when they are desirable, it’s not obvious that simply switching to a better ecosystem (cough erlang cough) wouldn’t solve the majority of pain points.

                                                            EDIT: Oh, and thank you for doing the writeup and then asking for feedback here. Good luck with future articles :)

                                                            1. 1

                                                              … the problem with using POSIX is that it immediately triggers in the reader … the standardization of a runtime environment, by defining system headers, system facilities … and system utilities

                                                              And this is something I think is positive and useful for interop. Not sure why I get the impression that it sounds like something negative when I read it from you ;)

                                                              When I read something called “dPOSIX”, I expect to see some kind of document describing:

                                                              Exactly. You are already a step further, hammering out the details. I was just leaning back and squinting a bit, asking myself whether I see a pattern here. And I’m not even sure of that ATM.

                                                              Standard network topologies or messaging systems programs can rely on

                                                              OK, so a few candidates in this space: CNI, Calico, Weave as well as Kafka and RSMQ and many more via http://queues.io

                                                              Standard mechanisms for handling tim

                                                              Assuming you mean ‘time’ here: NTP and if you’re fancy TrueTime

                                                              Standard interfaces for accessing stuff like block storage or whatever

                                                              REX-ray, Flocker for on-prem and no clear winner for public cloud (?)

                                                              Standard RPC mechanisms (yuck) and protocols for communicating across POSIX systems

                                                              Well here I do have a strong opinion: gRPC (/me ducks)

                                                              EDIT: Oh, and thank you for doing the writeup and then asking for feedback here. Good luck with future articles :)

                                                              Hmmm, not sure if that was meant sarcastically or not, but if not I take it and say thank you.

                                                              1. 1

                                                                And this is something I think is positive and useful for interop. Not sure why I get the impression that it sounds like something negative when I read it from you ;)

                                                                I don’t mean it in a negative way–it’s just that for me I kinda expect a discussion of problems and concrete patterns of solutions, rather than a display of different vendors solving things in different ways. If I had to put my finger on it, I’d say that it’s almost a signifier of a more academic/engineering discussion than we get just by comparing existing technologies.

                                                                Assuming you mean ‘time’ here: NTP and if you’re fancy TrueTime

                                                                Or PTP, in some cases.

                                                                Hmmm, not sure if that was meant sarcastically or not, but if not I take it and say thank you.

                                                                No sarcasm intended! I genuinely appreciate you discussing the feedback on your article.

                                                                1. 1

                                                                  Thank you, @angersock, for clarifying this. I’m always interested in learning and hopefully making new mistakes every day rather than repeating old ones ;)

                                                                  Let’s see what comes out of it, but I’m already grateful that, thanks to the post, I stumbled upon this wonderful site here. Win!

                                                    1. 7

                                                      Note to potential readers: this has nothing to do with “distributed systems” or “POSIX”.

                                                      1. 1

                                                          I beg to differ: it has everything to do with distributed systems. I can see why you say that it has little to do with POSIX, though ;)