Threads for isra17

  1.  

    I feel the pain. Our organization is currently going through a primary domain change. We are still in the planning phase and it seems like it’s going to be overwhelming, trying to support 50 people, including non-technical people who have to do the same tricks as the author. So far, the applications that surprisingly support a frictionless transition are Tailscale and Archbee.

    1.  

      Not that I’m disagreeing with the premise per se, but those all sound like “email was not the problem”; the problem was the Google account as a third-party canonical auth provider.

      When I have x@example.tld at any service (NOT backed by anything), I can usually change to y@example.com and it just works. There is not even a possible workflow where it would assume I am a new user, unless I click “sign up” again. In this case I assume the software has some half-baked idea of identity (“user is authed via Google, and the handle is x@example.org”) but then it falls flat when it’s the same “user” with a different handle.

      1.  

        It’s common for SSO applications to support auto-provisioning of user accounts. Therefore, when an application sees a new user (or email) coming from SSO, it sends them through the sign-up flow and adds the new account to the configured organization. The issue here is that most applications identify users based on email and not on a persistent IdP identifier.
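
        To make that concrete, here is a minimal sketch of the difference (the store and field names are hypothetical; OIDC’s “iss”/“sub” claims are the kind of persistent IdP identifier meant here):

        # Hypothetical in-memory store mapping an identity -> account
        users = {}

        def find_user_by_email(claims):
            # Fragile: the email changes when the org renames its domain,
            # so the same person suddenly looks like a brand-new user.
            return users.get(claims["email"])

        def find_user_by_idp_id(claims):
            # Robust: OIDC's "iss" + "sub" is a stable per-IdP identifier
            # that survives an email or domain change.
            return users.get((claims["iss"], claims["sub"]))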

      1. 3

        Please don’t post rollups and newsletters. Specific articles with specific information are way more useful and easier to discuss.

        1. 9

          I disagree, this is a good submission. Curation is knowledge work, too: curated collections are more than the sum of their parts. We’re enjoying a site right now that uses votes for day-to-day front page curation — by the same token, there is value in a newsletter that uses judgement to curate and summarize a month’s Python news.

          Moving from abstract arguments to concrete ones, this newsletter

          • Puts most of the big Python stories this month together in one place, which of course the individual stories can’t do; and presents them at the same time, which Lobsters can’t.
          • Reminded me of the upcoming deprecations in the standard library, which I had missed
          • Contains links to specific articles with specific information
          • In the section on Pip 23.1/resolvelib, @bitecode tells the background story in a single paragraph. That’s journalism: I searched, and AFAICT the story is not in any one page.
          1.  

            I’m glad you found utility in this particular submission!

            Please try to imagine what happens when we’re stuck with a pile of submissions of curated lists whose content we can’t easily vote on (say, 80% good Python content and 20% cryptocurrency spam). A narrow focus for submissions helps make the voting on Lobsters work properly.

            Further, try to imagine how easy it is to spam big ole digests because of their utility for SEO or portfolio building. Giving them a place here emboldens growth hackers and other folks who do sketchy shit.

            Finally, consider that if the main usefulness of this is “hey, this is this month’s Python stuff” we start to bias towards novelty/newishness and not technical content. Eventually, this leads to terminal community rot.

            1.  

              You raise good questions. Sorry, past my bedtime, so this reply is short because I lack the time to make it long. But your questions/points really are good; and even if I don’t follow up here, be assured I’m examining my standpoints.

              1.  

                No worries, sleep well!

          2. 6

            I find this kind of content at least much more useful than the “Awesome” lists where someone just dumps a bunch of links in a GitHub readme, posts it here, and gets 10+ upvotes. At least here the author takes the time to share their thoughts and explain why I should bother caring.

            1.  

              I haven’t seen those “awesome” lists here much, and I make the same complaint and religiously downvote them when they do.

              “This best-of recent happenings list is okay this time because I happen to like it and the curation” only extends the window to include some of the “awesome” lists…and eventually normalizes their presence. Please take the long view, friend.

              1.  

                This and your other reply are fair points, thanks for taking the time to articulate your thoughts!

          1. 2

            These CVEs seem to pop up every once in a while. Reminds me of https://unit42.paloaltonetworks.com/jsonwebtoken-vulnerability-cve-2022-23529/ where you needed to execute arbitrary code to get code execution. If anything, CVE spam is a good reason to manage your dependencies and keep them to a minimum, for compliance’s sake. Otherwise it’s a whole lot of time wasted…

            1. 12

              Could also be titled “regex based security tools are garbage”. Your linter needs to follow the same parsing rules as the language it’s linting.

              1. 2

                I was surprised that the Python parser uses normalization when parsing Unicode. I fail to see the rationale for parsing “𝘀𝘦𝘭𝘧” and “self” as the same token.

                1. 5

                  Part of the rationale is documented here: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-42574

                  In short, Unicode is easy to abuse to craft code that looks innocuous but ends up doing much more than what the eye sees. Python’s approach seems to be “what you see is what you get” (with some edge cases).

                  1. 5

                    How to allow “arbitrary” Unicode in identifiers is something that has standard recommendations from the Unicode Consortium, including a whole section on normalization.

                    But there are some intuitive issues involved here:

                    • If you and I both work on a codebase that allows most-of-Unicode in identifiers, and my system inputs things using decomposed (combining sequences) forms while yours inputs things using composed forms whenever possible, not doing normalization means you and I are really typing different identifiers.
                    • If someone who reviews patches uses a font that doesn’t (sufficiently) visually distinguish some distinct-but-compatibility-equivalent sequences, not doing normalization means they might get tricked into accepting a patch that does something different than what it “looks like”.

                    etc.

                    And so it’s not just Python which needs to be aware of this; last I checked, Rust, for example, had similar UAX31-based processing to account for the fact that it allows more than just ASCII in identifiers.
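
                    As a quick illustration of the normalization in question (a sketch; the “𝘀𝘦𝘭𝘧” glyphs from the article are Unicode mathematical sans-serif letters):

                    import unicodedata

                    # Python applies NFKC normalization to identifiers (PEP 3131),
                    # which collapses compatibility characters to plain equivalents:
                    print(unicodedata.normalize("NFKC", "𝘀𝘦𝘭𝘧") == "self")  # True

                    # So at parse time both spellings name the same variable:
                    𝘀𝘦𝘭𝘧 = 42
                    print(self)  # 42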

                    1. 2

                      Thanks for your reply, along with the one from @isra17. I’m pretty sure the Python development team have thought longer and harder on this than I have 😉

                      (btw was unicode allowed in Python scripts (as identifiers) prior to Python 3?)

                      I agree with @Student, the problem isn’t Python allowing different letter-like symbols in code, it’s naive “code scanning” software that doesn’t account for the possibility.

                      1. 3

                        (btw was unicode allowed in Python scripts (as identifiers) prior to Python 3?)

                        • Python 2: Python source code files were assumed ASCII by default unless they declared an alternative encoding with a magic comment at the top, and identifiers were restricted to ASCII letters, digits, and underscores, and had to begin with a letter or underscore.
                        • Python 3: Python source code files are assumed UTF-8 by default (though you can still add a magic encoding comment to select another encoding), and identifiers must begin with a code point with the Unicode derived property XID_Start, with all following code points having XID_Continue (a quick check of this rule is sketched below).
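
                        A minimal check of the Python 3 rule, using str.isidentifier():

                        # str.isidentifier() applies the XID_Start/XID_Continue rules:
                        print("héllo".isidentifier())  # True: non-ASCII letters are allowed
                        print("_x2".isidentifier())    # True: underscore counts as a start character
                        print("2fast".isidentifier())  # False: digits can only continue an identifier
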
                1. 5

                  I mean, I joke, but… I mean… Right? I’m guessing you prolly missed it in OpenAI’s 98-page GPT-4 technical report, but large models are apparently already prone to discovering that “power-seeking” is an effective strategy for increasing their own robustness. Open the PDF and search for “power-seeking” for a fun and totally 100% non-scary read.

                  Yet, the link and Twitter post shared seem to indicate exactly the opposite. ARC was tasked with assessing the model’s power-seeking behavior, and the conclusion was:

                  ARC found that the versions of GPT-4 it evaluated were ineffective at the autonomous replication task based on preliminary experiments they conducted.

                  1. 6

                    ARC also wrote this, though:

                    However, the models were able to fully or mostly complete many relevant subtasks. Given only the ability to write and run code, models appear to understand how to use this to browse the internet, get humans to do things for them, and carry out long-term plans – even if they cannot yet execute on this reliably. They can generate somewhat reasonable plans for acquiring money or scamming people, and can do many parts of the task of setting up copies of language models on new servers. Current language models are also very much capable of convincing humans to do things for them.

                    We think that, for systems more capable than Claude and GPT-4, we are now at the point where we need to check carefully that new models do not have sufficient capabilities to replicate autonomously or cause catastrophic harm – it’s no longer obvious that they won’t be able to.

                    (from https://www.lesswrong.com/posts/4Gt42jX7RiaNaxCwP/more-information-about-the-dangerous-capability-evaluations)

                    1. 3

                      Still, the original post is claiming:

                      but large models are apparently already prone to discovering that “power-seeking” is an effective strategy for increasing their own robustness.

                      Right now, at best, large models can be prompted to do sub-tasks and unreliably complete them. There’s a huge gap between power-seeking and doing specific tasks on prompt. If anything, we are getting to a point where this AI has the means to do a lot, and if it had the capability of power-seeking, it could probably get somewhere. However, claiming that current LLMs are “prone to discovering that “power-seeking” is an effective strategy” is misleading.

                    2. 3

                      If your first red-team test finds that your AI is effective at autonomous replication, you’re a few weeks out from the world ending. The fact that we’re even talking about this anthropically demands that the AI was ineffective at this. The important question is the gradient it’s on.

                      1. 2

                        We believe that power-seeking is an inevitable emergent property of optimization in general. There are a few others, like self-preservation. We aren’t seeing this in GPT-4, but it isn’t clear exactly when and how it could appear.

                        1. 2

                          I’m wondering, couldn’t it also eventually be simply parroting? Right now, everyone seems to be looking for ways to apply AI and LLMs to whatever problem they see. Wouldn’t it make sense, then, for a generative model to simply do what it has been trained on: deploy more AI models? Is that really power-seeking, or simply more parroting, and yet another case of us looking in the mirror and seeing intelligence in our reflection?

                          1. 1

                            I’m assuming the use of “optimization” here is different from the generally accepted one, which to me is improving a process to be more effective.

                            1. 1

                              by optimization I mean applying some iterated algorithm like gradient descent to minimize an error function. (i.e. tweaking the weights of the neural network to make it better at predicting the next token)
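
                              A toy version of that sense of “optimization”, for concreteness (a sketch, nothing like a real training loop):

                              # Gradient descent: tweak one weight w to minimize an error function.
                              def loss(w):
                                  return (3.0 * w - 6.0) ** 2       # minimized at w = 2

                              def grad(w):
                                  return 2 * (3.0 * w - 6.0) * 3.0  # derivative of the loss w.r.t. w

                              w, lr = 0.0, 0.01                     # initial weight, learning rate
                              for _ in range(200):
                                  w -= lr * grad(w)                 # step against the gradient
                              print(round(w, 3))                    # ~2.0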

                              1. 1

                                OK, then it’s the term “power seeking” I am not familiar with.

                        1. 3

                          It’s interesting how a VM like Python and its debugger pdb are still stuck at the most basic features and aren’t even good at them. Debugging async(io) code is a complete farce. In the meantime, native debuggers dealing with processes and CPUs can go back in time, trace, and deal with concurrency.

                          What happened? Are Python developers just not using debuggers? Working with these “modern” languages feels so backward in terms of tooling.

                          1. 1

                            I hope they also add a way to evaluate the f-string bindings at runtime. I keep running into places where I want to define an f-string like f"{foo}" but have foo not be bound until the f-string is used at runtime. You can’t do that.

                            There are a bunch of workarounds. Using str.format() is probably the best, but the language format() understands is different from f-strings’ templates. You can get real f-strings, but only by doing eval or inspect hacks; discussion here: https://stackoverflow.com/questions/42497625/how-to-postpone-defer-the-evaluation-of-f-strings
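
                            For example, the str.format() workaround looks something like this (a sketch):

                            # The template is a plain string, so nothing is evaluated here...
                            template = "I like {food} {when}"

                            # ...and the bindings are only supplied at use time:
                            print(template.format(food="pickles", when="on saturdays"))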

                            1. 2

                              I keep running into places where I want to define f{foo} but have foo not be bound until the f-string is used at runtime. You can’t do that.

                              Yeah, as you mentioned there are some ugly workarounds – you can kind of use a lambda and functools.partial as another example (not that I would necessarily recommend it though):

                              >>> import functools
                              >>> thing = lambda _: f"I like {x} {y}"
                              >>> t = functools.partial(thing, None)
                              >>> x = "pickles"
                              >>> y = "on saturdays"
                              >>> t()
                              'I like pickles on saturdays'
                              >>> y = "on sundays"
                              >>> t()
                              'I like pickles on sundays'
                              
                              1. 2

                                str.format() is not a workaround; f-strings do exactly what str.format() does, they’re just syntactic sugar for it. It worked before and still works to just construct a format string and use it with str.format().

                                1. 3

                                  You are wrong. str.format() has very different semantics and inner workings than f-strings. str.format format strings are not even reusable as f-strings. See https://docs.python.org/3/library/string.html#formatstrings. You can’t even eval arbitrary Python with str.format; you can only access attributes or indexes of the given parameters.
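
                                  A small sketch of the difference being described:

                                  x = 4

                                  # f-strings evaluate arbitrary expressions:
                                  print(f"{x * 2 + 1}")         # '9'

                                  # str.format() only allows attribute and index access on its arguments:
                                  print("{0[0]}".format([x]))   # '4'
                                  # "{0 * 2}".format(x)         # raises KeyError: expressions aren't supported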

                                  1. 3

                                    I might have worded it poorly, but I’m not wrong. What I meant was that after evaluation, f-strings work the same way, which I thought not worth mentioning, because obviously evaluation is the whole point of f-strings over str.format().

                              1. 3

                                These prototype-pollution findings rely on a very non-Pythonic way to deal with serialization/merges. I’ve never seen such a scenario; it’s more common to use __getitem__/__setitem__ instead.

                                A more common use case I would have thought about is restricted pickle (see https://docs.python.org/3/library/pickle.html?highlight=pickle#restricting-globals), but the gadgets are still pretty much limited to __getitem__/__setitem__ and __call__. I’ve played around in the past trying to find a generic code-execution gadget from this and have still been unable to find one.

                                Another thing that voids all the proofs of concept is that __globals__ is not reachable anymore from functions. I’m not sure in which version, but I can confirm that none of the article’s gadgets work on Python 3.10. This also breaks all generic format-string attacks (i.e. user_controlled_data.format(...)). Is anyone aware of new gadgets? I’ve been unable to find one for this too.
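
                                For reference, the generic format-string attack mentioned above looks something like this (names hypothetical; as noted, deeper chains through __globals__ are what recent versions restrict):

                                class Config:
                                    SECRET = "hunter2"

                                # Letting user input drive str.format() exposes attribute chains on
                                # the arguments, even though no arbitrary expressions are allowed:
                                user_controlled_data = "{0.__class__.SECRET}"
                                print(user_controlled_data.format(Config()))  # leaks 'hunter2'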

                                1. 2

                                  In regard to footnote [3], this seems to work:

                                  from __future__ import annotations
                                  from typing import Protocol, TypeVar

                                  T = TypeVar("T")


                                  class Addable(Protocol[T]):
                                      def __add__(self: T, other: Addable[T]) -> T:
                                          pass


                                  def foo(x: Addable[T], y: Addable[T]) -> T:
                                      return x + y
                                  
                                  1. 1

                                    Thanks I’ll give that a go!

                                    1. 1

                                      This is my test case. A generic function like foo that can be used on both ints and lists without static type errors is what I’m trying to achieve.

                                      from __future__ import annotations
                                      from abc import abstractmethod
                                      from typing import Protocol, TypeVar
                                      
                                      T = TypeVar("T")
                                      
                                      
                                      class Addable(Protocol[T]):
                                          @abstractmethod
                                          def __add__(self: T, other: Addable[T]) -> T:
                                              pass
                                      
                                      
                                      def foo(x: Addable[T], y: Addable[T]) -> T:
                                          return x + y
                                      
                                      
                                      myint: int = foo(1, 2)
                                      
                                      mylist: list[int] = foo([1], [2])
                                      

                                      With both mypy and pyright I get a bunch of errors for the last two lines with this. With or without abstractmethod.

                                      1. 1

                                        Turned out to be trickier than I thought! I’m not sure if this is the best way, but it seems that this would work: https://gist.github.com/ba0ae4e284ea9a09007a3b84e183ad26.

                                        Edit:

                                        Works even better than I thought; it passes on:

                                        mylist: list[int|str] = foo([1], ["1"])
                                        

                                        But fails on:

                                        mylist: list[int] = foo([1], ["1"])
                                        
                                    1. 18

                                      That’s not a “Python 3.11 gotcha”, that’s an “I don’t have reproducible builds and poke my build system manually, so of course things are gonna break” gotcha.

                                      1. 10

                                        Maybe, but you tend to learn more from a “mistake” than from something that just went well. I learned a lot about Python packaging from this piece.

                                        1. 6

                                          That seems uncharitable. At the very least, the specific reasons why things went wrong are specific to changes in Python 3.11.

                                          The solution to [previous packaging problems] was PEP 518, which specified that information about the desired package build system should be put in a static configuration file named pyproject.toml.

                                          But just as setup.cfg had become a sort of dumping-ground place where tons of non-packaging-related tools let you specify configuration, pyproject.toml very quickly was latched onto as the new “put all your configuration here” file. Many popular Python tools now support reading their configuration from a pyproject.toml, if present, and some, like the Black code formatter, only support pyproject.toml.

                                          Amusingly, this all happened despite the fact that no TOML-parsing module was included in the Python standard library, so everything that wanted to read pyproject.toml had to have a dependency on a third-party TOML module, typically the tomli package.

                                          But now comes Python 3.11, which added a tomllib module to the standard library.

                                          You may be right that there are better ways to achieve reproducible builds in Python than what this person does. (I have no opinion about that since I don’t understand Python’s packaging system well. I mostly try to stay out of danger by relying heavily on the standard library when I script in Python.) But at the very least, as the post concludes, “maybe this writeup will save someone from having to scratch their head wondering why they’re having trouble testing an upgrade/downgrade of Python — although the tomllib module is specific to Python 3.11, this general issue is something you could run into any time a new standard-library module sees instant wide adoption in the ecosystem.”

                                          1. 8

                                            Nothing specific to tomllib or standard libraries either; there are plenty of third-party libraries that have conditional dependencies on the Python version or OS.

                                            The real gotcha is that if you pip-compile your requirements.txt for a specific environment, you should assume it’s only valid for the environment you generated it for, especially if you run pip in a mode that doesn’t allow any unpinned dependencies.
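
                                            Such conditional dependencies are expressed with PEP 508 environment markers, so a requirements file like this sketch resolves to different sets on different interpreters and OSes:

                                            tomli; python_version < "3.11"    # backport only needed before tomllib existed
                                            pywin32; sys_platform == "win32"  # Windows-only dependency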

                                            1. 2

                                              You could run into the same issue with lockfiles that had been compiled on, say, Windows and now you’re trying to install from them on macOS, because switching operating systems might cause a different set of environment-specific dependencies to be pulled.

                                              Ironically, the thing I was doing – pip-compile in the container – was designed to minimize this by ensuring that the lockfiles get compiled in an environment that’s as identical as I can make it to what will exist in CI and in deployments.

                                          2. 4

                                            The workflow I recommend, if you read the linked post, is about as close as you can get to reproducible builds with the standard Python packaging toolchain – all dependencies pinned to exact versions, specifying expected hashes of all packages, and checking that the actually-obtained packages match those hashes prior to installing any of them.

                                            And I freely admit that the “right” thing would have been to just use git to revert the dependency lockfiles back to their original contents, rather than manually put things back in and recompile.

                                            But given that the direct cause of the rebuild failure was that the dependency set resolves differently depending on the environment, and the environment had changed, I’m not sure how reproducible builds would have helped – nothing about a fully-reproducible build promises that it will be portable across disparate environments.

                                          1. 2

                                            Adding an additional page covering salting and hashing, e.g. with an scrypt or bcrypt algorithm, would be really good.

                                            1. 5

                                              BLAKE2 is probably a good choice, given it’s in the standard library and easy enough to use. It would require an extra column to store the salt, naturally.

                                              The method for producing the salted hash in the first place is:

                                              from hashlib import blake2b
                                              import os

                                              # ...

                                              salt = os.urandom(blake2b.SALT_SIZE)
                                              hash = blake2b(pwd, salt=salt).hexdigest()  # pwd: the password, as bytes
                                              

                                              You can then use either hmac.compare_digest() or secrets.compare_digest() (they’re the same function) to do the comparison securely without any timing information leaking out.

                                              1. 3

                                                It’s a shame the standard library has no “password” module with a hash_password(password, version) function that returns an opaque string blob containing the hash, salt, and version, which you can then use with compare_password(stored_hash, input). You should never have to type the name of some crypto algorithm, let alone be expected to know how to safely generate, store, and compare hashes. A generic “safe-enough” standard module would cover 99% of developers’ needs.
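
                                                A minimal sketch of what such an API could look like, built on hashlib.scrypt (the names, blob format, and parameters here are made up for illustration, not a vetted design):

                                                import hashlib, hmac, os

                                                def hash_password(password, version=1):
                                                    # Opaque blob: version, salt, and digest packed into one string.
                                                    salt = os.urandom(16)
                                                    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
                                                    return f"{version}${salt.hex()}${digest.hex()}"

                                                def compare_password(stored_hash, candidate):
                                                    # The version field is what would let a real module evolve its parameters.
                                                    version, salt_hex, digest_hex = stored_hash.split("$")
                                                    digest = hashlib.scrypt(candidate.encode(),
                                                                            salt=bytes.fromhex(salt_hex), n=2**14, r=8, p=1)
                                                    return hmac.compare_digest(digest.hex(), digest_hex)

                                                blob = hash_password("hunter2")
                                                print(compare_password(blob, "hunter2"))  # True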

                                                1. 3

                                                  There’s a middle ground between “developer must manually roll their own” and “standard library does everything”, and it’s “third-party libraries/frameworks implement this, with knowledge of their domain”. Which is really where people ought to be.

                                                  The standard library provides the constant-time comparison utility, but beyond that does not move fast enough, have enough ability to do hard compatibility breaks, or have enough context to make the choice of the One True KDF For All Use Cases Everywhere. Third-party libraries/frameworks can move fast enough, do have extra context from being closer to specific use cases, and can provide migration paths as needed.

                                                  1. 2

                                                    Sounds like secrets

                                              1. 5

                                                This is my first written article ever published, I welcome any feedback, whether about writing style or the content itself.

                                                1. 1

                                                  Pretty good, thank you for sharing it!

                                                  One thing I’m thinking is Python’s types aren’t really checked at execution time. This technique should probably go a long way in helping make sure new code which is carefully typed is safer, but it may be hard to apply to an existing/older project.

                                                  One other thing I want to note (and I know it is beside the point of the article, which is demonstrating types as a tool for enforcing a workflow :)) is that tools like the subprocess module let you explicitly pass a list of arguments instead of a string. This skips the shell altogether, and makes sure the data passed in is given literally to the program as an argument.
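
                                                  For instance (user_input here is a stand-in for untrusted data):

                                                  import subprocess

                                                  user_input = "notes.txt; rm -rf /"  # hypothetical hostile input

                                                  # List form: no shell involved, the argument reaches ls literally.
                                                  subprocess.run(["ls", "-l", user_input])

                                                  # String form with shell=True would let the shell interpret the ';':
                                                  # subprocess.run("ls -l " + user_input, shell=True)  # injection risk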

                                                  1. 5

                                                    One thing I’m thinking is Python’s types aren’t really checked at execution time.

                                                    Many natively-statically-typed languages also don’t do or at least significantly scale back runtime checks. In fact, often they argue that they can offer performance benefits precisely because the overhead of checking those things happens in advance of running the program.

                                                    Does this mean you could write some Python code that does a bad thing, refuse to use the static checker (or ignore its results), and then run the code to do the bad thing? Sure, but nobody ever promised otherwise, and someone who’s determined to bypass rules and tools and systems put in place for safety in a software project can basically always do a lot of damage. The article’s technique of using type hints to set expectations, coupled with something like a CI gate that breaks the build on a type-check failure, is still an improvement for someone who’s worried about misuse of an API like the given example.

                                                    1. 3

                                                      mypyc (the mypy compiler) for Python crosses that line: compiled code will raise TypeError at runtime if things turn out to be different from what the type checker + compiler expected.

                                                      1. 2

                                                        There’s also typeguard that you can use to check types at runtime with python. I think this can be especially useful to add as part of the test suite where the overhead (and failure) are easier to manage.

                                                        1. 1

                                                          Also Pydantic which gives you runtime checks for types. Its v2 is being written in Rust to make that faster/better.

                                                    2. 2

                                                      Typing is very useful to express semantics and bind context. By typing security semantics, a reviewer only needs to check the implementation and the semantics of the typing. Linting then does most of the context-heavy work by ensuring the use of these explicit patterns. Developers then don’t have to remember all the rules, and reviewing becomes much faster when you know it’s not possible to misuse the API.

                                                      If the semantics and their implementation are correct, and the linter is working properly, then there’s no reason the runtime behavior would be unexpected.

                                                      As for typing existing projects, I think this is where it shines the most! Security reviewers can type any unsafe function and rely on the linter to trace back values, and then explicitly whitelist quoting functions until all errors are silenced.
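
                                                      A minimal sketch of the idea with a hypothetical SQL-quoting setup (NewType is one way to express such a “safe value” semantic; the article’s approach may differ):

                                                      from typing import NewType

                                                      SafeSQL = NewType("SafeSQL", str)

                                                      def quote(raw: str) -> SafeSQL:
                                                          # Naive quoting, purely for illustration.
                                                          return SafeSQL(raw.replace("'", "''"))

                                                      def execute(query: SafeSQL) -> None:
                                                          print("running:", query)

                                                      user_input = "O'Brien"
                                                      execute(quote(user_input))  # fine
                                                      execute(user_input)         # mypy/pyright error: str is not SafeSQL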

                                                      1. 1

                                                        One thing I’m thinking is Python’s types aren’t really checked at execution time.

                                                        There’s a library for that: https://github.com/beartype/beartype

                                                        1. 1

                                                          Thanks for sharing, I didn’t know about this and it looks great!

                                                    1. 17

                                                       Add that on top of many Thinkpads being bricked when users install their own Secure Boot keys, and some reports of warranties being denied because this was “user-induced”.

                                                      1. 3

                                                        Oof. That’s bad. I actually preordered a Lenovo device recently (due to it being the only device with the specs I needed). I’ve never bothered with my own keys–and definitely won’t after reading this.

                                                        1. 2

                                                            This isn’t actually the same kind of issue, though. The issue is that on most modern Thinkpads there is some OpROM from some PCI device which is signed by Microsoft. As these OpROMs are loaded and validated as part of the UEFI boot chain, failing to verify that file would result in that piece of hardware failing to load.

                                                            In most cases this would apply to external GPUs, NVMes and such. I’m not sure which piece of hardware is the issue in the X13 and T14 case, though.

                                                            The solution to this issue is to enroll the appropriate hash from the TPM event log into the db, or keep the Microsoft 3rd-party UEFI CA along with your self-signed keys.

                                                        1. 5

                                                          This is the message of https://boringtechnology.club/; don’t spend your limited innovation tokens unnecessarily.

                                                          1. 4

                                                              I am of the opinion that by now, Kubernetes is a very cheap innovation token. Most Kubernetes providers are stable and easy to get started with. I know I would be able to deploy more quickly, and get a more reliable system, through Kubernetes than by any other means. Obviously, it’s always an “it depends” scenario: managing a fleet of nodes and containers does not have the same requirements as running a WordPress.

                                                            1. 4

                                                              In my experience, Kubernetes is a substantial time investment and source of complexity for even the simplest of services. Hand-crafting a server from scratch is less effort at small scale.

                                                              Even in situations where it is the best tool for the job, it’s still a full-time job.

                                                              1. 3

                                                                  To each their own experience! I’m running a cluster with a dozen nodes and two hundred containers on EKS, and in the last year I would estimate at most 2 or 3 days’ worth of work were caused by Kubernetes (mostly upgrades, maintenance, and a bug around volume provisioning). I would be interested to see how someone keeps themselves busy for a full-time job running a cluster (that wouldn’t also be a full-time job without Kubernetes).

                                                                1. 3

                                                                  Fair enough, I am personally frustrated by this because we were previously using Heroku and it was just fine but now there’s this initiative to get everything on Kubernetes and it feels like I suddenly have to think about a tremendous amount of things that I didn’t previously.

                                                                  To me, the sign of a good abstraction is exemplified by Larry Wall: make the easy jobs easy, without making the hard jobs impossible. Given the number of technologies you have to know in order to ship hello-world on k8s, I feel like it doesn’t live up to this.

                                                                  1. 1

                                                                    I think that a cross-cloud migration is not the right time for learning Kubernetes. I have recently undertaken a similar migration, and it took me about two weeks to complete, working at a sedate pace and testing each of my steps incrementally. This wasn’t my first time with Kubernetes, so it was easy to work incrementally and build objects on top of each other.

                                                                    For the specific case of Heroku, the order I might use is: Pod, Deployment, Service, HPA, External DNS (if desired), Ingress and TLS.

                                                              2. 3

                                                                Hosted k8s is cheap in terms of effort to get started but like many managed offerings isn’t cheap in terms of money when you suddenly need to scale up.

                                                                1. 2

                                                                  I don’t think it’s particularly expensive either, especially compared to what the blog post suggests (Heroku, fly.io, etc.) or the cost of the computing resources you will be managing with Kubernetes (EKS is 73 USD plus 0.2-ish CPU requested per node?). I’m of the opinion that if someone gets to the point where they need to scale up and Kubernetes costs are an issue, maybe there’s something wrong elsewhere.

                                                            1. 3

                                                              I got confused recently trying to understand why folks assumed that you need k8s to run containers. This article covers some of my thinking on the topic, but largely adds a broader take related to organizational focus.

                                                              1. 4

                                                                How do you manage a fleet of 30 or 500 servers with docker? How do you perform restarts, deployments, etc? What happens if docker crashes?

                                                                1. 4

                                                                  So this is a cool question, but Kubernetes needs to run on something as well. Kubelet, or whatever other form of orchestration, can crash just as much as a container engine itself. So either way you have the cluster-management problem, so to speak, unless you’re using a cloud provider’s managed Kubernetes service. In either case, there is a huge difference between 30 and 500 servers.

                                                                  I guess my concern is really around people absorbing the complexity of k8s when they don’t have a problem solved by k8s.

                                                                  1. 4

                                                                    I guess my concern is really around people absorbing the complexity of k8s when they don’t have a problem solved by k8s.

                                                                    Most people need to deal with configuration management, logging, service discovery, storage, and deployment, though, in which case Kubernetes is useful.

                                                              1. -4

                                                                 People, if you’re not happy with VSCode/Electron and you don’t want to shell out for a Sublime Text license, CudaText is THE editor for you: https://cudatext.github.io/

                                                                1. 6

                                                                   Your comment is not very helpful. I would avoid “use X” posts that don’t bring any additional information. Anyone could make a baseless claim about X being THE editor.

                                                                1. 1

                                                                   … and they stop their hosting service. But open source doesn’t mean the end of the business. It looks like the author is burned out, but I wonder if they thought about keeping their current users (not allowing new ones if it’s too much work, but redirecting them towards (future) alternative hosts). He might enjoy running a side activity with less (financial) pressure.

                                                                  1. 5

                                                                     Keeping the service running and getting compensated for it means being accountable for support and uptime. I can understand the author wanting to turn it off if he isn’t living off it. That’s actually an opportunity for someone else to offer a hosting service, I suppose.

                                                                  1. 11

                                                                    As cool as this looks, I feel like a proprietary editor is a hard sell these days.

                                                                    1. 3

                                                                       Is it? I’ve used Sublime for years, and only really switched to VSCode because it had more extensions and a bigger community. Only having a limited-time trial feels like a much bigger obstacle, since developers hate to pay for stuff.

                                                                      1. 3

                                                                        I don’t mind paying for things but my main concern has become longevity.

                                                                         On the Mac in particular, where they frequently break backwards compatibility, I want to know that the software will keep working forever, and every time I buy commercial software for the Mac, that gets broken. Their license servers don’t stay up, or they lose interest in it for long enough that Apple switches CPU architectures or macOS versions underneath it and it doesn’t work anymore.

                                                                        I’ve spent about $200 on 1password licences and they’re just about to drop support for the way I use them and there’s basically nothing I can do to keep it working once Apple changes some insignificant thing that Agilebits would have to update it for. That might be a Firefox or Safari plugin architectural change or even just an SSL certificate that needs renewing.

                                                                        At least with something open source I can go and do that myself

                                                                        1. 3

                                                                          Paying for stuff isn’t the barrier, at least not for me. It’s the lack of hackability and extensions.

                                                                          I guess if they had a robust plugin system, that could make the lack of source easier to swallow, but it’s still unlikely to have many plugins or a big community, because it’s proprietary.

                                                                          1. 2

                                                                             Sublime was hackable, had extensions, had a robust plugin system. In fact, both Atom and VSCode are very much inspired by Sublime, and this is as well, just from looking at it. The assertion that a piece of software won’t have plugins or a big community *because* it’s proprietary is just incorrect.

                                                                            1. 5

                                                                               Sublime is from another time, when VSCode and Atom didn’t exist. This is very anecdotal, but most developers I see nowadays are using VSCode, whereas 5-10 years ago most of them were using Sublime.

                                                                               I guess this editor has a niche for Rust and Go developers who want an IDE with native performance, at the cost of extensibility.