1. 1

    Thank you for the planet. There seem to be about 100 blogs/feeds coming into the planet, but the RSS feed is just 100 items, most of which seem to come from a couple of blogs that don’t have proper timestamps?

    1. 2

      Well spotted.. it wasn’t apparent yesterday but I just fixed an SSL problem and suddenly there are quite a few. I’ll remove any more I spot, but please, feel free to go crazy on pull requests :)

      edit: this is way more broken than I thought. Planet doesn’t seem to do anything about feeds that lack timestamps, which is surprising. Anyone got a recommendation for better software? The main value in this existing thing is the Travis setup and the list of feed URLs.

      edit: ok, I /think/ I’ve got it this time.. there were some bad settings in there, and squelching untimestamped feeds doesn’t happen after the first time they’re seen, so I had to wipe the cache and start again.

      1. 1

        I’m tempted to write something better, or at least help improve what you have currently got working :)

        1. 1

          I once authored a planet generator named Uranus, but I don’t really maintain it anymore. It does have the advantage of not having any dependencies other than Ruby, though (no gems, just plain stdlib). There’s another planet generator named Pluto that is still maintained.

      1. 6

        Thanks - I was going to suggest doing something similar but didn’t get around to even making a suggestion :( Perhaps it can be made “official” and we could have a planet.lobste.rs?

        1. 6

          I think there may be room for it! Although I note Planet itself is starting to show its age quite badly. Still hard to beat the simplicity, and with a little theming and e.g. setting a max length on the articles, things might start to look very nice.

          Can’t hate on Planet too much, I was setting this up for private use before I realized it might be worth sharing. The fact Planet is still a go-to RSS reader is quite impressive given its vintage!

          1. 2

            Good point about the age of Planet - I’ve not looked around seriously for a replacement myself but a few of the alternatives are also a bit long in the tooth. Moonmoon looks to be maintained, although it’s written in PHP rather than Python.

        1. 4

          I write a mixture of shoot-from-the-hip spam and technical deep dives, usually after too much coffee, and often with at least some tangential relation to Python. Very occasionally I’ve accidentally broken some big stories. For the past year I’ve been documenting progress on developing Mitogen and its associated Ansible extension.

          https://sweetness.hmmz.org/

          Favourite tech: Guerilla optimization for PyPy, Deploying modern apps to ancient infrastructure, Fun with BPF

          Favourite rants: I will burn your progress bar on sight, Data rant

          1. 2

            I learned of this project some time ago when you posted about it in one of the “what are you working on” threads. I’ve been waiting to use it ever since, because I only really use Ansible twice a year, and on those occasions I have to work remotely over a very slow and convoluted connection (bouncing through multiple hosts, travelling down a home DSL connection, and then over a PtP WiFi link in someone’s garage). When I use Ansible there are time constraints (the infra in the garage is only online for so long every weekend), so Ansible runs that take 40 minutes or longer are super annoying, especially during the developing and testing stage. This project looks promising to me; I plan on using it very soon for my work and hope to see improvements to my productivity as a result. Thanks!

            1. 1

              Its network profile has “evolved” (read: regressed!) a little since those early days, but it should still be a massive improvement over a slow connection. Running against a local VM with simulated high latency works fine, though I’ve never run a direct comparison of vanilla vs. Mitogen with this setup.

              That’s a really fun case you have there – would love a bug report even just to let me know how the experience went.

              edit: that’s probably a little unfair. Roundtrips have reduced significantly, but to support that the module RPC has increased in size quite a bit. For now it needs to include the full list of dependency names. As a worst-case example, the RPC for the “setup” module is 4KiB uncompressed / 1KiB compressed, due to its huge dependency list. As a more regular case, a typical RPC for the “shell” module is 1177 bytes uncompressed / 631 bytes compressed.
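
              For a feel of why those dependency lists compress so well, here’s a rough sketch. The payload shape below is invented for illustration (the real Mitogen wire format differs), but a long list of fully-qualified module names with shared prefixes is exactly the kind of data zlib handles well:

```python
import json
import zlib

# Invented stand-in for a module RPC: a call spec plus the full list of
# dependency names. The shared "ansible.module_utils." prefixes are what
# make the compressed form so much smaller than the raw one.
rpc = {
    "module": "setup",
    "args": {},
    "deps": [
        "ansible.module_utils." + name
        for name in ("basic", "facts", "parsing", "six", "urls", "json_utils")
    ],
}

raw = json.dumps(rpc).encode()
packed = zlib.compress(raw, 9)
print(len(packed) < len(raw))  # True: the repeated prefixes compress away
```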

              A lot of this is actually noise and could be reduced to once-per-run rather than once-per-invocation, but it requires a better “synchronizing broadcast” primitive in the core library, and all my attempts to make a generic one so far have turned out ugly.

            1. 3

              It’s probably way out of the intended scope, but could Mitogen be used for basic or throwaway parallel programming or analytics? I’m imagining a scenario where a data scientist has a dataset that’s too big for their local machine to process in a reasonable time. They’re working in a Jupyter notebook, using Python already. They spin up some Amazon boxes, each of which pulls the data down from S3. Then, using Mitogen, they’re able to push out a Python function to all these boxes, and gather the results back (or perhaps uploaded to S3 when the function finishes).
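
              The fan-out pattern being described can be sketched locally, with multiprocessing standing in for the remote contexts. In the imagined Mitogen version each worker would be an Amazon box reached over SSH rather than a local process, but the push-a-function / gather-results shape is the same:

```python
from multiprocessing import Pool

def summarise(shard):
    # Pretend each worker already pulled its shard of the dataset from S3.
    return sum(shard) / len(shard)

# Three shards standing in for the partitioned dataset.
SHARDS = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

if __name__ == "__main__":
    # Push the pure-Python function out to each worker, gather results back.
    with Pool(processes=3) as pool:
        print(pool.map(summarise, SHARDS))  # [2.0, 5.0, 8.0]
```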

              1. 3

                It’s not /that/ far removed. Some current choices would make processing a little more restrictive than usual, and the library core can’t manage much more than 80MB/sec throughput just now, limiting its usefulness for data-heavy IO, such as large result aggregation.

                I imagine a tool like you’re describing with a nice interface could easily be built on top, or maybe as a higher level module as part of the library. But I suspect right now the internal APIs are just a little too hairy and/or restrictive to plug into something like Jupyter – for example, it would have to implement its own serialization for Numpy arrays, and for very large arrays, there is no primitive in the library (yet, but soon!) to allow easy streaming of serialized chunks – either write your own streaming code or double your RAM usage, etc.

                Interesting idea, and definitely not lost on me! The “infrastructure” label was primarily there to allow me to get the library up to a useful point – i.e. permits me to say “no” to myself a lot when I spot some itch I’d like to scratch :)

                1. 3

                  This might work, though I think you’d be limited to pure python code. On the initial post describing it:

                  Mitogen’s goal is straightforward: make it child’s play to run Python code on remote machines, eventually regardless of connection method, without being forced to leave the rich and error-resistant joy that is a pure-Python environment.

                  1. 1

                    If it’s just simple functions you run, you could probably use pySpark in a straightforward way to go distributed (although Spark can handle much more complicated use-cases as well).

                    1. 2

                      That’s an interesting option, but presumably it requires you to have Spark set up first. I’m thinking of something a bit more ad-hoc and throwaway than that :)

                      1. 1

                        I was thinking that if you’re spinning up AWS instances automatically, you could probably also arrange for a Spark cluster to be set up with them, and with that you get the benefit that you neither have to worry much about memory management and function parallelization nor about recovery in case of instance failure. The performance aspect of pySpark (mainly Python object serialization/memory management) is also actively being worked on, transitively through pandas/pyArrow.

                        1. 2

                          Yeah that’s a fair point. In fact there’s probably an AMI pre-built for this already, and a decent number of data-science people would probably be working with Spark to begin with.

                  1. 17

                    You’d save yourself a lot of trouble upfront by not borrowing the FileZilla name: it’s trademarked. There’s already an argument over whether the “-ng” suffix constitutes a new mark, so why bother even having it? Just rename it completely.

                    Hilariously, their trademark policy seems to prohibit their use of their own name.

                    1. 3

                      Oh, great point. We will need to think of a new name.

                      How about godzilla-ftp?

                      1. 14

                        How about filemander? It’s still in the same vein as “zilla,” but far more modest. The fact that you’re refusing cruft provides a sense of modesty.

                        Also, “mander” and “minder” — minder maybe isn’t exactly right for an FTP client, but it’s not completely wrong…

                        1. 4

                          filemander

                          Great name! A quick ddg search does not show any existing projects using it.

                          1. 1

                            And it sounds a bit like “fire mander”, which ties in well with the mythological connections between salamanders and fire.

                            1. 1

                              Yeah, the intention was to have a cute salamander logo–way more modest a lizard than a “SOMETHINGzilla!”

                          2. 8
                            1. 5

                              Just remember to make sure it’s easy for random people to remember and spell. They’ll be Googling it at some point.

                          1. 2

                              On Linux, the USR1 signal is supported for making dd report its progress. Implementing something similar for cp and other commands, and then making ctrl-t send USR1, can’t be too hard. Surely it is not the Linux kernel itself that prevents this?

                            1. 8

                                SIGUSR1 has a nasty disadvantage relative to SIGINFO: by default it kills the process receiving it if no handler is installed. 🙁 The behavior you really want is what SIGINFO has, which is defaulting to a no-op if no handler is installed.

                              • I don’t want to risk killing a long-running complicated pipeline that I was monitoring by accidentally sending SIGUSR1 to some process that doesn’t have a handler for it
                              • there’s always a brief period between process start and the call to signal() or sigaction() during which SIGUSR1 will be lethal
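
                                Both points are easy to demonstrate. Here’s a small sketch (POSIX-only; the 0.5 s sleep is a crude way of stepping around exactly the startup race described in the second bullet):

```python
import signal
import subprocess
import sys
import time

# A child that never installs a SIGUSR1 handler: the default action is fatal.
NO_HANDLER = "import time; time.sleep(30)"

# A child that installs a handler first: SIGUSR1 becomes a harmless hook.
WITH_HANDLER = (
    "import signal, time\n"
    "signal.signal(signal.SIGUSR1, lambda sig, frame: print('still working'))\n"
    "time.sleep(30)\n"
)

def poke(code):
    child = subprocess.Popen([sys.executable, "-c", code])
    time.sleep(0.5)  # crude: papers over the start-up race described above
    child.send_signal(signal.SIGUSR1)
    try:
        child.wait(timeout=2)  # default disposition: the child dies
        return "killed"
    except subprocess.TimeoutExpired:
        child.kill()  # a handler was installed, so the child survived
        child.wait()
        return "survived"

print(poke(NO_HANDLER), poke(WITH_HANDLER))
```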
                              1. 1

                                That’s interesting. The hacky solution would be to have a whitelist of processes that could receive SIGUSR1 when ctrl-t was pressed, and just ignore the possibility of someone pressing ctrl-t at the very start of a process.

                                A whitelist shouldn’t be too hard to maintain. The only tool I know of that handles SIGUSR1 is dd.

                              2. 5

                                On BSD it’s part of the TTY layer, where ^T is the default value of the STATUS special character. The line printed is actually generated by the kernel itself, before sending SIGINFO to the foreground process group. SIGINFO defaults to ignored, but an explicit handler can be installed to print some extra info.

                                I’m not sure how equivalent functionality could be done in userspace.

                                1. 1

                                  It would be a bit hacky, but the terminal emulator could send USR1 to the last started child process of the terminal, when ctrl-t is pressed. The BSD way sounds like the proper way to do it, though.

                                  1. 4

                                    I have a small script and a tmux binding for linux to do this:

                                    #!/bin/sh
                                    # tmux-signal pid [signal] - send signal to running processes in pids session
                                    # bind ^T run-shell -b "tmux-signal #{pane_pid} USR1"
                                    
                                    [ "$#" -lt 1 ] && exit 1
                                    sid=$(cut -d' ' -f6 "/proc/$1/stat")
                                    sig=$2
                                    : ${sig:=USR1}
                                    ps -ho state,pid --sid "$sid" | \
                                    while read state pid; do
                                            case "$state" in
                                            R) kill -s"$sig" "$pid" ;;
                                            esac
                                    done
                                    
                                    1. 4

                                      Perfect, now we only need to make more programs support USR1 and lobby for this to become the default for all Linux terminal emulators and multiplexers. :)

                              1. 10

                                This post is pure fluff. It hints at Linux preferring throughput over latency in some cases, but fails to give a single concrete example of that being true. It’s reminiscent of the popular “BSD vs. Linux” arguments I heard (and sadly accepted as gospel) in the late 90s.

                                1. 1

                                  The user still has to allow the website USB access to the device, and if they are one of the people who own these USB keys, they are probably smart enough not to allow it.

                                  1. 12

                                    This comment contains content of such life-changing awesomeness that we must request your permission before revealing it to you. If you dare, click “Accept” in the dialog now displayed at the top of the window.

                                    I’ve been interested in all things infosec for around 20 years at this point, and I /still/ regularly get those permission dialogs wrong. The problem isn’t me; the problem is that the dialogs exist at all. Nobody can be expected to get them right, and even when present, the behaviour they gate should never be game-ending, as it is in WebUSB.

                                    1. 11

                                      Unless:

                                      • There’s some other vulnerability that combines with this
                                      • Some killer consumer device uses WebUSB, dramatically increasing the number of users
                                      • You’re drunk or sleep deprived or distracted
                                      • The malware exploits a dark pattern
                                      1. 9

                                        they are probably smart enough not to allow it.

                                        Such users empirically do not exist.

                                        1. 4

                                          [citation needed]

                                          At quick glance, I found this study: https://pdfs.semanticscholar.org/4c40/c0ea6b02630839658ba7939dd609c621bf17.pdf

                                          Popular opinion holds that browser security warnings are ineffective. However, our study demonstrates that browser security warnings can be highly effective at preventing users from visiting websites: as few as a tenth of users click through Firefox’s malware and phishing warnings. We consider these warnings very successful.

                                          People do react to unknown notifications. (The study goes on to discuss how the effectiveness of such notifications relates to their design.)

                                          Sure, at enterprise scale that still means something is getting through, so you might want to deploy your browser with appropriate policies which deny such requests every time.

                                          1. 1

                                            Oh, neat link!

                                            That said, with a million users only a tenth is still a hundred thousand.

                                            1. 1

                                              Sure, but we have that problem on so many levels.

                                              For individuals, it’s protection, for cohorts, less so.

                                      1. 1

                                        I don’t understand how it’s possible to pick all three here: “full-native speed”, a single address space OS (everything in ring 0), and security. I believe you can only pick two.

                                        1. 1

                                          Well, that’s what nebulet is trying to challenge.

                                            1. 1

                                              I haven’t yet read the whole paper but in the conclusion they say that performance was a non-goal. They “also improved message-passing performance by enabling zero-copy communication through pointer passing”. Although I don’t see why zero-copy IPC can’t be implemented in a more traditional OS design.

                                              The only (performance-related) advantage such design has in my opinion is cheaper context-switching, but I’m not convinced it’s worth it. Time (and benchmarks) will show, I guess.

                                              1. 1

                                                When communication across processes becomes cheaper than posting a message to a queue belonging to another thread in the same process in a more traditional design, I’d say that’s quite a monstrous “only” benefit.

                                                I should have drawn your attention to section 2.1 in the original comment; that’s where your original query is addressed. Basically the protection comes from static analysis, a bit like the original Native Client or Java’s bytecode verifier.

                                          1. 3

                                            Terminal within vim now?

                                            From the article:

                                            The main new feature of Vim 8.1 is support for running a terminal in a Vim window. This builds on top of the asynchronous features added in Vim 8.0.

                                            Pretty cool addition. :-)

                                            1. 17

                                              Neovim has had this for over a year now. Neovim has been pretty great for pushing vim forward.

                                              1. 5

                                                I wonder if the new Vim terminal used any code from the NeoVim terminal. I know NeoVim was created in part because Bram rejected their patches for adding async and other features.

                                                1. 7

                                                  I have to say, I really don’t care to see this in a text editor. If anything it’d be nice to see vim modernize by trimming features rather than trying to compete with some everything-to-everybody upstart. We already had emacs for that role! I just hope 8.2 doesn’t come with a client library and a hard dependency on msgpack.

                                                  Edit: seems this was interpreted as being somewhat aggressive. To counterbalance that, I think it’s great NeoVim breathed new life into Vim, just saying that life shouldn’t be wasted trying to clone what’s already been nailed by another project.

                                                  1. 6

                                                    Neovim isn’t an upstart.

                                                    You can claim that Vim doesn’t need asynchronous features, but the droves of people running like hell to more modern editors that have things like syntax aware completion would disagree.

                                                    Things either evolve or they die. IMO Vim has taken steps to ensure that people like you can continue to have your pristine unsullied classic Vim experience (timers are an optional feature) but that the rest of us who appreciate these changes can have them.

                                                    Just my $.02.

                                                    1. 2

                                                      Things either evolve or they die.

                                                      Yeah, but adding features is only one way of evolving/improving, and a poor one imho, which results in an incoherent design. What dw is getting at is that one can improve by removing things, by finding ‘different foundations’ that enable more with less. One example of such a path to improvement is the vis editor.

                                                      1. 1

                                                        Thanks, I can definitely appreciate that perspective. However speaking for myself I have always loved Vim. The thing that caused me to have a 5 year or so dalliance with emacs and then visual studio code is the fact that before timers, you really COULDN’T easily augment Vim to do syntax aware completion and the like, because of its lack of asynchronous features.

                                                        I know I am not alone in this - One of the big stated reasons for the Neovim fork to exist has been the simplification and streamlining of the platform, in part to enable the addition of asynchronous behavior to the platform.

                                                        So while I very much agree that adding new features willy-nilly is a questionable choice, THIS feature in particular was very sorely needed by a huge swath of the Vim user base.

                                                        1. 6

                                                          It appears we were talking about two different things. I agree that async jobs are a useful feature. I thought the thread was about the Terminal feature, which is certainly ‘feature creep’ that violates VIM’s non-goals.

                                                          From VIM’s 7.4 :help design-not

                                                          VIM IS… NOT

                                                          • Vim is not a shell or an Operating System. You will not be able to run a shell inside Vim or use it to control a debugger. This should work the other way around: Use Vim as a component from a shell or in an IDE.
                                                          1. 1

                                                            I think you’re right, and honestly I don’t see much point in the terminal myself, other than perhaps being able to apply things like macros to your terminal buffer without having to cut&paste into your editor…

                                                    2. -1

                                                      Emacs is not as fast and streamlined as Neovim-QT while, to my knowledge, not providing any features or plugins that haven’t got an equivalent in the world of vim/nvim.

                                                      1. 7

                                                        Be careful about saying things like this. The emacs ecosystem is V-A-S-T.

                                                        Has anyone written a bug tracking system in Vim yet? How about a MUD client? IRC client? Jabber client? Wordpress client, LiveJournal client? All of these things exist in elisp.

                                                        1. 3

                                                          Org mode and magit come to mind. Working without magit would be a major bummer for me now.

                                                  1. 5

                                                    Aiming to keep my word and have a non-alpha release of Mitogen for Ansible out today. I don’t think anyone really cares, but slips have a habit of snowballing, so I’m forcing myself to the original crowdfunding schedule as much as possible.

                                                    Naturally it is the worst day to bump into a design problem in a task that should have taken 30 minutes, but also it was the most obvious day it would happen on :)

                                                    1. 1

                                                      I’m glad I stumbled across your comment; this project looks really cool and very useful to me. I’ve also recommended it to my buddies at the org I volunteer for :)

                                                    1. 4

                                                      Hunting for remaining issues in Mitogen’s recently implemented fork support. Pretty upset to discover it’s somehow taking 8ms when it should be closer to 500 usec :S Sampling profilers are useless at this time scale, so I’m currently thinking about some approach involving gdb’s reverse debugging or similar.. I need to capture the behaviour of 2 processes containing 2 threads each. After I’m done with that, I’ll use the new support to add better isolation for Ansible modules that leak state or monkeypatch stuff.

                                                      1. 2

                                                        Hi, sorry I missed this. :) There is still some behind the scenes combobulation in progress that prevents me from writing an update. In short, looks like at least twice as much funding is available as is indicated on the Kickstarter page. Should have an update in the coming day or two. As usual, the tech parts turn out to be the easy part!

                                                        1. 3

                                                          I just backed this project on Kickstarter. If it can be made to work like it promises, it would be a huge productivity boost for me on several projects. Currently with Deps, I bake an image with Packer and Ansible for every new deployment (based on a golden image). That has been getting a bit slow, so I was looking at other deployment options. Having super fast Ansible builds would be great, and make that not as necessary.

                                                          1. 2

                                                            Hi Daniel, I keep forgetting to reply here – thanks so much for your support! For every neat complimentary comment I’ve been receiving 5 complex questions elsewhere. I’ve just posted a short update, and although it is running a little behind, it looks like the campaign still has legs. I’m certainly here until the final hour. :) Thanks again!

                                                          1. 8

                                                            Avoiding commercial work by rallying as many early pledges as possible for a crowdfunding campaign I’m attempting tomorrow – to fund a piece of free software :) The likelihood of success is probably under 1%, but I’ve never seen a better project to try it out with, so I’m going to give it a whirl.

                                                            I hadn’t expected quite so much upfront planning to be required. It’s nice to say it’s “risk free”, but in reality the danger of overcommitting when actual money is involved is immensely stressful, and the planning alone has been a significant sunk cost. Whether it works out or not, I’m not sure I’d try again.

                                                            1. 2

                                                              The support for standalone parsers is really tantalizing! I wasted quite some time on this problem a few years back, for a bug in the Salt devops tooling that appears to remain unfixed to this day.

                                                              If you’re looking for potent demonstrators of your library (and 9k GitHub stars can’t be wrong ;)), it might be worth your time investigating Salt:

                                                              I’ve long forgotten what the original bug was; however, at the time, things such as prefixing the expression with “not” were totally broken. I wouldn’t be surprised if that’s still the case. That single bug is hardly representative: minion filtering is a core part of the tool, and there are probably hundreds of bugs, logged or otherwise, that would be fixed by a real parser.

                                                              1. 2

                                                                As almost everything I manage at work is handled with Ansible, this is fantastically wonderfully delightfully adverbially-infusedly amazing.

                                                                Here’s more background details on this project, and why it would be a revolution over the current state of Ansible: http://pythonsweetness.tumblr.com/post/165366346547/mitogen-an-infrastructure-code-baseline-that

                                                                It’s rare that efforts like this can offer orders of magnitude better performance, but it seems within reach given what the developer has posted already.

                                                                I’m now seriously considering getting involved in the development / testing of Mitogen.

                                                                1. 2

                                                                  I just pushed v2 of the ansible proof of concept plugin, it’s now down to the target of one request/response per call to an already uploaded playbook module, but presently it’s doing horrible things to work around Ansible’s layering. Still, 4.7 seconds for 25 steps against a local VM :) Still no magic for handling sudo or non-.py modules but those parts are easy. The future is bright.

                                                                  Check out examples/playbook/

                                                                1. 3

                                                                   That’s not the only place such tricks are possible in /proc. You can create “abstract namespace” UNIX sockets (those whose .sun_path begins with ‘\0’), embedding newlines into the name. The result is the ability to insert arbitrary new lines into /proc/net/stat/unix (IIRC). Sorry, I noticed this a decade ago and can’t remember which file precisely.
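
                                                                   A quick sketch of the trick (Linux-only; I’m assuming the file in question is /proc/net/unix, and whether embedded newlines survive unescaped may vary by kernel version):

```python
import socket

# Abstract-namespace names start with a NUL byte and live outside the
# filesystem, so the kernel imposes almost none of the usual path rules
# on the remaining bytes - newlines included.
NAME = b"\0innocent-socket\nextra line smuggled into proc"

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind(NAME)
print(sock.getsockname() == NAME)  # True: the kernel accepted the name

# The raw name now shows up in the kernel's socket table.
with open("/proc/net/unix", "rb") as fh:
    table = fh.read()
print(b"innocent-socket" in table)
```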

                                                                  1. 3

                                                                     Hi @pushcx, could you possibly update the rule to ask for an indication of working eligibility for particular countries? The majority of “remote OK” US jobs don’t hire outside the US.

                                                                    1. 1

                                                                      Great idea. How would you phrase this? I don’t feel like I know hiring well enough to use the right phrase to convey this succinctly.

                                                                      1. 2

                                                                        Maybe something like: “location with indication if remote is OK and where it’s OK” would be a good start?

                                                                        There could be also a few simple examples of a good offer:

                                                                        Foo Software Ltd., TCL/TK developer for accounting software, remote OK anywhere in the world except France, 2.4kg of gold monthly.

                                                                        1. 2

                                                                          I like @hawski’s concision, but a more explicit treatment may be:

                                                                          Please include any remote working restrictions such as time zone, contract type and eligibility - many companies can only make permanent hires within their home country, which is problematic for overseas candidates.