If you’re already using Emacs, nothing beats Magit IMO. Extremely powerful and extensible (e.g. magithub for GitHub integration), and it’s in active development after a successful Kickstarter campaign by maintainer Jonas Bernoulli. Highly recommended!
Agree. I think Magit ranks pretty close to my favorite piece of software ever. Everything is so efficient that it improved the way I commit code. Rewriting history is so nice.
I recently switched from a 2011 MacBook Air to an X1 Carbon 5th Gen and needed a replacement for 1Password. Thankfully @jcs had already handled this for me with bitwarden-ruby - https://jcs.org/2017/11/17/bitwarden & https://github.com/jcs/bitwarden-ruby
I’m not a fan of Ruby and thought this would be a perfect project to write in Go, since it can be a single binary and would simplify deployment/installation. I got the majority of it working and will post it up this week once I get it cleaned up, tested, and documented a bit better. Thanks again to @jcs and Kyle Spearrin for doing the actual hard parts. :)
Same here. I actually switched to the betas when 58 started being the nightly. The only issue for me was Hangouts, but my company recently switched away from Hangouts so it’s not a problem anymore.
My issue is that WebExtensions are not as powerful as the older extensions. Now it’s all “chromey” in its limitations.
I’m curious why it’s a performance win. I would think spinning up an isolated JS virtual machine for each extension would be significantly more expensive and slower than the old compiled extensions.
Old extensions weren’t compiled, and the new ones don’t get their own JS VM. The performance win here likely comes from cutting off old, crufty, synchronous APIs (mostly internal, but hard to remove when used by lots of popular add-ons). That’s easier once you declare them legacy.
It was previously the case that a poorly written add-on could slow down all facets of Firefox in general. Now that the only way to hook into Firefox’s internals is via well-defined and optimized APIs, this should happen much less often.
It also allows the Firefox devs to iterate quickly without worrying about breaking extensions, since there is now a defined interface for extensions that is all they need to preserve.
I have two questions about that:
One, I want the same theme capability as I’ve always had. I want Firefox to look like it does for me now, not like the stock Firefox. Is that possible?
Two, I want ad blocking and script blocking and all the other privacy-enhancing add-ons to work as well, not like they do in Chrome where the bad stuff is fundamentally still loaded, it’s just hidden at some point in the rendering cycle. Is that possible?
You can still manually edit userChrome.css. Complete Themes are not supported in >= 57.
Blocked stuff is not “fundamentally still loaded”, not even in Chrome I think?! E.g. Privacy Badger returns {cancel: true} in an onBeforeRequest interception handler. IIRC the “just hidden” stuff is from the very early days of Chrome extensions.
For add-ons, the answer is yes. See the Privacy add-on collection or other featured extensions.
Your look-and-feel question is hard to answer without knowing what Firefox looks like to you now. :) If you insist that tabs should be round, it’s not going to be easy, but it is possible.
I insist that tabs go below the address bar, like they did in the original Firefox and like they do now with the right add-on: https://addons.mozilla.org/en-US/firefox/addon/classicthemerestorer/
Some people want easy access to the benefits of containerization, such as resource limits, network isolation, privilege separation, and capabilities. Docker is one system that makes all of that relatively easy to configure and use.
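To make “resource limits” a bit more concrete: Docker applies them through cgroups, but the classic per-process mechanism is POSIX rlimits, which you can experiment with from Python’s standard library. This is a sketch of the older mechanism, not what Docker itself does:

```python
import resource

# Current soft/hard limits on open file descriptors for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit; this caps one process (and its children).
# cgroups, which Docker uses, instead cap a whole group of processes
# at once, and cover CPU, memory, and I/O as well as file handles.
resource.setrlimit(resource.RLIMIT_NOFILE, (min(256, hard), hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```

The difference in granularity (one process vs. a whole tree of them, one resource vs. every resource class) is a big part of why the cgroups-based tooling took over.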
Docker is one system that makes me wish Solaris Zones had taken off; Zones had all of that, but without the VM.
Docker hasn’t used LXC on Linux in a while. It uses its own libcontainer, which sets up the Linux namespaces and cgroups.
This is the correct answer. It’s a silly question. Docker has nothing to do with fat binaries. It’s all about creating containers for security purposes. That’s it. It’s about security. You can’t have security with a bunch of fat binaries unless you use a custom jail, and jails are complicated to configure. You have to do it manually for each one. Containers just work.
security
That is definitely not why I use it. I use it for managing many projects (go, python, php, rails, emberjs, etc) with many different dependencies. Docker makes managing all this in development very easy and organized.
I don’t use it thinking I’m getting any added security.
I don’t use it thinking I’m getting any added security.
The question was “Why would anyone choose Docker over fat binaries?”
You could use fat binaries of the AppImage variety to get the same, and probably better organization.
Maybe if AppImages could be automatically restricted with firejail-type stuff they would be equivalent. I just haven’t seen many developers making their apps that way. Containers let you deal with apps that don’t create AppImages.
Interesting. So in effect you wish to “scope” portions for “protected” or “limited” use in a “fat binary”, as opposed to the wide-open scope implicit in static linking?
So we have symbol resolution by simply satisfying an external, resolution by explicit dynamic binding (dynload call), or chains of these connected together? These are all the cases, right?
We’d get the static cases handled via the linker, and the dynamic cases through either the dynamic loading functions or possibly wrapping the mmap calls they use.
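The explicit-dynamic-binding case is easy to poke at from Python, whose ctypes module wraps dlopen/dlsym on POSIX systems. A minimal sketch, assuming a standard C math library is present:

```python
import ctypes
import ctypes.util

# Resolve the C math library at run time rather than link time;
# ctypes wraps dlopen()/dlsym() on POSIX systems.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the symbol's signature before calling through it.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0
```

Interposing on this path (e.g. to scope what a dynamically bound symbol may touch) means wrapping dlopen/dlsym themselves, which is the same problem the mmap-wrapping idea has to contend with.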
That sounds genuine.
So I get that it’s one place, already working, to put all the parts. I buy that.
So in this case, it’s not so much Docker for Docker’s sake as it is a means to an end. This answers my question well, thank you. Any arguments to the contrary with this? Please?
This answers my question well, thank you. Any arguments to the contrary with this? Please?
While I think @adamrt is genuine, I’m interested in seeing how it pans out over the long run. My (limited) experience with Docker has been:
I suspect the last point is going to lead to many “we have this thing that runs but don’t know how to make it again, so just don’t touch it and let’s invest in not touching it” situations. People who are thoughtful and make conscious decisions will love containers. People inheriting someone’s lack of thoughtfulness are going to be miserable. But time will tell.
Well, these aren’t arguments to the contrary, but they are accurate issues with Docker that I can confirm as well. Thank you for detailing them.
I think there’s something more to it than that. On Solaris and SmartOS, you can have security/isolation with either approach. Individual binaries have privileges, or you can use Zones (a container technology). Isolating a fat binary using ppriv is if anything less complicated to configure than Zones. Yet people still use Zones…
I thought it was about better managing infrastructure. Docker itself runs on binary blobs of privileged or kernel code IIRC (I don’t use it). When I pointed out its TCB, most people talking about it on HN told me they really used it for the management and deployment benefits. There was also a slideshow a year or two ago showing security issues in lots of deployments.
What’s the current state of security versus VMs on something like Xen, or a separation kernel like LynxSecure or INTEGRITY-178B?
Correct. It is unclear what the compartmentalization aspect of containers specifically contributes to security.
I’ve implemented TCSEC Orange Book Class B2/B3 systems with labelling, and worked with Class A hardware systems that had provable security at the memory-cycle level. Even these had intrusion evaluations that didn’t close, but at least the models showed the bright line of where the actual value of security was delivered, as opposed to the loose, vague concept of security being offered as a defense here.
FWIW, the actual objective of the framers of that security model was a program-verifiable object-oriented programming model to limit information leakage, in programming environments that let programs “leak” trusted information to trusted channels.
You can embed crypto objects inside an executable container, and that would deliver a better security model without additional containers, because then you deal with key distribution without the additional leakage from the extra intra-container references that would otherwise be necessary.
So again, I’m looking for where’s the beef, instead of the existing marketing buzz that makes people feel good/secure because they use the stuff that’s cool at the moment. I’m all ears for a good argument for all these things, I really am… but I’m not hearing it yet.
Thanks to Lobsters, I’ve already met people that worked at capability companies such as those behind KeyKOS and E. Then I heard from someone from SecureWare who had eye-opening information. Now, someone that worked on the MLS systems I’ve been studying a long time. I wonder if it was SCOMP/STOP, GEMSOS, or LOCK, since your memory-cycle statement is ambiguous. I’m thinking STOP, at least, since you said B3. Do send me an email at the address in my profile, as I rarely meet folks knowledgeable about high-assurance security, period, much less folks who worked on systems I’ve studied for a long time at a distance. I stay overloaded, but I’ll try to squeeze some time into my schedule for those discussions, especially on old versus current.
I thought it was about better managing infrastructure.
I mean, yes, it does that as well, and you’re right, a lot of people use it just for that purpose.
However, you can also manage infrastructure quite well without containers by using something like Ansible to manage and deploy your services without overhead.
So what’s the benefit of Docker over that approach? Well… I think it’s security through isolation, and not much else.
Docker itself runs on binary blobs of privileged or kernel code IIRC (I don’t use it).
Yes, but that’s where capabilities kick in. In Docker you can run a process as root and still restrict its abilities.
Edit: if you’re referring to the dockerd daemon which runs as root, well, yes, that is a concern, and some people, like Jessie Frazelle, hack together stuff to get “rootless container” setups.
When I pointed out its TCB, most people talking about it on HN told me they really used it for management and deployment benefits. There was also a slideshow a year or two ago showing security issues in lots of deployments.
Like any security tool, there’s ways of misusing it / doing it wrong, I’m sure.
According to Jessie Frazelle, Linux containers are not designed to be secure: https://blog.jessfraz.com/post/containers-zones-jails-vms/
Secure container solutions existed long before Linux containers, such as Solaris Zones and FreeBSD Jails yet there wasn’t a container revolution.
If you believe @bcantrill, he claims that the container revolution is driven by developers being faster, not necessarily more secure.
According to Jessie Frazelle, Linux containers are not designed to be secure:
Out of context it sounds to me like you’re saying “containers are not secure”, which is not what Jessie was saying.
In context, to someone who read the entire post, it was more like, “Linux containers are not all-in-one solutions like FreeBSD jails, and because they consist of components that must be properly put together, it is possible that they can be put together incorrectly in an insecure manner.”
Oh sure, I agree with that.
Secure container solutions existed long before Linux containers, such as Solaris Zones and FreeBSD Jails yet there wasn’t a container revolution.
That has exactly nothing (?) to do with the conversation? Ask FreeBSD why people aren’t using it as much as Linux, but leave that convo for a different thread.
That has exactly nothing (?) to do with the conversation?
I’m not sure how the secure part has nothing to do with the conversation, since the comment this is responding to is you saying that security is the reason people use containers/Docker on Linux. I understood that as you implying that security was the game changer. My experience is that it has nothing to do with security; it’s about developer experience. I pointed to FreeBSD and Solaris as examples of technologies that had secure containers long ago, but they did not have a great developer story. So I think your belief that security is the driver for adoption is incorrect.
Yes. Agreed not to discuss more on this thread, … but … jails are both too powerful and not powerful enough at the same time.
Generally, when you add complexity to any system, you decrease its security, because you’ve increased the footprint that can be attacked.
Surf, skydive and Street Fighter 5. Not very good at any of them :) Also always planning the next travel spot.
Is there a reason this is getting down votes? Did I submit it wrong or did you not like the interview or something else? Just curious.
It’s an exact duplicate of another post. There is a mechanism in place to ensure that exact duplicate URLs aren’t posted, but there isn’t any protection against the way your URL was different, outside of downvotes for “Already Posted”.
Is there no way for a poster to see the breakdown of why their post was downvoted? Other than forcing people to consider why they downvote, that seems to be one point of selecting a reason, right? I’m curious.
Terminal within vim now?
From the article:
Pretty cool addition. :-)
Neovim has had this for over a year now. Neovim has been pretty great for pushing vim forward.
I wonder if the new Vim terminal used any code from the NeoVim terminal. I know NeoVim was created in part because Bram rejected their patches for adding async and other features.
I have to say, I really don’t care to see this in a text editor. If anything it’d be nice to see vim modernize by trimming features rather than trying to compete with some everything-to-everybody upstart. We already had emacs for that role! I just hope 8.2 doesn’t come with a client library and a hard dependency on msgpack.
Edit: seems this was interpreted as being somewhat aggressive. To counterbalance that, I think it’s great NeoVim breathed new life into Vim, just saying that life shouldn’t be wasted trying to clone what’s already been nailed by another project.
Neovim isn’t an upstart.
You can claim that Vim doesn’t need asynchronous features, but the droves of people running like hell to more modern editors that have things like syntax aware completion would disagree.
Things either evolve or they die. IMO Vim has taken steps to ensure that people like you can continue to have your pristine unsullied classic Vim experience (timers are an optional feature) but that the rest of us who appreciate these changes can have them.
Just my $.02.
Yeah, but adding features is only one way of evolving/improving, and a poor one IMHO, which results in an incoherent design. What dw is getting at is that one can improve by removing things, by finding ‘different foundations’ that enable more with less. One example of such a path to improvement is the vis editor.
Thanks, I can definitely appreciate that perspective. However speaking for myself I have always loved Vim. The thing that caused me to have a 5 year or so dalliance with emacs and then visual studio code is the fact that before timers, you really COULDN’T easily augment Vim to do syntax aware completion and the like, because of its lack of asynchronous features.
I know I am not alone in this: one of the big stated reasons for the Neovim fork to exist has been the simplification and streamlining of the platform, in part to enable the addition of asynchronous behavior.
So while I very much agree with the idea that adding new features willy-nilly is a questionable choice, THIS feature in particular was very sorely needed by a huge swath of the Vim user base.
It appears we were talking about two different things. I agree that async jobs are a useful feature. I thought the thread was about the terminal feature, which is certainly ‘feature creep’ that violates Vim’s non-goals.
From Vim 7.4’s :help design-not.

I think you’re right, and honestly I don’t see much point in the terminal myself, other than perhaps being able to apply things like macros to your terminal buffer without having to cut & paste into your editor…
Emacs is not as fast and streamlined as Neovim-QT while, to my knowledge, not providing any features or plugins that don’t have an equivalent in the world of vim/nvim.
Be careful about saying things like this. The emacs ecosystem is V-A-S-T.
Has anyone written a bug tracking system in Vim yet? How about a MUD client? IRC client? Jabber client? Wordpress client, LiveJournal client? All of these things exist in elisp.
Org mode and magit come to mind. Working without magit would be a major bummer for me now.
https://github.com/jceb/vim-orgmode