It’s been a while since I’ve used Gnome 3 but I remember preferring Unity to it at the time. Not really a big deal for me since most of my “window management” is done inside Emacs and Firefox.
Ubuntu’s entry into phones reminds me a lot of Mozilla’s. Who was asking for this, and why did it have to come at the expense of their desktop offering?
For me, Unity seemed to work better out of the box than Gnome, and I’ve been using it daily for years now. If it’s solid and just works, I really just forget about it.
I wonder if they will change the Gnome shell to look like Unity? I mean the menu bar always at the top, the window buttons on the left…
I used Gnome as my daily driver for a few years, and I really liked it. I’d also imagine that Ubuntu will modify and/or extend Gnome-shell to make it more Unity-like.
I wonder what this means for Mir, their Wayland competitor?
It would be a shame if they dumped 4 years of time and money into Mir when they could have been dumping it into Wayland.
From the Ars article at https://arstechnica.com/information-technology/2017/04/ubuntu-unity-is-dead-desktop-will-switch-back-to-gnome-next-year/:
By switching to GNOME, Canonical is also giving up on Mir and moving to the Wayland display server, another contender for replacing the X window system.
Those are specifically mentioned in the article as something they plan to keep:
The choice, ultimately, is to invest in the areas which are contributing to the growth of the company. Those are Ubuntu itself, for desktops, servers and VMs, our cloud infrastructure products (OpenStack and Kubernetes) our cloud operations capabilities (MAAS, LXD, Juju, BootStack), and our IoT story in snaps and Ubuntu Core.
Snaps are designed to run on distros other than Ubuntu, so they’re pretty much completely independent of Unity.
It’s probably safe to assume that any Ubuntu project unrelated to Unity will continue development.
They already dumped Upstart and now Unity, so why not Mir? If Mir has advantages over Wayland, please let me know, because I know very little about the differences between the two display servers (protocols).
I am also pretty happy about the announcement, because it looks like Canonical won’t keep developing an in-house alternative for everything; instead, they will put the effort into improving an already existing solution, which will hopefully benefit anyone using Gnome, even those not running Ubuntu. In other words, this decision is good news for the Linux desktop.
They’re giving up Mir and moving to Wayland.
The problem is not that LetsEncrypt issued those certificates. It’s that we taught people that they should look for the green lock to tell whether a website is legitimate.
Well, it is.
If the green lock is right next to https://we-steal-from-your-paypal-account.mysite.com, then the site really is we-steal-from-your-paypal-account.mysite.com.
I think he means that people think that if the green lock is there, the website is somehow more “trustworthy”, when in fact it’s just a measure of connection security. So someone sees the green lock on https://we-steal-from-your-paypal-account.mysite.com and thinks that means it is “trustworthy”. Most people I know have no idea what the green lock means other than “it’s good to look for when I online bank”.
Most modern browsers do attempt to also convey information about the owner of the site in the URL bar, where available, and distinguish that from the connection security status. A green lock on its own means just that the connection is secure (but the site could be anything), while a green lock with text next to it, like “JPMorgan Chase and Co. (US)” (which is what shows for me in Firefox and Chrome when I visit Chase), conveys that the connection is secure and that the CA has also authenticated the site as owned by “JPMorgan Chase and Co. (US)”. I think many users are likely unaware of how to interpret these distinctions, though.
This is not browser-specific. The ownership information is shown for EV (“extended validation”) certificates. Let’s Encrypt offers DV certs only, which means all they verify is that the person requesting the cert really owns the domain.
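To make the DV/EV distinction above concrete, here is a small Python sketch that inspects a server certificate’s subject: a DV cert normally carries only a commonName, while an EV (or OV) cert also carries an organizationName. The function names and the example host are illustrative, not from any particular tool.

```python
# Sketch: check whether a site's certificate subject carries ownership
# information. DV certs (what Let's Encrypt issues) normally list only a
# commonName; EV certs also include an organizationName.
import socket
import ssl

def flatten_subject(subject):
    # getpeercert() returns the subject as a tuple of RDN tuples,
    # e.g. ((('commonName', 'example.com'),),); flatten it to a dict.
    return {name: value for rdn in subject for name, value in rdn}

def describe_cert(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            subject = flatten_subject(tls.getpeercert()["subject"])
    if "organizationName" in subject:
        return "owner validated: " + subject["organizationName"]
    return "domain validated only: " + subject.get("commonName", "?")

if __name__ == "__main__":
    # Needs network access; output depends on the cert the host serves.
    print(describe_cert("example.com"))
```

Note that this only reads the subject fields; real EV treatment in browsers also checks policy OIDs against a list of EV-enabled roots.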
I agree. If anything, I’d say it makes more sense for browsers to take on this role rather than the CAs. Perhaps browsers could warn users if they’re about to send secrets to a site with a domain that contains or is a misspelling of one of Alexa’s top 500 domains.
This certainly isn’t a perfect solution. It might not even be a good one. But I don’t think a CA filtering which domains are allowed is a good solution either.
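The browser-side check suggested above could be sketched roughly like this: compare a domain against a small list of popular domains using a string-similarity ratio, and flag close-but-not-exact matches. The tiny `POPULAR` list and the 0.85 threshold are made-up placeholders; a real implementation would use a full top-sites list and a more careful comparison of the registrable domain.

```python
# Sketch of the idea above: flag domains that look like misspellings of
# well-known ones. POPULAR and the threshold are illustrative only.
from difflib import SequenceMatcher

POPULAR = ["paypal.com", "google.com", "chase.com", "amazon.com"]

def looks_like_misspelling(domain, threshold=0.85):
    # An exact match of a popular domain, or a subdomain of one, is fine.
    if any(domain == p or domain.endswith("." + p) for p in POPULAR):
        return False
    # Flag anything that is merely *similar* to a popular domain.
    return any(SequenceMatcher(None, domain, p).ratio() >= threshold
               for p in POPULAR)

print(looks_like_misspelling("paypa1.com"))   # similar to paypal.com -> True
print(looks_like_misspelling("example.com"))  # not close to any -> False
```

Even this toy version shows the trade-off: the threshold has to be tuned so that legitimate short domains don’t get flagged as typosquats.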
I never taught anyone that. I taught them it means that 3rd parties can’t eavesdrop on their conversation with that server.
Wow, longer lasting, faster charging, noncombustible, better operation at lower temperatures, and more environmentally friendly materials.
However, it’s important to remember there’s a lot of work left to be done before you can buy a phone with one of these in it. I imagine new manufacturing processes will need to be developed before these can be produced at scale in a way that is cost effective.
For Haskell, Learn You a Haskell for Great Good! (mentioned in the link) is excellent! Free digital version and pretty easy to follow. After reading that book, I was able to re-create a clone of Doom in Haskell, occasionally following along with https://github.com/levex/haskell-doom.
For passwords, my company uses Team Password Manager (http://teampasswordmanager.com/?o=FOOTER). It’s not free (there is a free trial available), but it works well in environments where people are joining and leaving projects often, and I trust that it’s secure.
Thank you for posting this. I work around a lot of RF equipment, but I only know DSP at the highest and most basic level. I’m hoping this will help me gain a working knowledge of it.
I feel the same way - which is probably why I also found this interesting.
Another resource I really liked is Practical Signal Processing. It is also a practitioner-focused treatment of the material, with enough theory to make you dangerous. It’s been a big help for me in understanding the DSP components of GNU Radio flowgraphs. It doesn’t necessarily cover details on the internal implementations of the different processing stages, but it is great for coming up to speed on the discrete steps in a processing pipeline.