What I miss from those days: keyboard layout switching worked reliably and happened instantly. Now it causes loss of focus in the current window, long delays (so after pressing the “switch layout” keybinding, a few typed characters still come out in the old layout), and few keybinding choices, such as only Ctrl+Shift and Alt+Shift. It’s painful now in all distros, and I’m not sure whether the old functionality (built into the X server, I think) can still be used.
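(For reference, the old X-server-level switching can still be configured directly with XKB options, independent of the desktop environment; the layouts and toggle key below are example choices.)

```shell
# Pure X server layout switching: Caps Lock toggles between the two layouts.
setxkbmap -layout us,de -option grp:caps_toggle
```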
Also both latest Gnome and KDE have terrible UI. Gnome tries to treat desktop as tablet computer and KDE has Vista-era shiny plastic look.
This is why I’ve stuck with xfce for so long. I am a bit concerned now that xfce is moving to GTK3; I fear it will end up more like Gnome 3 than xfce.
I think that’s unlikely. Most things are already ported to Gtk3 and they look exactly like they did on Gtk2.
I don’t understand the hate for Gnome. When you critique Gnome’s UI, are you comparing it to the high-water mark, the best desktop UI you’ve ever experienced, or to the latest iterations of macOS and Windows? Gnome isn’t developed in a vacuum; it is competing with the mainstream commercial desktop environments, which means making compromises that negatively affect highly technical users, but it results in a product that in some dimensions of UI may still be better than Windows and macOS, which is impressive IMO.
I only dislike its desktop elements, mostly the top bar, which consists of a strange menu item in the left corner and a clock in the center. There is too much unused space around the clock. This is a bad UI decision originally implemented on the iPad, which has a standard mobile-phone status bar on top (it originates from “feature phones”, not even the iPhone).
GTK3, however, is great (at least on Linux), and I like the settings dialogs and Gnome apps.
I appreciate that Gnome is at least trying to do something other than the “yet another Windows 95 clone” that the X11 world is fixated on. (Unless it’s a tiling WM… I wonder what a UX-oriented desktop built around tiling would be like…)
I’m comparing it to Gnome 2 and XFCE, and it fails terribly in this regard.
XFCE is well-liked because it doesn’t try to abandon its user-base in favor of chasing some mass-adoption unicorns.
If mass-adoption of Linux on the desktop ever happens, it will not be caused by Gnome 3 displaying fewer options in their GUI.
Ah, I definitely felt similarly when moving from Gnome 2 to Gnome 3. Once I got used to Gnome 3 though, I forgave Gnome. The search overlay is better than macOS’s Spotlight. The built-in tiling is good enough. The default Debian themes are classy. The animations are classy and smooth even with integrated graphics. It never freezes or crashes. The best part is all these batteries are included, so Gnome requires very few user choices or customization.
I use GNOME 3 on my Linux machine, but I can’t say that I am happy. How do you live without menus? Or system tray icons (for e.g. Dropbox, Keybase)?
I know that there are some extensions that bring these things back, but they tend to reduce stability of GNOME. And with Wayland bugs tend to crash gnome-shell/mutter and log you out of the session completely.
I wonder who they are targeting when they are removing features that have been part of the WIMP paradigm for more than three decades? No one wants big innovation on the desktop, just provide a robust, predictable desktop environment that is up to date with the latest standards (Wayland, Vulkan rendering, etc.).
(Of course, it’s their project and they can do whatever they want to do with it, I just don’t understand the philosophy.)
I use Spotlight for everything; it’s brilliant :/
KDE has Vista-era shiny plastic look.
That’s much easier to solve (through the thousands of available themes) than this “Gnome tries to treat desktop as tablet computer”. For example, my own KDE setup looks like this: https://i.imgur.com/8eAze8v.png
Does it crash often? Last time I used it, it crashed periodically (but that was a few years ago).
I haven’t actually had a crash in over a year. It’s become much more stable in the past few months, no more flickering when adding/removing monitors quickly either.
I currently work in fintech and, prior to that, ad tech. It’s soulless work on the best of days. My only issue is that there don’t seem to be many altruistic companies hiring, or at least in SV the signal-to-noise ratio is so bad that they get squelched by all the startups hiring.
Maybe another weworkremotely needs to be made (think goodtechjobs.org) so that we can connect altruistic orgs with engineers who want to make a difference.
Check out Binti. We provide web software to child welfare agencies that makes it easy for members of the public to become foster parents and helps agency staff approve foster families. Based in SF. https://binti.com/binti-careers/
Is it open source or are you really just trying to replace these agencies with your proprietary apparatus?
Thanks for the good questions both of you.
We are in it for the long haul, and have plans to become a B Corporation.
Binti is certainly not replacing child welfare agencies - after all, agencies have entire teams of staff providing services - we’re giving them modern IT to help them do their jobs better.
Right now, Binti’s customers and prospective customers are widely interested in fully-managed SaaS - they are grateful for our operations, security, and compliance expertise - they aren’t seeking to operate their own systems. It still feels really good developing software that helps find kids homes, even though the software is proprietary, not open source.
We do allow the agencies to download their data in standard formats in a self-service manner, so we don’t believe we are introducing any undue lock-in. Also we provide source code escrow that grants our customers a wide license to our software in the event that we severely breach our contract.
I’d love to hear your ideas for reducing risk to our customers in the case of acquisition or going out of business, as well as ideas for harnessing capitalism to benefit the little guy: building an organization that gets many of the benefits of capitalism while causing the least harm.
I am sorry that you do not see that by making proprietary tools you effectively replace the internal know-how of the public sector and make it more vulnerable to attacks from the private sector in the long run.
I have seen accounting departments that are no longer able to function without consulting the private supplier. And it gets worse every year as the accountants leave. I do not expect anything else here.
But don’t stress it much. Somebody will eventually rewrite the stuff, put it out in the open and drive you out of business. It’s cheaper.
That’s a good point. The software startups that survive either IPO into Wall St control or get acquired by companies that didn’t get big playing nice. Most of them like lock-in and scheme on people. So, if it’s not FOSS or a non-profit, there’s a good chance that what’s helping child welfare agencies now might become something causing them problems later. That’s the kind of risk I’d never want to happen.
Also, @gkop, do note you can charge money for GPL’d software so long as you provide source on request or just have it in an FTP directory somewhere. You get market share through branding, networking, and execution in general. There are lots of FOSS also-rans to proprietary apps whose companies just out-marketed and out-executed them.
Thanks Nick, replied as sibling!
Already posted, by you, 6 days ago:
Sorry, I didn’t know that Lobste.rs discouraged this. Thanks for explaining rather than just downvoting!
I’ve encountered errors like the ones mentioned here, and I’ve never even rolled my own CSV code before.
It’s actually a pretty terrible “standard.”
There is an actual CSV standard, namely RFC 4180, so it doesn’t deserve the scare quotes. Whether producers and consumers follow the standard is a different matter.
Writing your own CSV parser isn’t that hard though. This post pretty much tells you all you need to know, it’s extremely straightforward to write a parser that handles all of it. If you’ve ever written a basic lexer for a programming language with strings you’ve done more than a CSV parser.
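To make that concrete, here is a rough sketch of such a parser in Ruby (hypothetical code, not from the post). It handles the RFC 4180 essentials: quoted fields, escaped "" quotes, and commas or newlines inside quotes. It deliberately does nothing clever about malformed input.

```ruby
# Minimal RFC 4180-style CSV parser sketch. Returns an array of rows,
# each row an array of string fields.
def parse_csv(input)
  rows, row, field = [], [], +""
  in_quotes = false
  i = 0
  while i < input.length
    c = input[i]
    if in_quotes
      if c == '"'
        if input[i + 1] == '"'   # "" inside quotes is an escaped quote
          field << '"'
          i += 1
        else
          in_quotes = false      # closing quote
        end
      else
        field << c               # commas and newlines are literal here
      end
    else
      case c
      when '"' then in_quotes = true
      when ',' then row << field; field = +""
      when "\n"
        row << field; field = +""
        rows << row; row = []
      when "\r"                  # tolerate CRLF line endings
      else field << c
      end
    end
    i += 1
  end
  row << field unless row.empty? && field.empty?  # input without final newline
  rows << row unless row.empty?
  rows
end
```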
It’s straightforward if your users are cool with your parser barfing on their malformed inputs. Lexer users expect it to barf. Not CSV users.
I have a T460s and it seems like a solid machine overall. Granted, I’ve had it for .. uh, about half a year?
I wouldn’t vouch for the quality of its keyboard though. The scissor mechanism is really quite flimsy and cheap, and if you’re unlucky, you might find a “sticky” key. Oh, and the trackpad sucks for small motions. Like when you land the cursor right next to a tiny link you intend to click, moving your finger slowly results in no cursor movement at all until at some point it suddenly jumps over the link. There’s probably a software workaround waiting to be discovered.
I would like it more if it had easily replaceable & extensible batteries. Yes, it has two batteries..
It’s hard to give a more informative comment since I’ve no idea what sort of things you expect from a developer laptop.
I would like it more if it had easily replaceable & extensible batteries.
Its predecessor, the T450s, had such batteries, and they’re awesome. The T460s introduced the tapered, wedge-shaped lower body (like an X1 or MacBook Air); it’s thinner and lighter and overall an excellent machine, but unfortunately without the swappable battery.
In general http://www.notebookcheck.net/ is a great site for checking out laptops.
I can recommend the Dell XPS 15; I used it as my work laptop for the past 2 years with Linux Mint 17 and was very happy with it. I’ve now purchased a Dell XPS 13 as my personal couch/travel laptop and am also somewhat happy with it. Build quality for both is superb.
A minor downside is that the most powerful versions of the XPS 15 (especially the ones with more RAM) come with a glossy 4K touch display that I have absolutely no use for. Similar for the XPS 13.
The XPS 13 is sadly the first Linux laptop I ever had trouble with, which is weird, as it is the only officially Linux-supported laptop I ever bought. Wifi only worked after package upgrades (good that I had an adapter for a wired connection around), plus I needed to deactivate some stuff in the BIOS. Also, the USB-C to VGA/HDMI adapter they sell does not work with Linux… the one I bought only works for HDMI. So, be aware.
As I just got back from researching laptops here are a few others:
Personally I prefer laptops without a dedicated graphics card. I have a desktop for that, it saves weight, and switching between the integrated and dedicated GPU is still subpar on Linux (in Linux Mint switching is built in, but you have to log out and back in).
Personally I prefer laptops without a dedicated graphics card
Ditto. There’s also the issue of dedicated graphics often being a point of failure (see, e.g., the GeForce 8600M issues of a few years ago). Intel video support is also often less troublesome when using Linux/*BSD systems.
The last couple generations of Intel graphics are very impressive indeed. More than adequate for a development workstation (for example, they can push > 10 million pixels 3D-accelerated). And great battery life and pretty good Linux drivers.
The universal SSL thing is egregious. You just can’t take the company seriously when they allow this so nonchalantly. Browsers should mark these endpoints the same as plaintext endpoints.
Unless I’m misunderstanding, you’re specifically reading from the login keychain, which is unlocked by logging in, as the name suggests. In my experience using Linux desktop environments with password managers, the contents of the login keychain are considered public to any process running under the user account. The data has necessarily been decrypted already.
That’s exactly it: the login keychain makes it very easy to securely and conveniently store credentials by storing them encrypted and unlocking them on login, but all that means you must lock your screen while you’re not using your computer.
That Apple isn’t making it easy to lock a mac’s screen out of the box is a pretty big problem.
Ctrl-shift-eject or ctrl-shift-power, although you’re right that a menu item would help that quite a bit.
Just wanted to note that ctrl-shift-power is kind of a crappy combo because the power button is all the way up there in the top-right corner of the keyboard. Also, if you have a MacBook setup with an external non-Mac keyboard (i.e., a keyboard without an eject key), this combo requires that you reach for the power button on your MacBook. Of course you can use third-party tools, AppleScript, or custom keyboard firmware to set up an alternate mapping, but that’s tedious and we shouldn’t expect regular users to have to do that.
Gnome uses super-l to lock the screen, which doesn’t have these issues, and is also easy to remember because of the mnemonic “l for lock”.
Windows uses the same “windows+L” combination to lock, and my keyboard actually has an explicit “lock” key combination (Fn+F2, which generates XF86ScreenSaver) that I’ve configured for the purpose. Screen locking is one of those fundamental keyboard shortcuts that should get pride of place, not relegated to the bucket of forgettable two-modifier shortcuts.
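On Gnome, the binding is just a dconf-backed setting, so remapping it is a one-liner (the key name below is from the Gnome 3 settings-daemon schema; Super+L is the default, shown only to illustrate the mechanism):

```shell
# Rebind the Gnome screen-lock shortcut to a combo of your choosing.
gsettings set org.gnome.settings-daemon.plugins.media-keys screensaver "<Super>l"
```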
There is an entire menu. Applications > Utilities > Keychain Access > Preferences… > General > Show keychain status in menu bar.
Edit: I should probably mention that choices in the menu include “lock screen” and “lock keychain”.
It’s extremely easy using “Hot Corners” - my MBP goes to a password-protected screensaver immediately once I move the cursor to the upper-right corner of the screen.
You could argue that this should be enabled out of the box I suppose, but it’s really quite trivial to do.
God no, I loathe hot corners. The keyboard shortcut is far easier.
This is the right answer. It’s not a “security flaw,” the flaw is the access control given to said “attacker.” If you’ve let someone on your system, they can easily get your keys. That’s why one should use guest accounts/limited privileges when other people are to use the system.
While I’m logged into my Debian user account, I’d never let someone else touch my system.
This is a pretty nice primer.
I wish Docker was more polished. In practice, I’ve found it buggy and brittle and altogether annoying to use as a development environment. I do not share the rosy attitude of the author :(
It gets worse - wait until you add foreign key and not null constraints. In order to have fast tests then, I’m convinced you need to avoid most of ActiveRecord and FactoryGirl.
If you’re going to go through all this trouble, wouldn’t the merge commit that is created if you Merge the PR in GitHub’s web interface also annoy you?
Current project workflow once CircleCI has passed and the code review is done:
Rebase on Master
Squash branch into one commit.
Force push branch. (<- this rings my alarm bells)
git merge --ff-only my-branch.
Prime directive is: Don’t touch the green merge button ever.
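The sequence can be sketched end to end in a throwaway repository (branch and commit names are invented, and the force push in step 3 is left as a comment since there is no remote here):

```shell
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name Dev
trunk=$(git symbolic-ref --short HEAD)    # "master" or "main", depending on config
echo base > file; git add file; git commit -qm base
git checkout -q -b my-branch
echo one >> file; git commit -qam "wip 1"
echo two >> file; git commit -qam "wip 2"
git rebase -q "$trunk"                    # 1. rebase on the trunk branch
git reset --soft "$trunk"                 # 2. squash the branch into one commit
git commit -qm "feature: one tidy commit"
# 3. here you would: git push --force-with-lease origin my-branch
git checkout -q "$trunk"
git merge --ff-only my-branch             # 4. fast-forward only: no merge commit
```

The `--ff-only` flag makes git refuse to create a merge commit, so history stays a straight line of tidy, reviewed commits.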
Turning on the TypeScript “noImplicitAny” check, which requires all types to be explicit (or inferred), means putting a lot of type annotations into the Angular 2 codebase where they are currently missing.
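As a toy illustration (hypothetical snippet, not from the Angular codebase): with “noImplicitAny” enabled, the compiler rejects the function below if the parameter annotation is removed, reporting that the parameter implicitly has an ‘any’ type.

```typescript
// Under "noImplicitAny": true in tsconfig.json, dropping ": number" from the
// parameter is a compile error; with the annotation, call sites are checked too.
function double(x: number): number {
  return x * 2;
}
```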
I don’t understand why more linters (or linter-editor integrations, more precisely) aren’t source control-aware, only linting the diff from the trunk branch. Pronto is a linting framework for the Ruby ecosystem that lints “incrementally” and I find it useful for guiding the codebase in a healthy direction and documenting style in an executable format, while avoiding mass style-only patch sets that add noise to the git history. Unfortunately I haven’t found an easy way to integrate Pronto with vim, and am too lazy to run it on my local command line, so I usually just let it run against my pull requests, which is a slower feedback cycle. It seems to me the holy grail would be incremental feedback in editor with a matching CI step going off the same configuration. Is this complete package available in another ecosystem?
Code is read more than it’s written. Having standards that apply only to code written since time t seems like a bad idea - if you joined the codebase after t, you’d be very surprised when a function you called violated that standard.
For sure. Then do you fix file-by-file as you touch them, or lint the whole codebase in one fell swoop?
And how do you develop your standards in such a way as they may evolve but not requiring you to reformat the whole codebase every time you tweak the standards?
And do you have any tricks to use eg. Git blame but ignoring style patches by default?
The whole codebase.
Reformat the whole codebase every time. If that’s hard, develop the tools to make it easy.
I don’t generally make a lot of use of blame, but the right thing here would be to have it use something more granular than line diff, I think.
I’d rather encourage evolving our style at the expense of the codebase being consistently formatted. It’s not that reformatting is difficult (well, it is in Ruby, but that’s another problem), but a codebase-wide format requires stopping integration while the team switches to the new style.
a codebase-wide format requires stopping integration while the team switches to the new style.
It can do but it doesn’t have to. If you have a reliable reformat tool you can just have everyone run it on their branches at the same time (or at least, before merging). If the VCS causes problems with this, let’s fix them.
Matz is behind the times.
One thing to keep in mind is that I think Matz has a different cultural perspective than many people in the community. I do think he took the suggestion/idea of a CoC seriously, but damn that conversation got out of control really quickly – I don’t want to reignite that here by any means.
As a non-discriminated-against person (you may now preemptively discount my opinion…), I’ve always considered the Ruby community to be diverse and welcoming. Take a look at pictures from ANY Ruby conference and all sorts of people are represented by appearances alone: young, old, male, female, trans, punks, suits, Asians, Central Americans, Martians (they are among us!). I’ve seen all these people (mostly) getting along together at the 5 different Ruby conferences I’ve been to.
My hope is that people can see this as progress rather than a slap in the face, since this is such a sensitive topic. I really don’t think there existed a winning move to make everybody happy, and the whole debacle put Matz in a really tough spot. He’s a pretty good steward of the ruby community, and I hate for something like this to galvanize people because of what was decided upon. I don’t think this should paint Matz in a negative light.
Any large community will have some bad actors, but I genuinely do not believe that they are stemming from the actions of the Ruby core team. At least this CoC makes it clear that they intend for all people to be treated with respect, and harassers are not welcome.
It’s really not this complicated. The people speaking up for a CoC with teeth were the people really involved in the community and contributors to the ecosystem. Those against a real CoC were Rubyists less engaged with the community or plain trolls. It would have been just fine for Matz to alienate the latter group.
I am being downvoted “incorrect”; this is a really straightforward (though rather tedious) point to substantiate so here goes:
In favor of either a strong CoC or strongly in favor of any CoC (in order of appearance in the bug thread):
In the end these important people were either against any CoC or explicitly against a strong CoC:
Sorry if I missed someone important in the community. Note my lists contain only the people who I could verify are engaged in the Ruby community on GitHub.
When you ignore the anonymous, pseudonymous, newly registered, troll, and “nobody” accounts, it’s clear that the people most involved in Ruby are in favor of a real CoC. I am disappointed that Matz and commenters here don’t see that.
Ignoring that most of the names you posted are Ruby community members as opposed to Ruby core contributors, and ignoring that the two buckets you divided people into are not mutually exclusive (e.g., one can be for a CoC, but against the Contributor Covenant)…
The real debate – buried under hundreds of trash comments, to be sure – was about CoC “strength”. Namely:
The CoC that Coraline Ehmke initially proposed, her Contributor Covenant, is “stronger” in these ways than Matz felt was appropriate. Matz chose to use a CoC based on the Postgres CoC, which does not burden project leadership with the responsibility of responding to harassment outside of the Ruby project’s collaboration infrastructure, and which does not pin project leadership to a predetermined procedure for dealing with community members in violation of the CoC.
These features don’t make the CoC less “real”, but they do give the project leadership a great deal more freedom in choosing how to handle bad apples. Don’t paint it as Matz being against a CoC; that’s disingenuous and -1, Incorrect.
I was with you up until the last sentence; gkop didn’t say that. No blame - it’s easy to react to what we hear rather than what was said, especially on emotional, controversial topics.
The scope and prescriptive strength of a CoC are important topics and often get buried in the noise, which I can well imagine happened here; it’s useful to have this explanation that draws the signal out. This clarifies Matz’s position substantially, and thank you for it.
I appreciate your explaining the finer points, but there’s really no need to call me disingenuous.
My words were “In the end these important people were either against any CoC or explicitly against a strong CoC”. The choice Matz made was against a strong CoC.
It is rather convenient that you picked your sets to lump him in with the “against any CoC” folks.
Do the other four people really fit “against any CoC”? Soulcutter labeled Evans and Borenszweig as “MVP” and Kosaki and Naruse as “Other”.
I really like Matz and don’t wish to denigrate him. However I think he was wrong here and his decision reflects his being out of touch with the community. He does belong on the list reflecting the minority viewpoint among bona fide members of the Ruby community.
Please stop using “the community” instead of “some parts of the community”.
Clearly some folks (myself included) in the Ruby community are quite in agreement with Matz.
My premise is that among the community that matters Matz substantiated the minority opinion. That’s why I went through the trouble of making the lists that I made.
I would think the “any” crowd would be happy with this CoC, if only as a start. Nothing says it’s written in stone. I don’t necessarily believe that pro-Contributor Covenant people will be upset by this CoC. So I went through your list to try to classify which people were pro-CC vs ‘any’ and here’s the breakdown I got reading the comments of the people in your list:
(4) MVP (not strong)
(7) Any CoC
(8) Contributor Covenant
CoC with Enforcement (not Contributor Covenant)
edit: classifying people into broad categories is hard, many of the people listed expressed nuanced opinions that are not shown here. If I misrepresented anybody, please correct me!
Thanks for this. The MVP category is not really helpful though, particularly since Matz' initial proposal was stronger than what he wound up instituting.
I thought “MVP” sounded better than “weak” or “not strong”. Goes to show you that people can categorize things differently based on their perspectives :) You said:
it’s clear that the people most involved in Ruby are in favor of a real CoC. I am disappointed that Matz and commenters here don’t see that.
I think my list illustrates that it’s not clear, actually. I genuinely think you can see it both ways here, and I appreciate your detailed response.
I do agree about Matz' initial proposal - but the one settled upon is more in the vein of MVP which is why I put him in that category.
Agreed. I appreciate your and @gamache’s digging into the substantial differences of opinion.
Several of those people called for Matz to be deposed, merely because he wanted to come up with a reasonable solution that wasn’t theirs. One might cynically argue that it is easier to get status by deposing the dictator than it is to come up with a language and maintain it for two decades.
There was plenty of bad-faith to go around.
I don’t disagree with you on these points.
Is the part where the CoC encourages respecting others' viewpoints that seems outdated to you, or the part where personal attacks are discouraged?
I honestly am puzzled (yet sadly not surprised) that somebody could take offense to a simple, brief, and human code of conduct.
The overarching point really resonates with me. Near the beginning of our hiring process I explain to the candidate all the steps in the process and why each step exists. I acknowledge the process isn’t perfect and express gratitude for the candidate bearing with our imperfect process. I also ask the candidate if she has any questions or concerns before we move on to the next step. The vast majority of feedback on the process I receive from candidates is positive, and good candidates do persevere to the end.
Title is misleading. By “You Can’t Fix It” the author means “You Can’t Get an Interview Process with 100% true positive rate and 100% true negative rate”. OTOH, they conclude that you can improve your process.
Generally when we talk about fixing compound things rather than individual issues, we mean improving them. For example, fixing up one’s home does not mean making it perfect, but making it better. “I got my car fixed” does not mean my car is now perfect, it means that there existed some issues and I had the relevant ones resolved. On the flipside, fixing an issue in the bugtracker means that the issue is, y'know, gone (except when you use everyone’s favorite fix: WONTFIX). In some sense that is “perfect”, but within a very limited, tiny scope.
I would feel bad about being so pedantic about this, but the title comes off as almost clickbait, in the sense that “and You Can’t Fix It” is just begging for attention, only to switch the bait with an unconventional-but-still-defensible definition of “Fix”.
EDIT: Not trashing the content of the piece though; it is actually a pretty solid list of issues with common interview strategies.
Good point. I prefer the title “All processes are broken,” despite it giving inadequate context.
Generally when we talk about fixing compound things rather than individual issues, we mean improving them.
I’m not sure. A lot of rhetoric about “X is broken and the way to fix it is Y” treats fixed versus broken as binaries. Lots of startups, in fact, bill themselves as the way to fix hiring. Saying that there are problems here but no magic way to fix them is taking a somewhat different view, I’d say.
Cloudflare Flexible SSL really rubs me the wrong way; it basically renders the padlock useless since the data has been on the internet in plaintext.
The next level up from a regular padlock is the EV certificate, which is much more expensive than a regular certificate. So unless I pay a lot of money for an EV certificate, my customers typically can’t distinguish my actually-secure site from a competitor’s fake-secure site.
Should Cloudflare be ashamed of themselves for enabling this?
Is there anything a browser could do to indicate the difference, short of graylisting all Cloudflare sites?
Yes, it’s wildly irresponsible of them, and their certificates probably should be graylisted for it. But remember that other sites can make the same mistake. Maybe you connect to a site over HTTPS, but then it sends your data in plaintext over the internet to Redis. All HTTPS can validate is that you’re talking to who you think you’re talking to, it can’t stop them passing on your data to the internet at large.
Rails has supported foreign keys out of the box since I believe 4.0. The reason it took Rails so long and that it’s not the default is not due to performance, but because one of Rails' key goals from early on was to be “database-agnostic”.
I’m a Linode customer. I am definitely sticking with Linode after this incident; they’re going to be extra careful from now on - more careful than vendors without recent security incidents.
That’s the fifth time this has happened to them. How careful do you expect them to get? There is also a claim that the hack happened in July and was only now disclosed. The DDoS is the least of the problems here.
Other comments explain why the Ruby version directive is awesome. Here are some reasons why the other features are:
Packaging :git and :path Dependencies – Makes it a snap to vendor all your gems for when your target machines have no internet.
Local git repos – Lets you develop a project and the gems it depends on in parallel without having to change the Gemfile for the project that depends on the gems or push the gems unnecessarily.
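A sketch of how these look in a Gemfile (gem names, URL, and paths here are invented for illustration):

```ruby
# Gemfile
source "https://rubygems.org"

# :path dependency: lives in the app tree, vendored automatically
gem "my_app_helpers", path: "vendor/my_app_helpers"

# :git dependency: declaring a branch enables the local-override trick below
gem "acme_client", git: "https://github.com/example/acme_client.git", branch: "master"
```

With that in place, `bundle config local.acme_client ~/src/acme_client` points Bundler at a local clone, so you can hack on the gem and the app in parallel; and `bundle package --all` vendors the :git and :path gems along with everything else for machines without internet access.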
Thanks team bundler!