For school, we have to come up with entrepreneurship projects related to blockchain. Every time we find a nice idea, either it has already been done, or blockchain technology is irrelevant to it (it could be done without). Our group hasn’t made progress in two months.
A hammer looking for a nail: a lesson I’ve heard before is to look for a problem with real value that hasn’t been solved, and this doesn’t seem to be taking that approach.
success in most things is about managing the expectations of others. say no more often. ask the manager to prioritize.
Unfortunately I am not a great expert, still gathering wisdom myself. Though I hope to hear what others can suggest.
I wrote an article on finding ideas. Essentially, it is important to find problems and treat them as opportunities, rather than finding solutions first and the problems they solve later.
Maybe start from problems: look for markets for lemons, adverse selection, agency costs. As a rough rule of thumb, any market in which someone can earn a commission. And focus really tightly - rather than land titles on the blockchain, attack mineral or oil rights. Look up what people are suing each other over and you know what corner cases to handle.
Things that might be useful:
What’s wrong with doing something that’s already been done? Unless you’re doing research, there’s usually room for more than one interpretation on how to solve a problem.
That’s actually a good point. YC often says don’t worry if someone has thought of your idea already. Just beat them in execution. Tech history is littered with better ideas or bad implementations of similar ones that lost to better executed and/or marketed ideas.
Although I warn it might be unpopular, you might want to try something similar in concept but not quite blockchain: the benefits of a blockchain without necessarily being one. Here are a few ideas I’ve heard or was pushing that may or may not have been implemented by a startup by now:
Transactions are done with traditional databases that use a distributed ledger to tally up final results. This is similar to what banks already do where most transactions are hidden in their databases with some big chunks of money moved between banks. It works.
Instead of just a coin in the ether, Clive Robinson on Schneier’s blog suggested creating a financial instrument tied to a number of commodities or other currencies in such a way that it remains really stable; that is, not a speculator’s tool like Bitcoin. I found one company that did this with several currencies plus carbon credits; I just can’t remember the name.
Instead of miners, you might again use a low-cost technology for transactions, but people need an account with the service to participate, costing about a dollar or so a month (or yearly). As with texts, they buy blocks of transactions. The providers are non-profits with chartered protections, with the provisioning or exchange being where the new tech comes in to provide accountability.
I’d do a combination of these if I entered the market. I’m not planning to right now. So, sharing the ideas with others in case someone wants to have a try at it while money is raining from the sky on those that utter the words “blockchain” or “decentralized.” ;)
magit + org-mode + tramp are my trio of reasons to use Emacs. Super happy I converted from vi/vim; there can be a learning curve at times, but it’s well worth it.
I personally find markdown-mode a great and underrated Markdown editor too. It’s not quite a drop-in org-mode replacement, but if one has to write Markdown, even nonstandard Markdown with TeX-math extensions, it’s the best choice I know of that’s at least reasonably lightweight. And AUCTeX is also always worth mentioning if one uses LaTeX.
Yeah, helm is one of those amazing things too. I wish more things had helm-like interactions. With spacemacs and evil mode it makes moving from vim much easier.
I use ivy mode personally; helm was a bit too slow for my taste. Ivy is super nice and way lighter weight than helm.
Same here. I actually switched to the betas when 58 became the nightly. The only issue for me was Hangouts, but my company recently switched away from Hangouts so it’s not a problem anymore.
My issue is that WebExtensions are not as powerful as the older ones. Now it’s all “chromey” in its limitations.
I’m curious why it’s a performance win. I would think spinning up an isolated JS virtual machine for each extension would be significantly more expensive and slower than the old compiled extensions.
Old extensions weren’t compiled, and the new ones don’t get their own JS VM. The performance win here likely comes from cutting off old, crufty, synchronous APIs (mostly internal, but hard to remove while lots of popular add-ons used them). This is easier once you declare them legacy.
It was previously the case that a poorly written add-on could slow down all facets of Firefox in general. Now that the only way to hook into Firefox’s internals is via well-defined and optimized APIs, this should happen much less often.
It also allows the Firefox devs to iterate quickly without fear of breaking extensions, since there is a defined interface for extensions that is all they need to preserve.
I have two questions about that:
One, I want the same theme capability as I’ve always had. I want Firefox to look like it does for me now, not like the stock Firefox. Is that possible?
Two, I want ad blocking and script blocking and all the other privacy-enhancing add-ons to work as well, not like they do in Chrome where the bad stuff is fundamentally still loaded, it’s just hidden at some point in the rendering cycle. Is that possible?
You can still manually edit userChrome.css. Complete Themes are not supported in >= 57.
Blocked stuff is not “fundamentally still loaded”, not even in Chrome, I think. E.g. Privacy Badger returns {cancel: true} in an onBeforeRequest interception handler. IIRC the “just hidden” behavior is from the very early days of Chrome extensions.
For add-ons, the answer is yes. See the Privacy add-on collection or other featured extensions.
Your look and feel question is hard to answer, without knowing what Firefox looks like to you now. :) If you insist that tabs should be round, it’s not going to be easy, but possible.
I insist that tabs go below the address bar, like they did in the original Firefox and like they do now with the right add-on: https://addons.mozilla.org/en-US/firefox/addon/classicthemerestorer/
Interesting, does Swift have safety guarantees, or is it “much more likely to be safe” like Go or C++?
Swift is basically an ML variant, but has some backwards-compatibility stuff and some auto-unwrapping of optionals syntax that may decrease safety vs, say, SML. YMMV
I should have said memory safety: http://www.pl-enthusiast.net/2014/07/21/memory-safety/
Parsers in C are notorious for having memory safety issues. It’s basically guaranteed that any sufficiently complicated parser in C will have memory safety problems.
Here’s one I found in Brian Kernighan’s awk:
https://github.com/andychu/bwk/blob/master/test-results/asan.log
Java and Python are safe. C++ is not but it helps you more than C. Go helps you too, but I’m pretty sure there are some memory safety issues. So I was wondering where Swift stands.
EDIT: some info about Go and memory safety: https://insanitybit.github.io/2016/12/28/golang-and-rustlang-memory-safety
Given this definition, Java is also memory-unsafe: you can crash the JVM with a data race. Since this is a crash and not an unhandled null pointer exception, I would assume that given enough time it’s possible to exploit it in more interesting ways.
It’s not fully safe, because they still want to allow you to do some fringe stuff, but the main path and idiomatic code are memory-safe by default, using constructs like if let the_variable = some_optional { /* use the unwrapped the_variable here */ }
You can opt out of this by force-unwrapping with !, like let somevariable = some_optional_returning_function()!, but that will crash if some_optional_returning_function returns nil.
I don’t believe so. fromJust looks like it throws an error if the Maybe item is Nothing (I only played around with Haskell years ago and I’m no expert on it, though). The if let ... { pattern is used all the time as a guard on values. You can also chain the if let bindings together to get one block with all the values you need guaranteed to be non-nil.
like
if let x = y,
   let z = x.something(),
   let w = someRandomOptional(),
   let stringrep = w as? String {
    // only entered if all the values above are non-nil.
    // guard statements are useful too; they go at the top of a function
    // to early-exit when the function can't deal with nil values.
    print(x)
    print(z)
    print(w)
    //...
}
This is a conditional binding for the duration of the block’s scope. The nullability of objects in Swift is very important. You can also call methods conditionally.
let foo = bar() // bar returns an optional object
foo?.setVal("xyz") // will not crash if foo is nil.
// roughly syntactic sugar for:
if let foo = foo {
    foo.setVal("xyz")
}
Sorry, I should have quoted the part I was referring to:
let somevariable = some_optional_returning_function()!
The force-unwrapping operator ! will crash if the value is nil, similar to how fromJust errors on Nothing.
If it crashes on null, then that’s considered memory safe behavior. C is unsafe because dereferencing null is undefined. The program can use the value at address zero, or anything else.
I googled and found this:
https://developer.apple.com/swift/blog/?id=28
A primary focus when designing Swift was improving the memory safety of the programming model. There are a lot of aspects of memory safety
So my takeaway is that it’s like Go or C++: more likely to be safe, but not guaranteed like Java or Python.
Yes, it’s technically possible to use pointers and Swift is in fact fully interoperable with C, but it is not the path of least resistance. A pointer and its related operations are encapsulated in a struct of the type UnsafeMutablePointer, where:
You are responsible for handling the life cycle of any memory you work with through unsafe pointers to avoid leaks or undefined behavior.
To address your first comment, I didn’t use any of those unsafe pointers in the implementation I’m writing in Swift while the original C++ parser is a jumble of moving pointers, so yes I expect the Swift version to be safer.
By default the Swift compiler will not let you do bad things unless you explicitly ask to.
You can declare things as implicitly unwrapped optionals. For example, a link to an object in a window would typically be implicitly unwrapped, which means you are guaranteeing that the value will never be nil and that you are smarter than the compiler (if the value of a linked storyboard component were ever nil, it would be a problem anyway). They are also used when you are sure a value will not be nil before you use it, but you don’t want to set it at initialization.
They designed Swift so you can do anything that C can do, including bit tweaks and pointer wrangling, but it’s a much, much safer paradigm where you have to go off the rails and make explicit choices to subvert the safety of your application. It does make parsing JSON data more annoying, but safer.
It depends on what you mean. Their philosophy is to deliberate on each potential safety issue and make a decision that achieves a good balance between performance, convenience and safety.
E.g., it forces you to handle all potential nils explicitly in your code but it doesn’t do anything about array access at compile time. But if you go out of bounds your program will crash (I think it does bounds checking at runtime and deliberately crashes it to avoid undefined behaviour).
Sorry I should have said memory safety (see sibling comment). As long as it crashes on null pointers and OOB, that is memory safe. Whereas a C program can just keep going and do whatever.
I think the takeaway here is a) don’t confuse all kinds of errors from an HTTP request with invalid tokens (I’m not familiar with the GitHub API, but I suppose it returns 401 Unauthorized correctly), and b) don’t delete important data, but flag it somehow.
It returns a 404 which is a bit annoying since if you fat finger your URL you’ll get the same response as if a token doesn’t exist.
https://developer.github.com/v3/oauth_authorizations/#check-an-authorization
Invalid tokens will return 404 NOT FOUND
I’ve since moved to a pattern of wrapping all external requests in objects whose state we can explicitly check, instead of relying on native exceptions coming from the underlying HTTP libraries. It makes things like checking the explicit status code in the face of a non-200 response easier.
I might write on that pattern in the future. Here’s the initial issue with some more links https://github.com/codetriage/codetriage/issues/578
Why not try to get issues, and if it fails with a 401, you know the token is bad? You can double check with the auth_is_valid method you’re using now…
That’s a valid strategy.
Edit: I like it, I think this is the most technically correct way to move forwards.
Then there’s your problem. Your request class throws RequestError on every non-2xx response, and auth_is_valid? thinks any RequestError means the token is invalid. In reality you should only take 4xx responses to mean the token is invalid – not 5xx responses, network layer errors, etc.
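A minimal sketch of that distinction (names are hypothetical, not the actual CodeTriage code): only a 4xx response proves the token is bad, while 5xx and transport errors propagate so an API outage can’t mass-invalidate tokens.

```python
class RequestError(Exception):
    """Raised by our hypothetical HTTP wrapper on any non-2xx response."""
    def __init__(self, status=None):
        super().__init__(f"request failed (status={status})")
        self.status = status

def auth_is_valid(check_token):
    """check_token() performs the API call and raises RequestError on failure.

    Returns False only for 4xx (the token was genuinely rejected);
    re-raises 5xx and transport errors so callers never conclude
    "invalid token" during an outage.
    """
    try:
        check_token()
        return True
    except RequestError as e:
        if e.status is not None and 400 <= e.status < 500:
            return False  # token genuinely rejected by the server
        raise             # outage / network problem: decide nothing
```

With this shape, the delete-the-user code path simply cannot be reached while the upstream API is down, because the error keeps propagating.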
I think the takeaway is that programmers are stupid.
Programs shouldn’t delete/update anything, only insert. Views/triggers can update reconciled views so that if there’s a problem in the program (2) you can simply fix it and re-run the procedure.
If you do it this way, you can also get an audit trail for free.
If you do it this way, you can also scale horizontally for free if you can survive a certain amount of split/brain.
If you do it this way, you can also scale vertically cheaply, because inserts can be sharded/distributed.
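The insert-only scheme can be sketched in a few lines of SQLite (table, column, and view names are made up for illustration): every change is an INSERT, a view reconciles the latest state, and the full history remains as the audit trail.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customer_events (
    id       INTEGER PRIMARY KEY,   -- monotonically increasing event id
    customer TEXT NOT NULL,
    email    TEXT,
    deleted  INTEGER NOT NULL DEFAULT 0
);
-- Reconciled view: the latest event per customer wins; rows are never
-- UPDATEd or DELETEd, so a buggy program can always be fixed and re-run.
CREATE VIEW customers AS
SELECT customer, email
FROM customer_events e
WHERE id = (SELECT MAX(id) FROM customer_events WHERE customer = e.customer)
  AND deleted = 0;
""")
conn.execute("INSERT INTO customer_events (customer, email) VALUES ('alice', 'a@old.example')")
conn.execute("INSERT INTO customer_events (customer, email) VALUES ('alice', 'a@new.example')")  # an 'update'
conn.execute("INSERT INTO customer_events (customer, deleted) VALUES ('bob', 1)")                # a 'delete'

print(conn.execute("SELECT customer, email FROM customers").fetchall())
# [('alice', 'a@new.example')]  -- bob is flagged deleted, alice shows her latest email
```

All three events are still in customer_events, so the audit trail comes for free.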
If you don’t do it this way – this way which is obviously less work, faster and simpler and better engineered in every way, then you should know it’s because you don’t know how to solve this basic CRUD problem.
Of course, the stupid programmer responds with some kind of made up justification, like saving disk space in an era where disk is basically free, or enterprise, or maybe this is something to do with unit tests or some other garbage. I’ve even heard a stupid programmer defend this crap because the unit tests need to be idempotent and all I can think is this fucking nerd ate a dictionary and is taking it out on me.
I mean, look: I get it, everyone is stupid about something, but to believe that this is a specific, critical problem like having to do with 503 errors instead of a systemic chronic problem that boils down to a failure to actually think really makes it hard to discuss the kinds of solutions that might actually help.
With a 503 error, the solution is “try harder” or “create extra update columns” or whatever. But we can’t try harder all the time, so there’ll always be mistakes. Is this inevitable? Can business truly not figure out when software is going to be done?
On the other hand, if we’re just too fucking stupid to program, maybe we can work on trying to protect ourselves from ourselves. Write-only-data is a massive part of my mantra, and I’m not so arrogant to pretend it’s always been that way, but I know the only reason I do it is because I deleted a shit-tonne of customer data on accident and had the insight that I’m a fucking idiot.
I agree with the general sentiment. It took me about 3 read-throughs to parse through all the “fucks” and “stupids”. I think there’s perhaps a more positive and less hyperbolic way to frame this.
Append-only data is a good option, and basically what I ended up doing in this case. It pays to know what data is critical and what isn’t. I referenced acts_as_paranoid, and it pretty much does what you’re talking about: it makes a table append-only, and when you modify a record it saves an older copy of that record. Tables can get HUGE, like really huge, as in the largest tables I’ve ever heard of.
/u/kyrias pointed out that large tables have a number of downsides, such as making maintenance and backups harder.
You can do periodic data warehousing, though, to keep the tables as small as you’d like, but that introduces the possibility of programmer error when doing the warehousing. It’s still an easier problem to solve than making sure every destructive write is correct in every scenario.
Tables can get HUGE, like really huge, as in the largest tables i’ve ever heard of
I have tables with trillions of rows in them, and while I don’t use MySQL most of the time, even MySQL can cope with that.
Some people try to do indexes, or they read a blog that told them to 1NF everything, and this gets them nowhere fast, so they’ll think it’s impossible to have multi-trillion-row tables, but if we instead invert our thinking and assume we have the wrong architecture, maybe we can find a better one.
/u/kyrias pointed out that large tables have a number of downsides, such as making maintenance and backups harder.
And as I responded: /u/kyrias probably has the wrong architecture.
Of course, the stupid programmer responds with some kind of made up justification, like saving disk space in an era where disk is basically free
It’s not just about storage costs though. For instance at $WORK we have backups for all our databases, but if we for some reason would need to restore the biggest one from a backup it would take days where all our user-facing systems would be down, which would be catastrophic for the company.
You must have the wrong architecture:
I fill about 3.5 TB of data every day, and it absolutely would not take days to recover my backups (I have to test this periodically due to audit).
Without knowing what you’re doing I can’t say, but something I might do differently: Insert-only data means it’s trivial to replicate my data into multiple (even geographically disparate) hot-hot systems.
If you do insert-only data from multiple split brains, it’s usually possible to get hot/cold easily, with the risk of losing (perhaps only temporarily) a few minutes of data in the event of catastrophe.
Unfortunately, if you hold any EU user data, you will have to perform an actual delete when an EU user asks you to, if you want to be compliant. I like the idea of the persistence layer being an event log from which you construct views as necessary. I’ve heard it’s possible to use this for almost everything by storing an association of random ID to person, and then just deleting that association when asked in order to be compliant, but I haven’t actually looked into that carefully myself.
That’s not true. The ICO recognises there are technological reasons why “actual deletion” might not be performed (see page 4). Having a flag that blinds the business from using the data is sufficient.
Very cool. Thank you for sharing that. I was under the misconception that having someone in the company being capable of obtaining the data was sufficient to be a violation. It looks like the condition to be compliant is weaker than that.
No problem. A big part of my day is GDPR-related at the moment, so I’m unexpectedly versed with this stuff.
There’s actually a database out there that enforces the never-delete approach (together with some other very nice paradigms/features). Sadly it isn’t open source:
The spreadsheet’s functionality is powerful but can get ugly fast. I developed a whole set of diet-tracking spreadsheets at one point. I had one table of foods and their macronutrient breakdowns; then each daily diet would reference the foods and the amounts I ate, pull the nutrient information out, and calculate the serving information. It was a bunch of trial and error to get it there, and not all that pretty by the end, but it served me well.
I think spreadsheets in general get ugly quickly… even excel. it’s a powerful ugly tool that works well enough.
I am a heavy org-mode user too, one feature I use a lot is capture templates, actually. I have one for taking meeting notes, another for recording project decisions, and another for any arbitrary notes I think are worth remembering.
The one thing I really dislike about org-mode is that I could never find a decent CLI app for it. Sometimes I just wanna query from the command line without firing up emacs. If someone has a decent utility to talk to org-mode files without org-mode, please please send it my way.
This might seem a bit obvious, but the best support for org-mode is always going to be within Emacs. So why not write whatever it is that you’re after as an eshell/Emacs script that you run from the command line?
I’ve aliased vi to emacsclient -t. It launches an Emacs server on the first call, and subsequent calls open almost instantaneously. Perhaps that would be good enough for you.
Some suggestions to make this easier:
- grep -E '^\*+ TODO' has always been good for me as my “todo” alias.
- org-batch-agenda-csv can be used along with pulling your org-agenda-files and agenda config into a separate file, and you can use that with other CLI tools fairly easily.
- They’re only text files; I think state changes are the hard part without Emacs.
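Since org files are only text, a few lines of Python can stand in for a simple query tool (a sketch; the regex only matches the TODO keyword, not org’s full syntax):

```python
import re

# Match headings like "** TODO write report": stars give the depth,
# the rest of the line is the title.
TODO_RE = re.compile(r"^(\*+)\s+TODO\s+(.*)")

def todos(text):
    """Yield (depth, title) for every TODO heading in an org document."""
    for line in text.splitlines():
        m = TODO_RE.match(line)
        if m:
            yield len(m.group(1)), m.group(2)

sample = """* TODO write report
** DONE outline
** TODO draft introduction
* notes
"""
print(list(todos(sample)))
# [(1, 'write report'), (2, 'draft introduction')]
```

As the parent comment notes, reading is easy; rewriting state (TODO → DONE) safely is the part where you really want Emacs.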
I think the beginning of your last sentence is really the key for me. “They’re only text files”. I can script the hell out of those, as it turns out. Onwards!
Seriously though, one of my main cases is to have a way to accumulate notes easily. I always have an emacs instance running too. I think my problem might abstract itself away, in the end.
What kinds of queries? You can always run Emacs with a one-off command against a file, and alias that in your shell.
see:
https://www.emacswiki.org/emacs/BatchMode
https://emacs.stackexchange.com/questions/18111/query-the-org-agenda-from-the-commandline#18119
You could also just do a simple ag or grep search, since org files are all just text, depending on what you want to do.
I love org-mode and consistently feel like I’ve barely scratched the surface. You can start with a really simple workflow and build it out as you see fit. Occasionally I end up in a yak shave, like trying to sync with google calendar or jira or some other part of the outside world, but yeah emacs + org-mode + (some file syncing service) has served me well.
I also use org-mode with deft.
My ~/.deft is a symlink to a folder in my iCloud drive, so my notes are always synced and saved.
Furthermore, I also append .gpg to the file names so my notes are encrypted with my GPG key.
Org-mode is also what I use. I’ve tried many different tools, but come back to org-mode each time - it’s worth learning emacs basics just for org-mode. For example, I love how I can get a myriad of different views and reports on how I spend my time.
Here is an example clock report from a few weeks ago. (The far-right column, unlabeled, is actually a custom calculation, my estimated hourly-rate multiplied by how much time I spent on the task. I use this to estimate how “valuable” a task is. It’s not a perfect metric, but I find it’s better than just time-spent on a task.)
I try to put everything in org-mode, and the more I do so, the more organized I get and the more I get done. Like (I think) Peter Drucker said “what gets measured, gets managed.” Org-mode is my manager-self, so my engineer-self can actually get work done without worrying about which work is important.
Silly question: what does this give you? All the actions in the GIFs are already fully realized by the time something happens. By the time it shows you something visual, the action is fully completed.
I haven’t used this in particular, but the feature in general is super handy for yanking for instance. You’ll immediately know if you did 3yy or if you accidentally did 34398yy. Even for deleting, it gives you a quick visual hint that gives you an intuitive feel for how much you deleted.
I am not surprised, since Firefox’s current mission seems to be copying Chrome, throwing away all the advantages they had (like extensibility) to become a worse Chrome. Might as well switch to Chrome, then.
Throwing away XUL is necessary; it was a technical dead end preventing them from doing necessary refactors for e10s and such.
I agree it hurts a lot. They need to work a lot on WebExtensions to make it viable.
XUL was an incredible piece of technology. Ten years ago I developed a cross-platform application with native look-and-feel and embedded data-visualizations in a couple of weeks. I don’t think there was anything else that would have allowed me to do that back then… and even now, that would be a challenge. I wish XUL had been blessed by W3C standardization.
Maybe it was incredible technology, but I always wished for a firefox build using native widgets. Back in the day, just wiggling my mouse back and forth over the title bar (not over the page) in firefox used to nearly max out my cpu.
There was Camino for Mac and Galeon on Linux, but those are dead. There’s K-Meleon on Windows, but I’m unsure of its development state.
Is electrolysis worth losing a ton of extensions and developers over?
Is electrolysis even a good thing? If I open Facebook, Twitter, and YouTube at the same time, I can expect Firefox to grind to a halt. With electrolysis, will my whole PC grind to a halt? I don’t buy the security argument either: Firefox is a reverse shell with or without electrolysis.
E10S is required for shipping a sandbox for Firefox. I’m a bit biased, since I work on Firefox sandboxing, but I believe this is probably the single most important security project we have.
I’m not sure what you mean by “firefox is a reverse shell with or without electrolysis” - enabling the sandbox makes it so that any random memory corruption in the content process isn’t game over for security, which is a huge win.
You don’t buy the “security argument” of process isolation and sandboxing? It’s fairly easy not to see the benefits of something if you deny the reality of the benefits it does provide.
That’s my concern, though. I don’t think they recognize how much people depend on their favorite extensions to make using Firefox a pleasant experience. The impression I get is that they’re basically going to draw a line in the sand and switch whether or not the extension ecosystem comes with them.
I agree that it needs to happen, but were I them I’d be looking at the most popular extensions and ensuring that a transition plan exists. Their market share depends on it.
I agree that it needs to happen, but were I them I’d be looking at the most popular extensions and ensuring that a transition plan exists. Their market share depends on it.
They are doing exactly that… they have many bugs filed along the lines of “enable this to be written as a WebExtension”. The core WebExtensions team is extremely smart and capable.
That’s really fantastic to hear. I should get tuned into that effort to see if my favorite extensions are being represented :)
So, I don’t doubt it, but why exactly? I’ve heard security cited, is that something inherent to XUL itself or merely an artifact of that particular subsystem being left to wither on the vine?
Agreed. Throwing away XUL extensions is going to totally cripple them. I don’t know what I’ll do at that point. IMO they’re what makes Firefox a usable alternative, and there are a bunch of things you simply can’t do with the proposed JavaScript extension standard (can’t think of the name).
It’s All Text comes to mind.
Palemoon
Maybe? No Mac support right now, which is a deal breaker for me. As I’ve posted about here before, Linux desktops have yet to come close to the accessibility features OSX provides. I’m partially blind, and ‘living’ on the Linux desktop was sheer agony.
Might as well switch to Chrome then.
Look at the linked Mozilla blog post — they already did!
The head of Firefox marketing admits to using Chrome every day, for leisure.
Yeah, I guess it has to get a lot worse before it gets better. As long as they are (so) dependent on income from advertisers (Yahoo?), they will not put the user first regarding privacy and security. For example why are the Tor Browser’s Firefox settings not the default in Firefox? Why is Privacy Badger or something similar not a default extension? Why are third party cookies still enabled by default?
I guess that over time more and more stuff will break with Firefox anyway, as also mentioned in the article, so making it a bit worse now by enabling the Tor settings and Privacy Badger wouldn’t make much of a difference. At least you’d know Mozilla has your back, and Firefox may even gain some users by being privacy-friendly by default. That is, for as long as it is relevant and “the web” is used by the average user.
Mozilla is actively working with the Tor project to upstream their Tor Browser patches and improve privacy defaults.
Ah, I missed that it was also about changing the defaults! I thought it was just to get the (code) changes upstream, but not (necessarily) the defaults. If so, that is great news!
TLDR for posterity: type Esc like 4 times just to be sure, then a colon, then q! (to quit, discarding unsaved changes) or the letters wq (to write/save the data and quit), and then hit Enter.
Vim mode (anti-)golf!
How about something Deep Vim like: digraph inside the expression register inside Ex inside Insert (to get here: i<Ctrl-O>gQ<Ctrl-R>=<Ctrl-K>)? From here hitting escape n times doesn’t work.
A sequence of commands that works no matter where you are in that stack of modes is something like:
- <Esc> exits digraph mode (or does nothing in Ex-Mode or the expression register)
- <Ctrl-U><Enter> exits the expression register (or executes an empty command in Ex-Mode)
- <Ctrl-U>vi<Enter> exits Ex-Mode
- <Esc>:qa! exits Insert mode and quits
This sequence also works for other modes I tried, like a half-entered command inside Insert-Visual (i<Ctrl-O>vg); here, pushing v in step 3 gets you back into Insert and then the rest will exit.
Nope. If you’re already in insert mode you can :q! until you’re blue in the face.
esc :q!
usually saves the day.
the letters wq (to write/save the data and quit)
I recently discovered that x does the same job as wq. Now I remember q for exit with no changes (or q! as @weaksauce points out), and x for exit-and-save.
I use that too, but I think for new people it’s better to learn wq first so they know w is write and q is quit, like building a vocabulary of vim commands.
I’d be super interested to hear other people’s accessibility setups!
I was one of those guys who thought that simply mapping Caps Lock -> Ctrl would be enough to save my hands from Emacs. After 10 or so years of it, my pinkies started to give out. I looked into other keyboard layouts and found that many of them (such as Dvorak) actually increase stress on the pinkies because they optimize solely for travel distance. I eventually went with carpalx’s QFMLWY layout, switched to the Kinesis Advantage, which has all of the modifiers under the thumbs, and completely redid all of my Emacs keybindings. I also own Kinesis foot pedals, but I don’t use them anymore.
It took me about a month to adjust. I used to type 150 WPM in my prime; I’d say I type about 110 WPM now. Still going strong after 6 years.
Fairly extensive. One of my favorite ways of procrastinating is messing with my editor, or brainstorming ideas for new editors/editing paradigms. So before the pinky crisis happened I already had some helper functions for overriding bindings in my init.el and had done some experiments with my bindings.
The Kinesis has arrow keys conveniently located under my index and middle fingers so that freed up C-n, C-p, C-f, and C-b. I used keyboard-translate to free up C-i, C-m, C-[, and C-] which are usually unavailable because of terminal constraints. I only use Emacs in GUI mode so that wasn’t a big concern for me. The core of the remapping process involved me writing down all of my favorite commands, printing out the Kinesis layout with the QFMLWY keys, and manually mapping things such that they were convenient to reach w/o my pinkies using mnemonic sense as a tiebreaker.
Some examples: Instead of C-/ for undo which is painful for me, I use C-u. TAB (completion, snippets) is another painful pinky key so I use C-a. C-h (help) is remapped to C-i.
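A rough sketch of what those remaps might look like in an init.el, reconstructed from the description above (the exact forms are my assumption, not the commenter's actual config; this relies on GUI Emacs, since C-i is indistinguishable from TAB in a terminal):

```elisp
;; Free C-i from TAB by translating it to a Hyper chord (GUI Emacs only)
(keyboard-translate ?\C-i ?\H-i)
;; C-h's old job (help) moves to the freed C-i
(global-set-key [?\H-i] 'help-command)
;; Undo off the pinky: C-u instead of C-/ (note this shadows universal-argument)
(global-set-key (kbd "C-u") #'undo)
```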
I also avoid bindings that involve releasing modifiers, like C-x o (switch window); I don't have that kind of precision when using the foot pedals. Anything like that gets remapped to be completely without modifiers or completely with them, so C-x C-o would be OK, but I went with C-b. I don't use the foot pedals anymore, but I've grown fond of the rule and of C-b in particular, so I haven't changed any of the original remaps. I'm no longer as vigilant about remapping new minor modes, though.
Have you tried spacemacs or been at all inspired by using the spacebar as a leader? The symmetrical nature of the spacebar as leader is really nice. Also, vim bindings are much, much more ergonomic than the default emacs bindings.
The next step could be advertising networks that aggregate data across stores, as in “customer f5d9ad in front of screen 3, seen earlier today looking at shop window of $erotic_store for 23 seconds, walking by $liquor_store, buying $foo at $bar, …”
Or is this already happening, too?
There’s some research into this. I remember one paper about using free open store wifi, which many devices connect to automatically, to track where people walk when they enter a store.
Here’s an article that’s similar but not quite what I’m talking about: https://www.theguardian.com/technology/datablog/2014/jan/10/how-tracking-customers-in-store-will-soon-be-the-norm
Switching to Evil mode from vim. I’m new to emacs, so if anyone has any cool, obscure tips, please let me know!
Excited to get started with org-mode, email, and doc-view. Still viewing LaTeX’d pdfs in an external viewer.
Also, I’ve got two weeks of undergrad left, so I’m focusing on finishing my last couple assignments as well :P
If you need some help, I am in the spacemacs chat on gitter.im and they are generally friendly (can’t say the same for the emacs group on freenode when you mention anything about spacemacs.)
spacemacs is great stuff from a vimmer’s perspective and magit is phenomenal (seriously).
map jk to the evil-escape sequence and make it unordered, so it’s super easy and fast to escape from insert mode: just mash j and k at the same time while in insert mode and it escapes. Here’s the code to put inside user-config in your dotfile:
(setq evil-escape-key-sequence "jk")
;; Accept "jk" or "kj", so you can mash both keys in either order
(setq evil-escape-unordered-key-sequence t)
;; (evil-visual-state is a state, not a major mode, so it belongs in the
;; excluded-states pushes below rather than in this list)
(setq evil-escape-excluded-major-modes '(dired-mode neotree-mode help-mode ibuffer-mode))
(push 'visual evil-escape-excluded-states)
(push 'normal evil-escape-excluded-states)
The other one is making Q repeat the last macro you used. It’s so great that it really deserves to be in there too.
(evil-define-key '(normal) global-map (kbd "Q") (kbd "@@"))
I also find that scrolling inside the file can be better than ctrl-f or ctrl-b. I like ctrl-shift-j/k to do this but only half a page down at a time:
;; scroll a half page at a time
(evil-define-key '(normal) global-map (kbd "C-S-J") (lambda () (interactive) (evil-scroll-down 0)))
(evil-define-key '(normal) global-map (kbd "C-S-K") (lambda () (interactive) (evil-scroll-up 0)))
There are others but those are my favorites.
What I didn’t like about Vim was that a good number of plugins always felt like a hack… Another popular plugin is Syntastic. I absolutely hate the way it displays errors; the transition from “no error” to “error” is not smooth. Not much else to say: it just looks awful, yet it is highly recommended.
Dedicated Vim user for about 8 years here. I feel this pain. My setup feels hacky.
Yet I’m not willing to lose the power of modal editing + command line integration (see my post, “Making Vim, Unix and Ruby sing harmony”), and I doubt any emulation will satisfy me.
Kakoune is the first editor I’ve seen that looks like it might be my future tool of choice; the author is like “I see why modal editing and Unix integration are great and I intend to do them better than Vim.” If the usability is also better, I’m sold. I’m just waiting to find the time and an easy enough onramp to start learning it.
I second the Kakoune recommendation. It lacks plugins but is very promising. I missed CtrlP for example when I tried it.
micro is more nano like but it’s still a pretty good editor.
Have you tried spacemacs out? It’s really much better for customizing the editor than vim, but still has a very well done version of vim emulation. I consider myself a vim power user, and spacemacs is very compelling if you don’t want the hacky feel. magit is a phenomenal package (it shows just what a decent programming language inside the editor gets you) and has changed the way I look at version control.
I tried Kakoune for a week around New Year’s. And it was surprising how deep I got into it. In the end, though, I found myself deeply incompatible with it.
In spite of being modal, Kakoune’s shortcuts started to feel like Emacs. Default chords like alt-C would require pressing three keys at once.
Kakoune’s scripting language started to feel just as hacky as Vim, albeit in different ways. More minimal, but a lot more nested-string escaping. If I was willing to put up with chords, why not just use Emacs and get a real language to script my editor with?
Critical keyboard shortcuts had no equivalent. { and } for navigating by paragraph. , was impossible to make work as I expected. X for deleting a character backwards. You can’t just hit w to skip ahead one word and then hit i to start typing. You have to hit w, then ; to reset the selection before you can start inserting. Just felt weird.
Kakoune has no window management. Instead you’re supposed to just use tmux. That felt nice and orthogonal in theory. In the spirit of Unix. But in practice it turned out to be not so nice. For example, I could open multiple windows in tmux panes, but Kakoune would always be in the directory I first started it in, rather than the directory I opened the current window at. There was no sequence of autocommands and scripting that would allow windows to have their own directories like Vim has. I think the same idea may apply to Rob Pike’s criticism of the proliferation of commandline flags in Unix: it’s easy to criticize Unix on principle, but when you get into the details it is perhaps not so bad.
Perhaps I was just too programmed by Vim. I don’t know. Interesting, mind-broadening experience for sure. The lesson I was left with was something fundamental about software: we still don’t know how to unbundle the timeless essence of a piece of software (in the case of Kakoune, the object-verb grammar and multiple cursors) from the slow accretion of more arbitrary design choices like how you navigate by paragraph or the implementation of f and ;. Back in the git history of Kakoune is a sweet spot with a more compatible and elegant text editor. It may have missed some features, but it didn’t have features that conflict with the habits of anyone on Earth used to Vim.
I had this exact same experience. Initially I went to emacs (and I still love bits of emacs - org-mode is outstanding) but for day to day editing I’ve transitioned to Visual Studio Code.
It’s very featureful and its extension language is Javascript, which I’m finding much easier to wrap my head around.
I love kakoune too, I think it’s heading in a very interesting direction. The always-on “visual mode” is great, and the use of standard UNIX tools to extend the editor makes a lot of sense. I implemented a simple “netrw”-style extension with little effort and calls to ls.
One thing that bothers me, though, is the lack of a simple C key to replace until the end of the line, as in Vim. I use this very often, and maybe I just missed something, but it’s just not that quick and easy in kakoune; it would require a custom mapping or something, I believe.
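For what it’s worth, a mapping along these lines might restore Vim’s C; this is my untested sketch, assuming `<a-l>` still selects to the end of the line in current Kakoune:

```
# kakrc: C = select to end of line, then change (delete + enter insert)
map global normal C '<a-l>c'
```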
inoremap jk <esc>
inoremap kj <esc>
A home-row escape that is faster than jj (which I used for a long time): you can just mash both keys at the same time and it will get you out of insert mode. It has the benefit of being effectively a no-op in normal mode too (j then k just moves down and back up).
Well, fd and df are both prefixes of default commands (fd finds the letter d; df<char> deletes up to the next <char>), so I don’t have them bound to anything other than the normal find and delete in normal mode, and in insert mode they just insert f and d separately.
I’ve worked with people using Macs for film work and it’s actually been kind of sad to see them withering on the vine without any proper REALLY high-end machines from Apple for the last 4 years.
If I was managing this one, I’d try to make:
Then I’d release a new one every year or 6 mo, whatever schedule, with just tiny improvements to the case design, mostly all the same mechanical parts and updated CPU/GPU/motherboard. Keep making up new adaptors to make the water cooling system fit new chips and cards and you could keep the same basic design going for a decade.
My reasoning is that the target market for a Mac Pro is people who like OS X but have an unlimited demand for computing power, and who would otherwise be looking at moving to a Hackintosh right now.
tl;dr I’d make a giant hackintosh with official support
Other than the case being heavy compared to a normal PC, the aluminum-body, grille-front Mac Pro was a great machine with a very nicely made design. Not sure why they thought that smaller was better for that form factor; it’s not like people typically move these machines very often.
Have you looked at the HP Z series? I use some of the older Z620 machines, they’re quite nice. I’m not a huge fan of water cooling, heat pipes to radiators seem like a more reliable system.
Pentadactyl cannot be implemented on top of either Chrome or the new useless FF addon API.
https://bugzilla.mozilla.org/show_bug.cgi?id=1215061 is being worked on. Mozilla wants to keep extensions working without giving them unfettered access to the browser, so that internal changes can’t break extensions (and, by extension, make Firefox look bad), and so that the memory leaks and slowdowns caused by bad extensions go away. They still want extensions like Pentadactyl to work and are increasing the surface area of the WebExtension API.
This is exactly my reason too–Firefox (or more specifically the Mozilla platform) was built on the principles of internal reprogrammability and giving the end user the same power over the program as the core developers. I don’t want to be stuck with a second-class API that can only do what the core devs think outsiders will use but isn’t good enough for them to use themselves.
The hope lies in qutebrowser, with the QtWebEngine backend it’s already decent (but not quite there).
My understanding about that browser was that the underlying rendering engine was not receiving security updates; is that no longer the case?
edit: “Next, we have to do something about QtWebKit, the other big WebKit port for Linux, which stopped receiving security updates in 2013 after the Qt developers decided to abandon the project.” doesn’t look promising https://blogs.gnome.org/mcatanzaro/2017/02/08/an-update-on-webkit-security-updates/
Qt, in its entire tree of libraries, includes both QtWebKit and QtWebEngine. QtWebKit is outdated; it was basically a one-time code import that was never touched again, based on WebKit code from long ago. (IIRC, it was from even before Chrome forked off into Blink, so very, very old.) QtWebEngine is based on Chromium, uses libv8, and is generally better maintained.
qutebrowser is a web browser chrome/shell originally wrapped around QtWebKit. Over the past year, the author ported it to QtWebEngine. If you want a very vim-like experience and UI, I’d highly recommend it!
Interesting; they should update the web site, which still says it uses QtWebKit.
Unfortunately neither engine receives security updates on Debian, but users on other OSes might have better luck.
Edit: all the installation documentation on the web site and on github tells you how to use QtWebKit. I have a very difficult time trusting a project which encourages its users to install software which has hundreds of known security vulnerabilities; this is grossly irresponsible and can’t be excused by having an undocumented branch or flag somewhere that remediates the problem.
I personally use zenburn (Screenshot)
I don’t understand the popularity of Zenburn; it is really hard to see for me. Does anyone else experience this?
I have a strong preference for zenburn, but in spite of regularly searching for replacement themes, my eyes seem to find zenburn the most pleasant. I have not been able to determine why.
I would also be interested if anyone were to come up with a plausible hypothesis. (at least solarized has some sort of perceptual story to it :)
Zenburn is pretty nice, though somewhat muted. Monokai is a clean and readable theme if you like things a little less muted.
That’s the whole point of themes. Everybody can get what they want/need. Zenburn is totally unusable for me - I need a super high contrast theme.
I like the Deeper Blue Theme which I find to have good contrast and high readability for my crappy vision :)
After some years I found myself preferring light themes and at the moment I’m settled with spacemacs-light (which works even if you don’t use spacemacs).
I really like it, but agree that it’s too washed out, so I just shift the background colours down a notch to a darker version with more contrast. Specifically, in Emacs:
(use-package zenburn-theme
:init
(defvar zenburn-override-colors-alist
'(("zenburn-bg-2" . "#000000")
("zenburn-bg-1" . "#101010")
("zenburn-bg-05" . "#282828")
("zenburn-bg" . "#2F2F2F")
("zenburn-bg+05" . "#383838")
("zenburn-bg+1" . "#3F3F3F")
("zenburn-bg+2" . "#4F4F4F")
("zenburn-bg+3" . "#5F5F5F")))
:config
(load-theme 'zenburn t))
This is one of the most compelling features of magit, and what made me actually use it over the git cli.
M-x magit-status launches a window that contains all your staged and unstaged code, and you can highlight individual lines of the diff to remove or add them to your staged changes. It also includes a stash UI as well.
God yes, that and +/- to expand/decrease where the hunks start (be careful getting it too close to one line). s/u to stage/unstage things (can even do this if you highlight stashed hunk as well, soooo nice).
It really is the best git ui i’ve used.
Funny you should mention magit. So, I’m learning Common Lisp, and I was using Atom + Slime. Then I got really annoyed by Atom (I’m sure I could have reconfigured it given some study, but the Slime integration started to get wonky too). I took the plunge and have started using emacs (which isn’t that bad, once I figured out what M-x meant :P). Then I read a thread on HN about Atlassian buying Trello (the comments are hilarious) but ran into magit, and learning about magit I learned about hunks, which seem to me one of those things about git that should be advertised more prominently.
Or use @andrewshadura’s git-crecord.
I do this workflow all the time using mercurial, trying to keep my commits as atomic as possible.
I remember the first time I needed to stage split chunks of a file and thought “I wonder if I can just highlight these lines and hit s… I can!” Bravo magit devs.
Whenever I read tech articles about reducing keystrokes I tend to roll my eyes.
cd'ing between directories already takes up a very small portion of my time, so optimization will never be worth it. Now if you can tell me how to make roadmap estimations that don’t put my team in peril, that would help me not waste my time!
Edit: It’s a cool tool; it’s just that maybe the article touts it as more of a life saver than it actually is.
I mean, I do too, but people do actually take this kind of thing seriously. I’ve had several people say they wouldn’t use ripgrep because the command was too long to type, but upon hearing that the actual command was rg, they were much more satisfied. Maybe I missed their facetiousness, but they didn’t appear to be joking…
Could they not have just aliased the command if it was “too long”?
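Indeed, a one-line alias dissolves that objection entirely; a sketch, where the alias name and flags are my own invention:

```shell
# If a command feels too long (or too short) to type, alias it to taste.
alias search='rg --smart-case'
```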
The people in question don’t sound clever enough for that.
Are you asking me? Or them? ;-)
I wonder if these are different people than the ones who complain about short unix command names and C function names…
For those of us with RSI, these little savings add up, and can make for a pretty big difference in comfort while typing.
Oh please. If you were really worried about a couple of words and saving keystrokes, you’d set up directories and make aliases that take you specifically where you want to go. Even if it were a GUI you were using with a mouse, you’d still have to click through all the folders.
Overall, paying close attention to your workspace setup and ergonomics will go a lot further toward improving your RSI situation than this little jumper ever will.
My thoughts exactly. I have often wasted time trying to optimize something that took so little time to begin with that even reducing it to nothing would have no significant impact on overall performance. And the less obvious trap is that optimizations like this add complexity, which leads to more time spent down the road.
All right, buddy. Cool.
Did I say it was a “life saver”? Nope. Did I say it could save you a lot of time? Yup. If cd'ing into directories doesn’t waste your time, cool. Move along, read the next blog post on the list.
I’m sorry about your roadmap estimations. Sounds like you’ve got a lot on your chest there.
Let me just take a step back and apologize—nobody likes negative comments on their work and I chose my words poorly and was insensitive. I’m rather burnt out and, in turn, that makes me appear more gruff online. I’m positive that someone will find this useful, especially if they’re managing multiple projects or similar use cases.
I really appreciate you saying that. The whole point of this piece was to share something that literally makes me whistle to myself with joy every time I use it. I hope you find some time to take care of your burn out. It’s no joke and I’ve suffered from it quite a bit in the past three years myself. <3
I know it’s easy to look at everything as “this is just like X but not quite the way I like it” and I don’t blame you for having that reaction (like many here). AutoJump is to me the epitome of simple, delightful software that does something very simple in a humble way. I wish I had spent more time extolling the virtues of the simple weighted list of directories AutoJump stores in a text file and that ridiculously simple Bash implementation.
The focus on characters saved was a last-minute addition to quantify the claim in the title, which I still think will be beneficial to anyone who has even remote frustrations about using cd often and suspects there is a better way.
If only there were a way to optimize crank posting. So many keystrokes to complain!
The parent tool is probably overkill, but a simple zsh function to jump to marked projects with tab completion is pretty awesome to have.
I’ve tried this, but I keep ending up making shortcuts and forgetting about them, because I never train myself well enough to use them until they’re muscle memory.
I think I’ll just stick to ‘cd’ and also extensive use of ctrl-r (preferably with fzf)
And then you go to a work mates computer, or su/sudo/SSH and it’s unusable :)
Well, this is one of the most useful shortcuts in my arsenal. Type j <tab> or jump <tab> and it completes all the marked directories. If you get over the initial forgetting-to-use-it curve, it’s amazing and simple (just a folder in your home dir with a bunch of symlinks, and a few helpers to create those).
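For reference, a minimal version of that symlink-folder setup might look like this (function and variable names are mine; the zsh tab-completion wiring is omitted, and the helpers themselves are plain POSIX shell):

```shell
# "Bookmark and jump" via a directory of symlinks.
MARKPATH="$HOME/.marks"

# jump <name>: cd to the directory the mark points at (-P resolves the symlink)
jump() {
    cd -P "$MARKPATH/$1" 2>/dev/null || echo "No such mark: $1"
}

# mark <name>: bookmark the current directory under <name>
mark() {
    mkdir -p "$MARKPATH"
    ln -s "$(pwd)" "$MARKPATH/$1"
}

# unmark <name>: delete a bookmark
unmark() {
    rm -f "$MARKPATH/$1"
}

# marks: list bookmarks and their targets
marks() {
    ls -l "$MARKPATH" | awk '/->/ {print $(NF-2), $(NF-1), $NF}'
}
```

Hooking `jump` up to completion is then just a matter of completing over the filenames in `$MARKPATH`.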