High level: I recommend starting with a journaling and self-dialog habit, e.g. “it would be nice to do xxx by Feb”. If you keep it searchable, you can easily reference your ideas.
Qualitative ones: a Google Doc or a Google Keep note.
Quantitative ones: a spreadsheet (e.g. financial goals, tax goals).
Reminders: followupthen.com, e.g. you can send an email to 90days@followupthen.com.
Disclaimer: I’m very sloppy at this.
I have read about https://www.monicahq.com/ as an example. Never tried it. Have you tried it?
Personally I find the concept a bit … autistic/creepy, but I’ve still considered it as a possibly useful tool.
If you think it might be useful, but found a dedicated CRM app a bit much, have you tried using the notes field in your phone’s address book? I use it to jot down names of kids & spouses and things like “vegan”, “teetotal”, “pronounced […]” etc. They’re synced everywhere automatically and they’re searchable in a hurry from your phone.
I think it may seem creepy because of associations with corporations and marketing.
However, when I actually think about it… Would my life be richer and better if I was more consistent about staying in touch with people? Almost certainly!
I tried this but had difficulty getting the self-hosted version to work. As for the creepiness, I think of it as just a memory extension. It isn’t anything someone with a good memory couldn’t do; it just helps mortals remember birthdays, people’s interests, etc.
I found this one a while ago: https://www.monicahq.com/ (not affiliated)
It needs a lot more automation to become useful IMO.
A mostly text-based shell interface to my computer that is not stuck in the last century: https://matklad.github.io/2019/11/16/a-better-shell.html
Interesting things happen with arcan-tui and userland. PowerShell and PowerShell-alikes are not the answer.
Pretty much agree with your post. Removing the distinction between shell and terminal emulator would allow new and interesting modes of operation. One of them could be pausable and introspectable pipes. Another one could be remote SSH sessions that have access to the same tools as the local one.
The first paragraph of the post explains that I am not looking for PowerShell. It is indeed a big improvement over bash, but in areas I personally don’t care about.
If you read the post, this isn’t what the OP is going for. PowerShell brings some excellent new capabilities to the table with object pipelines, and has some nice new ideas around things like cmdlets and extensibility, but the post goes into much more detail about user-experience aspects that PowerShell doesn’t even come close to providing.
Why does cargo test block my input? Why can’t I type cargo test, Enter, exa -l, Enter and have the program automatically create the split?
What I really want is an extensible application container, a-la Emacs or Eclipse, but focused for a shell use-case.
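For what it’s worth, here’s a rough sketch of that non-blocking idea in Python (my own illustration, not from the post): each typed command is spawned immediately and its output goes into its own buffer, which a real UI would render as a separate split instead of blocking the prompt.

```python
import subprocess, threading

buffers = {}   # one output buffer ("split") per command
threads = []

def run_in_split(cmd):
    """Start cmd right away and collect its output without blocking the prompt."""
    buffers[cmd] = []
    def pump():
        proc = subprocess.Popen(cmd, shell=True, text=True,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        for line in proc.stdout:
            buffers[cmd].append(line.rstrip())
        proc.wait()
    t = threading.Thread(target=pump)
    t.start()
    threads.append(t)

# "cargo test", Enter, "exa -l", Enter: both start immediately;
# a real shell UI would draw each buffer in its own pane.
run_in_split("cargo test")
run_in_split("exa -l")

for t in threads:   # stand-in for the UI's event loop
    t.join()
for cmd, lines in buffers.items():
    print(f"--- {cmd}: {len(lines)} lines of output ---")
```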
I would like Oil to be able to support this kind of thing, and at least in theory it’s one of the most promising options.
And ironically, because I’m “cutting” the interactive shell, it should be more possible than with bash or other shells, because we’re forced to provide an API rather than writing it ourselves.
I had a discussion with a few people about that, including on lobste.rs and HN. The API isn’t very close now, but I think Oil is the best option. It can be completely decoupled from a terminal, and only run child processes in a terminal, whereas most shells can only run in a terminal for interactive mode.
Related comment in this thread: https://lobste.rs/s/8aiw6g/what_software_do_you_dream_about_do_not#c_fpmlmo
Basically a new “application container” is very complementary to Oil. It’s not part of the project, but both projects would need each other. bash likely doesn’t have the hooks for it. (Oil doesn’t either yet, but it has a modular codebase like LLVM, where parts can be reused for different purposes; in particular the parser has to be reused for history and completion.)
Amusingly, using :terminal in neovim changed a lot of things for me. I could then go to normal mode and go select text further up in the ‘terminal’. Awesome!
Yeah, this speaks to some of the power he references in his post that Emacs brings to the table. IMO one of the things that makes neovim so impressive is that it takes the vim model but adds Emacs-class process control.
I’d love it if people would do more with the front end / back end capabilities neovim offers, beyond just using it for IDE integrations and the like.
I like his suggestion to make syntax highlighting modal, with different dimensions: e.g. ctrl-alt-1, 2, etc. could be assigned to block-level, brace-level, and symbol-level highlighting.
Many of these managed services like S3 are black (or at least grey) boxes. The edge case SLAs can be unpredictable and when you experience issues like multi-second latencies, it’s impossible to debug.
Running your own service means you can strace, gdb, top or even add printf() statements to debug. The power is in your hands.
DISCLAIMER: I work for AWS Elastic Filesystem.
Running your own service means you can strace, gdb, top or even add printf() statements to debug. The power is in your hands.
I’d argue that running strace or gdb on your database instance should be seen as an antipattern, unless you’re breathing some VERY rarefied air and doing incredibly specific work where you’re using the database in an incredibly specialized way.
And, if you are in fact in that 10% who have a valid need to do this, you probably wouldn’t even consider managed services anyway.
They’re for people who want to treat infrastructure like LEGO. There are some valid reasons to hate this conceptual model, but I claim that the industry has produced enough counter-examples to at least put up a good fight.
I think this downplays significantly how broken computers are on a fundamental level.
It’s also not a nice sentiment to have when you’re the person who has the control and I, the user, do not.
It’s a simple fact that systems will fail in loud, quiet, and interesting ways. When you rent a managed service, it’s essentially the same thing as buying a support license from a vendor for a black-box unit that sits in your datacenter.
Sometimes the support staff have little incentive to help; sometimes your unit fails intermittently and it’s difficult to have support staff on hand during the issue; and, often, the support price goes up as the product gets older, meaning you have to rebuild your applications to use version 2 of whatever product it is.
My situation is very simple: I’m responsible.
I cannot outsource that responsibility. I can mitigate the risks, but if everything fails my company will come to me first; it was my choice to use a managed service, and the responsibility is mine if it fails.
Managed services are great, though! In theory there are many operations staff maintaining them so that I don’t have to. But the flip side is that I get a cookie-cutter variant of something with no control if it goes down, and developers are constantly pushing changes to the service without my knowledge, which can impact things (for better or worse).
You’re quite lucky to be working for the dominant cloud provider too by the way, because when you have issues your customers are not going to be beating down your door too much. “If Amazon goes down, half the internet goes down too, so we’re not too worried” is a sentiment I’ve heard developers mention often.
The Google Cloud and Azure guys don’t have this luxury; and you’ll note that it’s never the CTO of a >500 person org who is making this claim either.
You’re quite lucky to be working for the dominant cloud provider too by the way, because when you have issues your customers are not going to be beating down your door too much. “If Amazon goes down, half the internet goes down too, so we’re not too worried” is a sentiment I’ve heard developers mention often.
I am incredibly lucky across a number of axes. I love working here. For me it’s an unbeatable combination - great challenges, great people, great culture. I am also incredibly lucky to be on a great team with a phenomenal manager.
I think there are two good points here.
For one, your DB daemon is just a process. It has a stack and a heap. There’s no reason to be intimidated about doing traditional debugging (gdb, strace) to identify latency issues.
Your other example about EFS is an even more accessible candidate for debugging. Let’s say I was experiencing 500ms write latencies on EFS. I would have to go… RTFM and hope there’s some 99th-percentile case I’m missing. On my own NFS server I would jump over there and strace & gdb the daemon to see where the time was spent.
I’m guessing both cases would waste my time just as much, but the 2nd case would empower me with both the understanding of the problem and the tools to get to the solution (patching the daemon and sharing the patch upstream).
The only antipattern I see here is telling devs that they are not sophisticated enough to gdb a daemon. There’s no magic in software. Programs are deterministic artifacts with stacks and heaps, differing from “hello world” only in size, not in nature.
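If it helps, here’s a small illustrative probe (mine, not from the thread) for the EFS example above: before reaching for strace or gdb, you can at least quantify the write latency you’re seeing from userspace. The mount point is a placeholder.

```python
import os, time

MOUNT = "/mnt/efs"   # placeholder: any local, NFS, or EFS mount point
samples = []

for i in range(200):
    path = os.path.join(MOUNT, f".latency_probe_{i}")
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(b"x" * 4096)
        f.flush()
        os.fsync(f.fileno())   # force the write out to the filesystem
    samples.append(time.perf_counter() - start)
    os.remove(path)

samples.sort()
p50 = samples[len(samples) // 2]
p99 = samples[int(len(samples) * 0.99)]
print(f"p50={p50 * 1000:.1f} ms  p99={p99 * 1000:.1f} ms")
```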
And just in case this sounds hypothetical, it’s not. I’ve had real-world experience on multi-million-dollar products where the cloud DB just stopped, or disappeared, or dropped a table unexpectedly. Our only recourse was a phone call, which usually involved a support upgrade and a long queue while waiting for a solution to be found. In one case this was a DB with a >$50k/mo license fee that just disappeared.
So the black-box issue on cloud is real. And cloud marketing is working to cover it up.
To be clear, I did not cite EFS as an example, explicitly because what we offer is a black box, and has to be, given the proprietary nature of the service.
That’s a part I’d like to see improved. I’m also a big cloud fan and heavy user, but the move toward more closed services is a bad trend for developers.
With some investment in tracing & debugging, improvements can be made.
My only hard-line stance is against “managed is always better”; there are enormous costs to using a managed service.
My only hard-line stance is against “managed is always better”; there are enormous costs to using a managed service.
Anyone who takes this stance is straight up ignorant.
There are all kinds of reasons why managed services might not make sense for a particular use case. Total control and debuggability is but one of them.
There are regulatory reasons, performance reasons, customizability reasons, and that’s just off the top of my head.
Anyone who takes this stance is straight up ignorant.
I think you two are in agreement though. Neither of you wants people to assume managed services are a silver bullet to every problem.
I used to work on a pretty big website. Logging was 80% of the storage cost, because some devs did not understand that stack traces do not belong in production. Many of the logs could have been metrics in the monitoring system. Logging was somehow a sacred cow, and the mindset was that it’s better to have more logs than we need than to risk missing something. Half of the logs were the same meaningless giant string repeated over and over, meaning nothing to anybody. Fun times.
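To make the “should have been a metric” point concrete, here’s a minimal sketch (mine, with a hypothetical statsd-style client): increment a counter on every failure and keep the expensive stack trace for a small sample only.

```python
import logging
import random

log = logging.getLogger("payments")

def record_failure(exc, metrics):
    # `metrics` is a hypothetical statsd-style client with an incr() method.
    metrics.incr("payments.charge_failed")   # cheap, aggregatable signal
    # Log the full stack trace for ~1% of failures instead of all of them.
    if random.random() < 0.01:
        log.error("charge failed", exc_info=exc)
```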
A process suggestion
raw notes → personal briefs → shared posts
Raw notes (e.g. Evernote, Google Keep): these should be fast and consistent across devices, and should support hashtag and keyword search so you can easily consolidate things. Examples would be periodic journal entries, a list of favorite albums, business ideas. You should be able to write a note within 2 seconds.
Personal briefs (e.g. Evernote, Quip, Google Docs): strategic one-pagers catered to deeper thinking on a topic. These need to support linking and keyword search. An example brief might be “survey of frontend frameworks” with pros & cons.
Shared posts (hosted markdown → HTML): larger audience, hosted on your blog or someone else’s if that’s your thing. Expand and clarify your personal briefs with context, links, and better language to appeal to a larger audience.
I’ve tried several things so far. It looks like a lot, but that’s because I tried all of this over a span of 15-20 years or so.
I’ve been using the last thing for about 3 years now and I don’t see myself switching any time soon.
It’s an uncommon setup but I ended up with it because:
Edit: ah, worth pointing out. A lot of this is obtained by distilling notes that I take on paper, but a lot of that doesn’t lend itself easily to wikifying, so I have a bunch of old-fashioned folders around. These are mostly on non-technical subjects, but there’s a bunch of tech stuff in there, too. However, I do go through them periodically and sometimes throw away some of the stuff that I definitely suspect I won’t care about, not even for nostalgia.
didiwiki → TiddlyWiki (https://tiddlywiki.com/): I used this for a while as a precursor to Evernote
I have almost finished building a static version of gitit that basically doesn’t require caring about having Haskell on your system. The binary is nearly 200 MB, but then you don’t need to worry about having GHC in whatever environment you launch it in.
That would be the second best thing after sliced bread, topped only by tattie scones!
Edit: actually I think in the revised hierarchy of things, the correct term is “the best thing after tattie scones”
Yes! I am still genuinely curious why so many people tried gitit back in 2012 and then abandoned it, including its own author. It compares very well even in 2020 with most wiki software and has way less lock-in.
The Haskell thing was the biggest problem for me, I think. It had a lot of dependencies. Some versions of said dependencies were mutually incompatible. Some of them weren’t packaged by every distro and I had to install them by hand. Plus, at the end of the day, I had like half a gigabyte of Haskell-related stuff that I never used for anything (I played with Haskell at some point in the mid noughties but we just didn’t get along…) and filled my screen (and bandwidth) every time I ran pacman.
(Edit: for anyone reading, please remember this was almost ten years ago. There are a lot of smart people in the Haskell community, I expect some things are better nowadays)
Then some of the dependencies (or gitit itself? I don’t remember) became unmaintained, too, and it became pretty clear that there was a good chance none of this would be around in another ten years or so. I figured MediaWiki would be around for as long as Wikipedia is, so that might be a better choice.
Update:
So I found out that you can use environment variables to change where gitit looks for static files.
https://github.com/JustusAdam/gitit/blob/master/unix-proxy.sh
So using that script, combined with adding -static flags to the cabal file, made it possible to deploy a binary tarball to friends that seems to work at least on x86-64 Linux. If you are still interested I can PM you this tarball.
https://www.reddit.com/r/haskell/comments/9on8gi/is_there_an_easy_way_to_compile_static_binaries/
It’s a little quirky and has a bunch of hardcoded things that probably make it useless for others, but I’ll see if I can find some time to clean it up.
That being said, it’s basically a completely trivial, 200-line CGI script.
It probably takes less time to write one from scratch than to figure out how to use my 200 lines of poor-taste, uncommented Python.
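For anyone curious what “write one from scratch” could look like, here’s a hypothetical, much smaller sketch (not the script referenced above), assuming a pages/ directory of Markdown files and the markdown package:

```python
#!/usr/bin/env python3
# Hypothetical minimal wiki page renderer, CGI-style.
import os

import markdown  # assumption: pip install markdown (any renderer would do)

PAGES_DIR = "pages"  # assumption: one .md file per wiki page

name = os.environ.get("PATH_INFO", "/").strip("/") or "index"
path = os.path.join(PAGES_DIR, os.path.basename(name) + ".md")

print("Content-Type: text/html; charset=utf-8")
print()
if os.path.exists(path):
    with open(path, encoding="utf-8") as f:
        print(markdown.markdown(f.read()))
else:
    print("<h1>No such page</h1>")
```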
I understand this is not a work of art. My interest was piqued because we seem to have investigated similar alternatives, including gitit, and I thought it might be interesting to see what you came up with.
It’s dangerous to assume that the amount of complexity in a given solution is fixed. Much of the complexity stems from the approach.
I think a good practical example of this was CVS → SVN → Git. SVN assumed that the complexity was inherent in SCM systems, and retained most of the complexity from CVS (while patching some of CVS’s flaws). Git redesigned the data model and removed much of the complexity from the system. Git was able to retain the consistency, while making almost every operation faster, because a better data model reduced overall complexity.
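As a tiny illustration of what that redesigned data model looks like (the fact is git’s; the sketch is mine): git names every object by a hash of its typed content, which is a big part of what keeps the rest of the system simple.

```python
import hashlib

def git_blob_id(data: bytes) -> str:
    # git's object id for a blob: SHA-1 over the header "blob <size>\0" plus the content
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

print(git_blob_id(b"hello\n"))  # same id `git hash-object` prints for this content
```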
Right, there is no upper bound to complexity, but this is about the lower bound (that there exists one).
The prediction is that oversimplifying leads to a system that can’t deal with the real world, or forces the user to invent the rest. I think your SCM example shows that very well: while Git isn’t exactly a good study in user-friendliness (oh boy), its data model fits much better with workflows (distributed) and actions (merging) where CVS and SVN were essentially useless.
I still don’t understand why people write REST APIs by hand. They should be code-generated.
Can you expand on this sentiment?
Most probably, the parent comment refers to using some form of OpenAPI spec, which can be done with the oapi-codegen tool for Go.
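For context, here’s a minimal, illustrative OpenAPI fragment (the names are made up); a generator like oapi-codegen consumes a spec like this and emits server stubs and typed models, so the handler plumbing never has to be written by hand.

```yaml
openapi: "3.0.0"
info:
  title: Pets API        # illustrative example spec
  version: "1.0"
paths:
  /pets/{petId}:
    get:
      operationId: getPetById
      parameters:
        - name: petId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: A single pet
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: integer }
                  name: { type: string }
```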
Oh ok, so kind of like gRPC.
Maybe they meant something like Apache Thrift and similar tools.