This is a really clever idea. Being able to set an optical switch without continuous power is something I hadn’t seen before. I didn’t follow all the device-level details, but then, I worked on a photonics project for five years and I never really did. :)
I skimmed this to see what the loss numbers for these devices would be - that’s what we usually focused on when trying to design system architectures that would be competitive with electronic networks.
Looking back at the last paper I was involved in about this [1], it seems like we needed basically every device along a waveguide to have a loss of less than 0.75 dB, and for the number of devices along a path to be ten or fewer (IIRC), for a switched optical network to be competitive. It looks like each of these programmable switches has ~2 dB loss, so you’d blow your budget pretty quickly.
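To make the back-of-envelope math explicit (these numbers are from my recollection, not from the paper under discussion):

```shell
# Rough loss-budget check using the figures above: ten devices at
# 0.75 dB each gives a 7.5 dB path budget, so at ~2 dB per
# programmable switch you'd only fit about three before blowing it.
awk 'BEGIN {
  per_device  = 0.75   # max tolerable loss per device, dB (assumed)
  max_devices = 10     # max devices along a path (assumed)
  per_switch  = 2.0    # approximate loss of one programmable switch, dB
  budget = per_device * max_devices
  printf "path budget: %.2f dB, switches that fit: %d\n", budget, budget / per_switch
}'
```

So even before counting any other devices on the path, these switches alone eat the budget three deep.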
Also as an aside, I’m still surprised to read new papers about this kind of stuff that completely omit the work our collaborators were doing (and publishing) at Sun & Oracle. It’s just weird to me. I don’t really recognize these authors, but I met many of the people they did cite, and they certainly knew our people.
We currently use CVS at $job for a large number of our projects, mostly because that’s what had been in use here for some 30 years. It’s great for our use-case (one branch, many developers), but we’ve been migrating projects one-by-one to git very slowly.
Git feels much more powerful, but also more complex for basic operations like merging and rebasing. Our integration/workflow with Eclipse could be hindering some of this, but it’s almost second nature to use CVS in the IDE.
I do use git quite a bit for personal projects, but mostly just through the terminal and with minimal merging/branching.
magit under emacs is really, really good if you’re jonesing for proper editor integration with your revision control system.
I used CVS for a few things back in the day, and SVN more - but I can’t remember either well enough to know how you’d even handle things like rebasing - to me that’s a term that almost doesn’t make sense outside of modern VCSes. So when you said it’s more complex, I was surprised. I do ‘git merge’ and ‘git rebase’ regularly; it’s an everyday part of the workflow. Rebasing in particular makes keeping long-lived branches in a state where they can be cleanly merged a much saner proposition.
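To make that concrete, here’s a minimal sketch of the rebase-then-fast-forward flow in a throwaway repo (branch and file names are invented for the demo):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"
main=$(git symbolic-ref --short HEAD)   # default branch name varies across git versions

echo base > file.txt
git add file.txt
git commit -qm "initial commit"

git checkout -qb feature                # start a long-lived feature branch
echo "feature work" > feature.txt
git add feature.txt
git commit -qm "feature: add feature.txt"

git checkout -q "$main"                 # meanwhile, the mainline moves on
echo more >> file.txt
git commit -qam "mainline change"

git checkout -q feature
git rebase -q "$main"                   # replay the feature commits on top of the new tip

git checkout -q "$main"
git merge --ff-only -q feature          # succeeds: the merge is a trivial fast-forward
```

Without the periodic rebase, that last merge would need a merge commit; with it, the feature branch always sits directly on top of the mainline, so conflicts get resolved a little at a time instead of all at once at merge time.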
+1 for magit in emacs also, btw. It’s a power tool for git.
Cheap branching and local commits are the biggest selling point for git’s usability over CVS/SVN, because I can be so much more confident that I’m not going to lose work. I can save commits, use temporary local branches, and never worry about accidentally destroying my local changes while doing a merge.
I use emacs’ org mode to do something kind of like GTD. Org Mode suffers from the same problem as much of emacs: it’s so flexible and customizable that you can do anything you want, so everyone has done something kind of different, and if you want real workflow tools you’re generally stuck spending a lot of time editing customization or writing your own. It’s also a real problem for good mobile support, because no mobile client will do everything that full org-mode supports, so the audience for a particular set of mobile functionality is small. There’s a decent stab at a modern iOS org-mode app that syncs with Dropbox now - Beorg - but I fear for its longevity.
The main things that are important to me in such a system are an inbox I can add to from anywhere, and a way to schedule TODOs in the future, including repeating ones. Finally, support for a ‘review’ step is really important to make sure you’ve actually looked at everything recently. I use a bunch of fiddly custom org “Agenda views” for this.
I’d really love to have something that did what I need without being the endless pile of yak fur that is org mode and emacs, but I haven’t found it yet.
I can’t log in to Zulip. Logging in with GitHub just loads the login page again. I reset my password (I never set one in the first place), which appeared to succeed, but logging in with my email/password just loads the login page again, just like with the GitHub login. 🤷‍♀️
FWIW, I had this same problem with Safari 11.1, and tried Chrome (65.0.something) and it worked fine.
This is a fun question.
I feel like what you describe as “a single-ish critical path of information flow” sounds interesting, but it isn’t really how I usually think of a call stack. If we think of a call stack visualization as showing us (the control-flow part of) the state of a program at a particular time, we could just extend that metaphor to other layers of the system. This would get so big, it’d be great to see it as an interactive zoomable thing.
Unfortunately I don’t know of off-the-shelf tools for exactly this. I think @yumaikas’ suggestion to look at virtual machines is a good start, since I assume there’s a way to inspect their state (haven’t done that myself) - you could also look at tools like valgrind for collecting lots of process- and thread-level detail, and once you want to collect info on the hardware, run things in a full-system architectural simulator like Gem5. It’s real slow, but you can look at everything in a very realistic system that runs real software.
I’m imagining a layered visualization with the following rough layers. Each level down would try to connect to the level above where possible without being too confusing, with lines showing e.g. how a virtual address in the program’s text segment gets mapped down through memory and caches to the instruction the program counter points at in the processor.
Then at the next level you see some kernel structures, and I think you could get at some of this with perf tools.
Now the hardware, which you’d need to instrument an architectural simulator to get:
As for anything lower level than this, I think you’d have to do some editorializing because of the scale of things. You could say that you’re on a RISC-V core, so you can actually get access to the hardware description, and just show e.g. one adder in one stage of the pipeline that’s executing our instruction, and just fake its contents for the visualization. I’m not really familiar with tools at this level, maybe there’s a better way, I’d be curious to hear.
After looking at the front page of the gem5 wiki you linked to, I decided to do a web search for gem5+screenshots and I got this result:
…I think that GUI is an add-on called Streamline? I just scrolled through looking at the pictures. :)
This would get so big, it’d be great to see it as an interactive zoomable thing
Yeah I imagine it to be huge. My ballpark guess is that for a piece of desktop software modelling from ‘User’ or ‘Application’ abstraction all the way down through assembly code on the CPU to e.g. firmware running in a NIC or keyboard controller there are 500 layers of software abstraction if you could somehow find an example of a ‘longest path’. (This is using the definition of each layer of abstraction roughly corresponding to e.g. a ‘class’ in OO code or function / module in non-OO, the point being abstractions rather than LoC)
I was hoping to get pieces of the whole thing from people who might specialise in those parts and manually stitch them together.
It sounds like you’re going into visualising the actual content of each layer of abstraction? That sounds like a huge challenge, but would be fascinating; I was just going for getting a single long list of labels for each layer (i.e. class/function name) ;-)
I think the idea of a camera that gets great shots automatically is really appealing. It’d be great for getting pictures of kids on Christmas morning, for example.
However, I was ready for this to be always connected, and require an account, being a total privacy nightmare. I’m pleasantly surprised that they seem to be taking this really seriously, though - you don’t need a connection or an account to use it - processing happens on the device - and it doesn’t auto-upload.
I’m still inclined to mistrust it, but I admit it has me asking myself what more they could do to make it palatable for privacy.
Nice list - As a former compiler & optimization person, I came looking for Frances Allen and was not disappointed.
I have a couple suggestions that I know from my old areas of study:
Jeanne Ferrante (UCSD): “ACM Sigplan Programming Languages Achievement Award for the development of Static Single Assignment (SSA) form (with Ron Cytron, Barry Rosen, Mark Wegman and Ken Zadeck). SSA is a program representation that yields faster, more compact and powerful program optimizations, and the award recognizes SSA as a “significant and lasting contribution to the field of programming languages”.”
Susan Eggers (UW): “With her colleague Hank Levy and their students, she developed the first commercially viable multithreaded architecture, Simultaneous Multithreading, adopted by Intel (as Hyperthreading), IBM, Sun and others and the winner of the 2010 ISCA ‘test-of-time’ award.”
And a couple who are maybe less decorated, but I thought might be worth pointing out:
Sally McKee (bio): She co-wrote the 1994 article in “Computer Architecture News” titled “Hitting the Memory Wall: Implications of the Obvious” that was possibly the first time many computer architecture researchers really thought about what happens when CPUs keep getting faster than memory. In 2004 she wrote a retrospective: “Reflections on the Memory Wall” (it contains the full text of the original). She’s still working in architecture & performance.
kc claffy (UCSD/SDSC) (bio): She and CAIDA had been doing global network measurement and analysis for years when I showed up at SDSC and was wowed by the kind of visualizations of the entire Net they’d hung up around the place. I don’t have enough context to know how far ahead they were, but it was definitely impressive.
From TOPLAS 2011. More info here: http://www.cs.umd.edu/projects/PL/locksmith/index.html
Does anyone who’s more familiar with static analysis have the time to give context? I was surprised it was 6 years old, because I had thought this kind of analysis on C was really Hard. I was ready to believe it was a significant new result.
Has this had a lot of impact? They mention hoping that it would help other analyses, in their conclusion.
I don’t know enough about the general state of race detection to know if the technique in this paper is particularly efficient or effective, but static analysis tools for detecting race conditions in C and C++ have been commercially available for a while. Coverity has been able to do it to some extent for 10 years. Intel also has a tool.
It always surprises me how far behind the open source solutions are when it comes to static analysis on C and C++. Maybe there’s so much money to be made selling analysis tools that nobody wants to release their code? Every job I’ve had writing C++ has used some kind of analysis tool, but we always have to pay for it because the OSS solutions are so far behind. I feel it’s getting better now that Clang is around, though.
Hearing stories from folks doing academic HPC, the answer is you can’t. Even with source code, there are often a bajillion stupid scripts, vendor-specific compiler fuckups, scheduling issues, hardware failures, silent data corruption, and everything else. And that source code - the grad student who wrote it is long gone, and it’s probably impenetrable templated C++ with some Boost and MPI in there somewhere. Maybe some badly-ported Fortran code, if it isn’t linked in directly. If they’re in a really progressive lab, they might have git. Otherwise, it’s subversion and divergent branches and general clusterfuckery.
Oh, and some of them, when asked about this, will scoff and talk about how they are special academic-HPC snowflakes that decades of software engineering practice somehow don’t apply to. “We’re too large to log errors properly, debugging can’t be done like you’re suggesting, everything takes too long, we are very smart PhDs, etc.”
As long as the paper gets published, though, it’s fine. How many people have the machine hours to prove you wrong? And at what risk to their own equally-shaky research? Most people can’t even understand the abstracts of the papers they work on–who, exactly, is going to call bullshit?
Having done academic HPC, I can’t really disagree with anything you’ve said, at least not as of 7-8 years ago. I can only hope it’s gotten better. Also, there are some fields in which the potential impact of results makes it easier to get resources for professional software engineering practices, independent reproductions, and validation runs. In my experience, weather, climate, and computational fluid dynamics can get those resources.
If anyone’s interested in a readable overview of what the practice of software engineering in computational science looked like as of ‘08, this article by Victor Basili et al: Understanding the High-Performance Computing Community: A Software Engineer’s Perspective is a good start - I worked with all of those authors, and they’re all great. In particular Jeff Carver seems to be continuing this line of work with regular workshops on “Software Engineering for High Performance Computing…” at the Supercomputing and ICSE annual conferences.
If you’re wondering how seriously the scientific software quality problem is/was taken, much of that work on understanding HPC development processes was funded by part of a DARPA program for “High Productivity” computing systems, and the software engineering parts of it were de-funded unexpectedly in the last round of that program… The public face of that effort was at highproductivity.org (wayback link), but now it’s so dead I couldn’t even find enough references to its workshops to fact-check my dates.
If anyone’s interested in a readable overview of what the practice of software engineering in computational science looked like as of ‘08, this article by Victor Basili et al: Understanding the High-Performance Computing Community: A Software Engineer’s Perspective is a good start.
That’s a good article, thanks!
I’ve done this on and off for years, and I think it’s really useful. It’s great to be able to look back in your notes and remember the context behind a decision you made six months ago, or the exact command you used to generate some important plot.
I currently use org-mode, and have for a few years. For a long time I used it for notes and for tasks, and it was convenient to be able to add trackable tasks inline with notes. This was handy for meetings, etc. However, the lack of good mobile support made me switch to Todoist for task tracking. I still use it for notes, but I don’t have the ideal workflow. I like having things organized by date, but having a new org file for each day is unwieldy.
Prior to org-mode, I’d used VoodooPad ( how I used it ), with a new page for each day and automatic links set up to go back and forth between days. I liked the default linking and native text styling in VoodooPad, as well as image handling - not something I used often but when I did it was great to have.
After mostly moving off Macs for work, I had a similar day-based setup in emacs with notes-mode, which I liked, but it had minimal tooling, and I was lured into using org-mode by the agenda view. One nice thing about that mode was that by convention, it would link pages together by entries titled “Today”, which was a good place to put a todo list, and any reminders.
One downside of starting a new log on each day is that if you had a long-running topic that you wanted to keep adding notes to, you end up splitting it across days, or just editing previous days’ notes. This was one other reason I explored org-mode, since it is more naturally topic-based.
Ideally there would be a way to view the same notes in a topic-oriented view and in a date-oriented view, depending on what you wanted at the time. I would also like a tool that makes it as easy as possible to copy and paste terminal commands and output into my notes, like I could do in emacs with eshell if eshell weren’t so slow. Ideally it’d be something that was easy to at least add to from the various cloud instances I work on without a ton of setup. If GitHub made a notes service that synced live, I might use that, because I already have to have those creds most places.
Interesting!
On a non-technical note, I got a chuckle out of this honest description of the motivations for the project:
His students noticed how “important and famous” Zaharia suddenly became for creating Spark; Zaharia has since gone on to co-found Databricks and accept an assistant professorship at rival Stanford University.
“So the next generation,” Jordan said, “they said, ‘we’re not just going to give a project to the systems people. We’re going to do it ourselves.’”
All this Mac Pro discussion reminded me of this custom rackmount enclosure for Mac Pros: http://photos.imgix.com/racking-mac-pros
They plugged them into gaskets sideways, to fit them into a rack and keep the intakes on the cool side…
A lot of effort went into that, and not just the idolatrous photo shoot.
In grad school we had a reading club that announced papers a week before we discussed them. I don’t see why we couldn’t do the same on Lobste.rs.
I like this idea too - and this could just be as simple as a weekly post that announces next week’s paper and discusses this week’s paper. Were you thinking it’d need more features than that?
Although it’d be great if there was a way to integrate the annotation style of http://fermatslibrary.com/
Charging your own employees for the things they think they need to be productive feels like a bad way to try to involve the invisible hand of the market in your business.
(e.g. I sure hope you don’t employ anyone who has to spend most or all of their money on things they need to live (like medication, tuition fees, paying off debts) — or anyone who might be ‘disadvantaged’ in any sense of the term — because they might find they suddenly now can’t afford a meeting, even if their work honestly requires it!)
Do you also charge employees for the equipment they need to work? I get the “thought exercise”, but it’s certainly not sustainable, and might be actively harmful.
A few recurring ‘catch-ups’ were cancelled, in favour of on-demand discussions, only “if there was something specific to talk about.”
This sounds like a non-goal to me. In my experience, regular catch-ups are extremely important in keeping everyone on the same page without there needing to be a bad event to trigger it (i.e. two teams only realise they diverge on a spec when they can’t reconcile their changes, as opposed to it coming up naturally), and even more important in keeping the humans involved in the company feeling happy and connected. I hear that’s good for productivity.
Perhaps some internal currency would work. I think the key point here is making explicit that this is a trade-off. Everyone ‘gets’ it, but having a number somewhere, even simply putting on a board with everyone’s name and how many hours they have been in meetings, could be an eye-opener.
I was imagining it would be paid out of the different areas’ budgets, i.e. Project A has to pay $32 to Projects B and C for an hour of time for a few employees. But when I read it, I think they literally mean out of the employee’s wallet.
Making employees pay personally sounds like a very bad idea. But having departments or teams pay out of their budget is acceptable since it attaches a cost to using other people’s time without penalizing employees for their personal financial situation. I like your idea of transferring the money within the company. You don’t really “spend” the money. It just gets to be used by another department.
Unfortunately, two departments could just pass the money back and forth daily; it might need a “VAT/GST” percentage that goes to the end-of-year party too, to take some funds out of the system.
I think that idea would probably be just as effective, yes, without it impacting certain people disproportionately.
I remember seeing this idea on 43 folders back in the day - Meeting Tokens - Apparently they were on sale for a time, but the Mule store is gone now, so I guess it didn’t catch on…
In my experience the least productive meetings are the ones called by people who would just ignore tokens anyway, including customers & external partners.
I’m just very skeptical that it could work in most companies. Meetings are about power. Status meetings hold a double entendre, in that they exist to reinforce the (social) status of the person demanding the meeting. A small economic cost isn’t going to fix the flaws in human nature that result in too many meetings.
Sure, in the right kind of culture, it sets a tone of, “Hey, don’t waste people’s time.” On the other hand, in the wrong kind of culture, it will do absolutely nothing and possibly create a new trophy for management-types (“I spent $120 on meetings this week, how’d you do?”) if they follow the policy at all (and most management types will just say “fuck that” to this idea).
Besides, how does one define a meeting? Certainly, one shouldn’t expect the new developer to pony up every time he has to ask for help. That would just be abusive.
So, it seems like this might keep the plebs in line, but it’s not a real protection against management. In fact, the golden-child/“dotted line” types, the ones who tend to delegate and call meetings inappropriately but get away with it because of favoritism, will even be more effective at ingratiating themselves up the chain because they’re sacrificing something (“for the good of the team”) when they call meetings.
I feel like this (from this and a wider pattern of posts) is a single mental model applied to a complex system. It’s not that I think it’s a bad model to have (quite the opposite; I enjoy hearing it and it’s broadened my perspective some), but in situations like this it feels like it drives some pretty wild, unsupported claims.
Meetings are about power
I’ve been in a few of those meetings in my career but it’s been far from the norm.
The idea that blowing up your budget will make you look good because you’re “sacrificing something” sounds like it requires politically competent middle management with financially incompetent upper management. I don’t think that’s a stable configuration.
“Dotted line” types who insist on proper process being followed care a lot about their budgets.
I see where you’re coming from, but I wanted to call this bit out in particular:
regular catch-ups are extremely important in keeping everyone on the same page
My experience has been that, for technical matters, design docs (on some kind of wiki) are way better than meetings for keeping devs on the same page, along with corrective explanations from team leaders or coworkers. For nontechnical stuff, there is some benefit to having a more narrow definition of “everyone”.
For example, there is nothing more frustrating than being in a meeting listening to business people who both clearly don’t know what the fuck they’re doing (in regards to marketing, product design, or fundraising) and who are also repeatedly telling skeptical devs “don’t worry, we got this”.
That’s the sort of thing you can avoid by having like a team lead sit in on, and then report back to his team so they don’t have to lose morale first-hand. The inquisitive souls can find out more out-of-band, and everybody else gets more time to do development work or anything else that’s more fulfilling.
Lately I’ve been writing mostly in Swift using Xcode (I joined a project that’s in that language/environment), and it’s not easy, because this functionality is simply not supported. One stupid but effective hack: rename the variable to a dummy name and then recompile, which’ll give you an error everywhere the previous variable name was used. Don’t forget to rename it back! :-)
As an aside, this is my first foray into development in the Apple ecosystem in many many years, and I’m not super impressed (I’ve owned Macbooks, but only developed cross-platform Unix software on them). Traditionally, Apple had a reputation for at least two good things in their development ecosystem: 1) a coherent, consistent worldview and Appleish way of doing things, and 2) good, comprehensive technical documentation that gave not only bare information like function parameters, but also explained the design and the right way of doing things. So far I’m not finding this to be the case with their contemporary tools. Xcode is not a good IDE, and the developer documentation for Swift is quite poor (especially the platform APIs; the core language documentation is ok).
Interesting. I used to spend a lot of my “free time” in Xcode, but stopped long before Swift came around.
I’m not too surprised that the usability of Swift is lagging a bit, from my experience working with ObjC and Cocoa frameworks back around 2002-05, the documentation was pretty poor for a lot of their frameworks at first - the reputation you cite was won over years of effort on their part.
Sigh. My Sun Type 7 USB keyboard died last month. It wasn’t ergonomic, had too many keys, but it was quiet, and I loved it. At least I still have the stapler.
I cheaped out big time and replaced it with an AmazonBasics $9 special, and it is awful. Loud and cheap. Key spacing feels weird. Don’t get one. (Their mouse is fine though)
A lot of the time you can re-use the switches and matrix by wiring in a custom USB microcontroller, if you’re handy with a soldering iron: https://blog.lmorchard.com/2016/02/21/modelm-controller/
why working fewer hours is better for you and your employer
…
Tell your manager “I am going to be working a 40-hour work week, unless it’s a real emergency.”
My first reaction was yikes, 40 hours is “fewer hours”? But then I tried to actually sit down and calculate how many hours I spend doing work.
I’ll typically spend ~30 hours a week at work - in the office. Maybe 25 of those hours go to actual work tasks; take away meetings, which vary in quality, and I’m probably somewhere around 15 hours of core productive time.
But then I tried to add in the time I spend at home shooting emails around, thinking about problems in the shower… it’s a mess.
How do smart people keep track of this? I’d be curious just to get a baseline of how much I actually work.
I’ve used a variety of time-tracking apps over the years, starting in grad school. RescueTime for a while, Emacs' Org Mode time tracking for a shorter while (a little too labor intensive), and while there’s always something interesting to see - I usually find out that I spend more time reading the news in the morning than I think - I think there’s no great way to get a complete picture without a lot of discipline.
If you just want a rough idea of how long in the day you’re at your work computer, something like RescueTime or QBserve is good, but finer grained insight is hard to get and stick with, IMO.
I also noticed that my motivation to track my time goes up when I’m anxious about not getting enough done, and completely goes away when I’m engaged and feeling productive. So I never stick with anything for very long. And interestingly, I never once felt like I needed to do this while working at an office for a big company, even on days when I wasn’t productive. But as soon as I started working from home, I felt like I needed to track my chairtime again.
And interestingly, I never once felt like I needed to do this while working at an office for a big company, even on days when I wasn’t productive. But as soon as I started working from home, I felt like I needed to track my chairtime again.
This is why I’m a little wary of taking up remote job offers.
Hmm, this is interesting. Where I work, we aren’t expected to be keeping track of emails and stuff outside of work hours, and for that reason I rarely bring my work laptop home. Occasionally (once every couple months or less), we might do a server upgrade or major deployment that needs us to be connected remotely for an hour or two during the weekend.
As far as thinking about problems in the shower goes, I try not to allow myself to mull problems over when I get home at night. After work hours, I have other responsibilities, and thinking about my employer’s problems are not one of them.
Out of curiosity, do you work remotely? Is the company you work for small? Neither is true in my case, which I think makes it easier for me to separate work and life.
Fossil got one very important idea right: the repo, wiki, bug tracker, and website are really all part of the same package.
Canonical tried to do this with Launchpad and bzr, and Mercurial has a serviceable built-in webserver, but no one else really decided that all of these things were part of the same deal. Nowadays I guess GitLab comes closest, although it does keep git as a separate component, sort of.
I completely agree that having a one-stop web address for everything is incredibly useful, but honestly, the main benefit is deep cross-linking—and there are lots of other ways of achieving that. E.g., Phabricator is actually my favorite in this space: it provides a one-stop shop for bugs, boards (think Trello), asset management, password management, pre- and post-commit code reviews, CI, and more—and supports Mercurial, not just Git. (I’ve been incredibly impressed by its very low maintenance burden, too—something I think is really underrated when people consider their development tooling.) GitBucket, Redmine, Trac, and others would also fit the bill.
But Fossil’s insistence on making everything distributed causes some unique problems. E.g., what’s it mean to merge a bug if I edit it and you close it? Is it closed? Still opened? Reopened? Does your answer change if I tag a commit to the case? Etc. I think this is part of why Vault failed: the user model gets too complicated.
One-stop shops are great, but I’ve never been a huge fan of Fossil-like designs.
But Fossil’s insistence on making everything distributed causes some unique problems. E.g., what’s it mean to merge a bug if I edit it and you close it? Is it closed? Still opened? Reopened? Does your answer change if I tag a commit to the case? Etc. I think this is part of why Vault failed: the user model gets too complicated.
If bugs were a tracked object like code changes, the answer would be easy: user action would be required.
Also, this brings some nice advantages: if a dependency between the patch and the closing of the bug can be expressed (making it impossible to close the bug without merging the patch), at any point, your bug tracker is in sync with your code state.
Fossil got one very important idea right: the repo, wiki, bug tracker, and website are really all part of the same package.
I agree. It is very convenient to have all those features in one binary. I learnt about Fossil in 2010 listening to BSDTalk podcast #194. If I remember correctly, Richard Hipp talked about his intentions with Fossil: not to become the most popular VCS but to serve as an example to others (maybe the popular ones) with its most innovative ideas. He created Fossil to scratch his own itch: version control for SQLite development. Curiously enough, Fossil also uses SQLite under the hood.
Here are two more recent interviews (2015):
Too bad the Bugs Everywhere project didn’t catch on. I wanted to see where it would go, especially seeing where Fossil went.
Nah, SD is a far better model. Much like Fossil, it manages distributing a database to everyone, so it “feels” centralized, and the longer you’re offline, the more out of date your information and changes are.
I think the “put text files in git” model for bug tracking is a completely whacko way of tracking bugs, and produces really weird side effects. Software can definitely handle the syncing easily for information with this strict of semantics.
Is SD still active? Also what are your experiences with it? The ability to sync between different existing bug tracking systems seems appealing to me.
I was curious too, SD sounds like a good idea at least on the surface. However, it looks pretty dead, their mailing list is empty since 2013, and the repo is dusty: https://github.com/bestpractical/sd/tree/master
It still works, but the “connectors” have mostly bit-rotted (except for a version of the jira one I hacked up about 6 months ago to get working). It’s a very unfortunate end, for it could have been the chosen one :(
That’s unfortunate (I was also interested to see how it worked out), but thanks for pointing SD out.
I used the b extension for Mercurial a few years ago, before eventually switching back to a regular TODO file.
My biggest gripe with be was that it just dumped its database into git, in a format that wasn’t easily usable without the be tooling. Also, it didn’t use any of the particular features of the VC systems; e.g. making a patch a prerequisite for closing a bug (as described in my other comment) would have been easy in Darcs.
Sounds like a good time to finally set up my bouncer. If only there were one that had good Emacs compatibility.
I just run weechat on a server and connect to the weechat relay with weechat.el. There’s a few bugs in weechat.el (e.g. nicks go out of sync) and some things missing (e.g. nick list), but that’s a small price to pay for replacing another standalone app with emacs :)
I did this at the beginning but quickly switched over to ZNC because of bugs like that, the inability to have per-client history rollback, and other little details… I still use Weechat half the time on the client side though :) (I also use Textual on macOS, and Palaver on iOS).
ZNC is what I use with ERC.
I’ve been trying to set this configuration up for half a year now, but I never get anything I’m satisfied with. The ZNC documentation is quite bad and confusing, imo. And when I manage to set it up, even using ZNC.el, it won’t work with IRCnet. Switching between multiple servers is another annoyance.
But maybe I’ve just messed up somewhere.
I used to use znc, seemed to work just fine with ERC.
Now I use weechat (a bit more features, nice Android app), again with ERC. There is weechat.el, but I prefer ERC (connecting to what weechat calls an “irc relay”, instead of using the weechat protocol). I use https://gist.github.com/unhammer/dc7d31a51dc1782fd1f5f93da12484fb as helpers to connect to multiple servers.
I’ve used ZNC with Circe; it works great.
What did you find in Circe that made it better than ERC or Rcirc?
In case it’s useful - I used to use ERC, and I switched to Circe long enough ago that I can’t exactly remember, but I think the issue was that I wanted to connect to both freenode and an internal IRC server at the same time, and ERC made that awkward or impossible to do. It may well have improved in the last 5 years though.
It was easy for me to set up and use, so I stick with it. I’ve never tried the other two.