It’s funny, I still remember my designer coworker at the time (at a design agency) getting his new Mac with OS X in summer 2001 (about 3 months after release; he had already upgraded at home), and I got to play with it for an hour or two (I suppose that was quite a while before most of the hardcore Apple fans I know these days). I only knew OS 9 from a few hours of “check if this website looks good”; otherwise I was a Windows/Linux person.
I was absolutely not blown away. Everything looked nice and shiny and just a little cooler than Windows 2000, and definitely a step up from OS 9. As usual, my gut reaction was absolutely no predictor of the success of a product; I wouldn’t have imagined how people would rave about OS X years later and how much better it was than anything else on the market. To be fair, the “BSD under the hood” part wasn’t clear to most of us at that point; I’m not sure I even launched a terminal.
I think I started using vim bindings (not necessarily vim proper most of the time) in earnest around 7 years ago, before that I still used it every time I was on a Linux/BSD console, but not for real work. Since then I’ve always had it enabled in IDEA or QtCreator.
I wouldn’t say I’m using its full potential; I hardly use the non-word modifiers (from his examples: daw/dw yes, cas never). On the other hand, the line handling of normal editors has never really clicked with me (e.g. using Home/End and Shift+Up/Down to delete and move lines). I also hate hjkl, but I’m not a ten-finger touch typist; I have my own weird self-taught few-finger touch-typing system, as many programmers do. (It works fine after a week or so with a new keyboard.)
I’m still planning to learn how to properly use Emacs at some point, but so far I’m 100% happy with the vim way and I feel a lot quicker than I was before. I’ve switched editors every few years, though, and some keybindings are different every time, or an editor might not even have a certain feature (like the multiple-cursor thing for editing many lines; I didn’t know that one before Sublime).
I have my own weird self-taught few-finger touch-typing system, as many programmers do. (It works fine after a week or so with a new keyboard.)
I’ve never heard this. Do you know of any survey or writeup on that kind of thing?
No, just scour HN or any discussion board for threads about touch typing. I know I’ve read it often and see a lot of coworkers do it. Relatively fast, not looking at the keyboard and definitely not using 10 fingers :)
So yes, anecdata. Maybe “many” was misguided, sorry.
The attitude of not reflashing your device with independent software (not even saying anything about FLOSS here) is just childish. If the device is going to be returned, I’ll just flash it back to stock firmware (+ re-lock the bootloader in some edge cases) and it will look like nothing ever happened.
If you care about warranty, don’t worry (at least in the EU): it’s not void unless the manufacturer can prove your custom FW did some physical damage to the phone (which is pretty much impossible these days, except for Samsung Knox eFuses, but let’s not talk about that).
My Nexus 4 once broke. I reflashed the stock OS but forgot to relock the bootloader. The store refused to warranty my Nexus 4 (fun fact: you can’t ship it to LG; the manufacturer pushes the warranty to the store, which is fucking bullshit).
New Zealand has consumer protection laws and my coworkers told me I shouldn’t let the shop get away with it. I had to go to court, had two hearings, and eventually the arbitrator found in my favor and awarded me the $400 for the phone.
It’s pretty bullshit I even had to go through that process. You can install Linux, FreeBSD, etc. on your Windows laptop and not void the warranty. The US FTC recently put companies on notice for their warranty stickers.
But that’s part of the point I guess. If it’s a work phone and you -like the author- don’t really want to customize it 100% to your needs (with your apps) why bother with flashing another OS?
On the other hand looking at what the end result of that phone was - how will the author use it? Only websites? Only phone calls? Then it really wouldn’t matter to me, with that usage pattern I wouldn’t even have a preference of Android or iOS or Windows Phone I guess.
It’s not about the warranty; root or a custom ROM is not allowed by the owner of the phone, so it’s not allowed during my usage of it.
In other words - not my device not my rules ;)
I’ve read the README twice and I’m still confused. Why would you use or need this? Is it decoupling of sending and receiving mails to a “list”? This lets you send to the list without being subscribed and read it via not-email? Did I get that right?
Also:
“If a reader loses interest, they simply stop syncing.” How is stopping an NNTP sync easier than unsubscribing?
These seem to be the advantages:
However, to be fair, some mailing lists already provide mbox files of the archives that you can download and browse with a local email program.
Thanks, especially that second point makes sense. What I usually do is make a bookmark on my bookmarks bar for that and delete it again after a week or month. But I do that so rarely that it’s a good enough hack.
Also I wonder if (hey, it’s 2018) the number of people actively using NNTP isn’t so low by now that this is kinda moot. Also, I always get the ML-hate vibe not from people old enough to be staunch NNTP fans, but from younger people. So are the “MLs are ok, but NNTP is better” folks really even fewer than the “I only use NNTP” ones, both hardly noticeable against everything else? ;)
I consider this a feature:
git clone --mirror https://public-inbox.org/meta/
It creates a local copy of the mailinglist named “meta”. The local copy contains all the metadata you may need to restore the “meta” mailing list if it gets censored, for example.
That guide you linked looks really good, it has a lot of links to official docs, this is the format I love.
I always hear praise for the OpenBSD man pages and I don’t want to diss them, but I’m really, really bad at digesting information in this form. Take for example https://man.openbsd.org/pf.conf.5 and https://man.openbsd.org/pfctl.8.
It may contain everything one would want to know, but the dots don’t connect. So my usual way for anything I’ve never worked with would be:
I know different people need different things, but give me a list of facts and I’m still clueless. But by “tutorial” I don’t mean those ultra-introductory Ubuntu tutorials you find everywhere. I’ve used a lot of flavors of Linux and BSD over the years; I just need the correct pointers, neither a basic howto nor a list of a million things I’d need months down the road.
Another point: man pages are usually detailed descriptions of one thing, sometimes with helpful cross-links, but they usually don’t show you how to connect the dots, or the order in which to proceed. That’s what good guides tell me: not really how to do it exactly, but how to proceed with which tools, an interactive glossary so to speak.
Does Linux even have a real market share?
Some quick googling gives me https://netmarketshare.com/linux-market-share - it says < 3%, and the other results also all say < 5%. And I think Ubuntu is about as “kinda mainstream” as it gets.
Sure, 5% is a lot more than 0.5% but maybe the people using Linux at work still have to fight enough to not have to use Windows or OS X, in most positions it’s not worth the fight. Even in my most “I can do whatever I want” positions I still had a pretty hard requirement on either VirtualBox for local virtualization or writing Java code or using Google Hangouts daily (ok, these last two might work flawlessly on FreeBSD) or some other stuff that only works on Linux without pain. In my current job it’s again some stuff I’m not sure I could get to work easily on FreeBSD, and I’d say I’m a lot closer to using a BSD again than others who have no experience and have never tried it.
Not that I’m against federation, but so far my experiences weren’t great. For email I can change servers on a whim, or at least do forwarding with a very simple SMTP setup; same for XMPP. For Mastodon on the other hand… meh. Realistically it’s running your own instance forever, or changing handles, or joining one of the big ones where you don’t own your identity. I actually prefer non-federated systems where I kinda trust the admins more than bad federation (based on my criteria).
Changing servers for XMPP or SMTP also usually involves “changing handles”… unless you mean pointing your domain at a new server, which you could do for the Fediverse as well…
Really? Last I checked Mastodon didn’t really support it and also, how would you even find an instance that accepts your “hostname” and thus would automatically host your users (sure, sometimes it’s only one).
I’m running pleroma now but used to run Mastodon and even if it’s theoretically possible I’d still say it’s miles more hassle than email. Sure, if I had friends I could just ask to “care about my domain”.. but hardly anyone runs those. So, yeah, maybe it’s “just” a numbers game and everything’s not shaken out or still badly documented, but especially the forwarding part would be nice and afaict doesn’t really make sense in this context.
Mastodon semi-recently added profile redirects. They don’t actually forward your messages or followers or anything, but they’re at least an official way of marking an account as having moved elsewhere. Partial solution obviously, but thought I’d note it, because I only recently found out it existed.
Oh, instances people are running may not currently support it, similar to how most public mailservers won’t let you host a custom domain with them.
This is a social problem, and one worth working on.
Actually I think the only thing I’m not self-hosting is one Wordpress blog at wordpress.com (don’t want to have my real name associated with it, but it’s only a gaming blog, nothing super secret).
What has your experience hosting your own email been like? I’ve idly considered it, but it’s a famously unfriendly service to deal with (spam, major providers deciding your messages are spam, all the onerous metaprotocols to combat spam) and I’m happy with Fastmail’s service.
I’ve been hosting email myself for 15+ years. Postfix made it easier to configure (Sendmail was… complicated, in comparison, in my opinion.) Dovecot works really well for IMAP/POP3. Finally Let’s Encrypt allows you to get a nice certificate relatively easily.
Greylisting helped a lot to reduce spam, but spam is still a nuisance - especially if you don’t have good filtering in your mail client (I’m using crm114).
Setting up SPF, DKIM and DMARC can be a little complicated, but it seems to work fine, as long as all email from your domain is sent from a well defined set of IPs.
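For anyone looking at this, the DNS side of SPF/DKIM/DMARC is three TXT records. A rough sketch (example.org, the selector “mail”, the IP, and the policies are placeholders, not a recommendation):

```
example.org.                  TXT  "v=spf1 ip4:203.0.113.25 -all"
mail._domainkey.example.org.  TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"
_dmarc.example.org.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.org"
```

The DKIM key pair itself comes from whatever signer you run (e.g. opendkim-genkey), and it’s common to start DMARC with p=none and tighten it once the reports look clean.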
I’ve not had many problems, but there’s a bit of luck of the draw in getting a clean IP. I have SPF and DKIM set up (not DMARC), with the self-signed certificate that Debian auto-generated, and that seems to be enough to get mail delivered to the big providers.
For incoming spam, I reject hosts that: 1) lack valid RDNS, or 2) are on the Spamhaus ZEN RBL. This seems to catch >95% of spam. Minor config hint if you’re using the free Spamhaus tier: you need to set up a local DNS resolver (I use unbound) so you query directly, otherwise your usage gets aggregated with whoever else is using your upstream DNS, which probably exceeds the free tier.
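A minimal Postfix sketch of those two checks (assuming Postfix; the exact restriction list and ordering is a matter of taste, so treat this as an illustration, not a drop-in config):

```
# /etc/postfix/main.cf (sketch)
smtpd_client_restrictions =
    permit_mynetworks,
    # 1) reject clients whose IP has no (or a broken) reverse DNS entry
    reject_unknown_reverse_client_hostname,
    # 2) reject clients listed on the Spamhaus ZEN blocklist
    reject_rbl_client zen.spamhaus.org
```

With unbound listening locally, pointing /etc/resolv.conf at 127.0.0.1 is enough for the RBL lookups to go out directly instead of through a shared upstream resolver.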
Like the other commenters, I use Postfix, which is reasonably nice, and has good documentation.
Mostly positive. I had that discussion this morning on IRC, so I’m gonna quote myself to not retype everything:
[...] on a "decent" hoster blacklist-wise and not DO or something
and it's been running for 10 years, I don't seem to have the typical
"gmail thinks I am spam"-problem
usually.
interestingly I had it yesterday when sending something to myself
but dunno, empty body, 25mb-video.. who knows
I hardly use my gmail-account
But thinking about it, sending a job application in November ended up in the spam folder for 2 people and I only got a reply once I pinged them out of band. That was a shitty experience, but as I hate using GMail I prefer this to a years-long shitty experience using it :P
If I was to “start over” these days I might go to a dedicated email hoster like FastMail, but I think it’s just too expensive. I have 4 people with their main email addresses on my server and it costs me 10 EUR /month and I get to host other “communication” services as well. For FM it would be 15-20 USD per month and I still haven’t found out if I could use “a low amount of” domains and not just “use your own (one) domain”. Sure it takes some maintenance work, but it’s part hobby, part learning experience and part keeping in touch how stuff is done as it touches on my dayjob, depending on which role at what company I do. (Been running my own mailserver for roughly 15 years I guess)
if I could use “a low amount of” domains and not just “use your own (one) domain”.
You can; I have 5 domains* under a single one-user account. It’s explicitly spelt out here: https://www.fastmail.com/help/account/limits.html
№ domains 100, plus 1 for every user in the account
* One with my AFK name, and four domain hacks, which I have a guilty pleasure of buying ;-)
This all sounds so nice and falls so damn flat if you’re not operating at scale.
Simplest example: for Chaos Monkey to be useful you first have to have at least n+1 instances running (with n being what you need for regular operation).
Sure, if you’re maybe hosting a web app it’s easy to just put another frontend node there or another load balancer, but especially for some databases it gets hairy real quick (master/master vs master/slave vs cluster). And getting all your infrastructure to HA (whatever amount of nines be your target) is so often just not worth it, especially at a not-yet-profitable startup.
I’m still waiting for a [any sensible/modern language]-to-bash transpiler that will be nice enough to use to become widespread.
Me too. I might attempt this at some point as an extension to shellharden.
Possibilities for a sensible/modern language:
$var → "${var[@]}"), but so be it. I’m a daily fish user, btw.

KeePassX, synced one-way via Syncthing (i.e. I only ever edit the master and periodically sync downstream to laptops, phones, etc.). For generating new passwords on a non-main computer I use whatever method to remember them and then manually enter them there. Good enough for me.
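For context on the $var → "${var[@]}" fragment above: unquoted expansion in bash undergoes word splitting, which the quoted-array form avoids, and that rewrite is exactly the kind of thing a transpiler (or shellharden) has to get right. A tiny illustration:

```shell
#!/usr/bin/env bash
# An array whose elements themselves contain spaces.
args=("first arg" "second arg")

# Unquoted expansion: word splitting breaks the two elements into four words.
printf '%s\n' ${args[@]} | wc -l     # prints 4

# Quoted array expansion: each element survives as a single word.
printf '%s\n' "${args[@]}" | wc -l   # prints 2
```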
I only remember what was used on the rainbow iMacs and G4 towers just before OS X hit in ~2000/01. Probably OS 9, maybe OS 8. It wasn’t fun, they were constantly crashing - and I was only using them twice a day to check how my HTML looked in Internet Explorer 5.5 for Mac.
I’ve been using Matrix as a glorified IRC bouncer for over a year, it’s pretty good, but Synapse still occasionally chokes on “forward extremities” and becomes completely unresponsive so you have to run a SQL query to clean up and wait for a while for it to become responsive again :(
The worst offenders seem to be IRC-bridged rooms with a high join/part turnover, such as #mozilla_#rust:matrix.org, #mozilla_#rust-offtopic:matrix.org, and #haskell:matrix.org.
Riot-web has been fast enough for me, but I prefer Fractal, because GTK :)
Bridges are also choking (and getting out of sync) in low/moderate-traffic 200-user channels where 90% of people don’t rejoin because of bouncers. I still haven’t really seen an advantage.
It’s one of the big issues where no alternative for IRC really exists yet.
Riot also starts choking once rooms grow beyond a few thousand members that join and part constantly, while even the simplest IRC clients handle that fine.
It’ll be interesting to see how this develops in the coming years, but for now it looks like Matrix just isn’t quite ready to replace IRC yet.
From the client/user point of view, Riot is certainly as optimal as it is suboptimal. It is fairly usable and nice, but also incredibly resource-hungry and slow at times. I would like to see more native clients (in particular console clients), but this would certainly increase friction in terms of client support for features and changes.
This also extends to the operational point of view: it’s not just that Matrix/Synapse is simply slow at times, it’s that the design is by default way more resource-intensive than IRC. An ircd requires basically nothing in terms of resources to serve quite a sizeable number of users. Synapse, on the other hand, requires quite a lot of CPU power in addition to a metric ton of space in its database (especially if your users join large rooms). Joining the main Matrix channel is almost certain to cause hours of full CPU usage and increase the DB size by a few hundred MB.
Of course Matrix and IRC provide different feature sets, but right now I feel that Matrix may never be ideal for large group chats simply by design. I can’t quite see how rooms like the Matrix main channel will ever be “ok” for a Matrix server.
All this being said, Matrix works nicely for one-on-one and small group chats, which is what most of my users do.
The actual design of the Matrix spec doesn’t have any issues that I have seen, but the current software seems more like a prototype in production. Hopefully Dendrite and some updates to Riot can speed everything up, because that’s one of the main issues I see with it now.
Yeah, that’s what I’ve seen so far, too. The spec is great, but the implementation is rather meh. Which means that at least it should be easy to fix later on.
The spec does require a lot more resources than IRC, though, specifically in the form of maintaining logs and allowing searching of them. I wouldn’t be surprised if other implementations/settings come out that auto-expire logs after a month or something (I don’t think that necessarily violates the spec, and it’s pretty handy for GDPR).
We also do log storage and full-text search in the Quassel bouncer (and its ecosystem), and yet we don’t have nearly as many performance issues as Matrix does.
This is mostly an implementation problem, I’m sure it can be fixed over the years.
I have been using Fractal as well. I like the GUI but it does seem to cause high CPU usage. It also doesn’t support end-to-end crypto yet.
Just tried Fractal on macOS. Amazing (and a bit horrible) that it looks exactly like GNOME. Perhaps somebody (me?!) will make a decent version in the future, though.
Not very helpful non-answer incoming, more of a “me too” ;)
My go-to conference every year (since 2013, with a break) is FOSDEM, so I’m planning this for 2019.
I went to PolyConf in 2017 and loved it, but apparently it’s not happening this year :( (still hoping, maybe in the late fall?)
So I’m also a bit unsure if there’s something interesting for me; I’m not doing PHP anymore (the unconference in Hamburg was always excellent) and I’m not doing Go anymore…
Chaos Communication Congress would be nice, but I’ve no time for it this year.
I totally forgot about FOSDEM, what a great conference for open source software. But I was not too happy with the technical infrastructure of the ULB in Brussels; often microphones didn’t work or the projector resolution was too small to make meaningful demos. This is the reason I didn’t go there this year, but I will give it a second chance in 2019.
Despite all Node.js-related content in this article, is anybody here using Snap or Flatpak already?
I used FlatPak to install one or two Desktop apps but I wasn’t impressed. That could be the packagers’ fault or the system’s.
All real problems aside, it’s also a bit annoying that you seem to have to run the applications with a long command line that I kept forgetting; maybe just providing a directory with shims/wrapper scripts with predictable names would’ve gone a long way (I mean, /usr/local/bin might be debatably OK as well).
My solution for non-GUI-heavy things so far has been nixpkgs, so I can for example run a brand-new git or tmux on Ubuntu 16.04.
You might also be interested in checking out Exodus for quickly getting access to newer versions of tools like that. It automatically packages local versions of binaries with their dependencies, so it’s great for relocating tools onto a server or into a container. You can just run
exodus git tmux | ssh my-ubuntu-server.com
and those tools are made available in ~/.exodus/bin. There’s no need for installing anything special on the server first, like there is with Snap, Flatpak, and Nix.
Thanks, I’ve heard about exodus but I think it’s a bit of a hack (a nice one though) and first I’d need to have those new versions installed somewhere, which I usually don’t :)
I’m actually a big fan of package managers and community effort - just sometimes I’m on the wrong OS and would have certain tools in a “very fresh” state - so far nixpkgs is perfect for me for this.
I use snap for a few things, and have even made a classic snap or two of some silly personal stuff. They seem to work fine, but ultimately feel out of place due to things like not following XDG config paths. They also get me very little over an apt repo, or even an old-school .deb, since most of the issues (e.g. you must be root) remain. Generally speaking, given that Linux distros already have package managers, I’m more interested in things like AppImage, which bring genuine non-package but trivial to install binaries to Linux.
(What I really want is to live in a universe where 0install took off, but I think that universe is gone.)
Reminded me I’ve had Google Analytics code up on my blog since forever for no benefit for me whatsoever. Off it goes!
Kudos for removing it but I am curious how Google Analytics ends up running on so many sites to begin with?
It’s free, it’s very easy to setup and understand, and there is a lot of documentation out there on how to integrate it into different popular systems like Wordpress. It’s definitely invasive, but it’s hard to deny that it’s easy to integrate.
Not as easy as doing nothing though… it’s free and easy to crawl around on all fours… that can be invasive too if you crawl under someone’s desk… but this still leaves the question why.
Because a lot of the time when you’ve just made a site you want to see if anyone’s looking at it, or maybe what kind of browsers are hitting it, or how many bots, or whatever, so you set up analytics. Then time passes, you find out what you wanted to find out, and you stop caring if people are looking at the site, but the tracking code is still there.
I’d compare it to CCTV cameras in shops. You visit the shop (the website) voluntarily, so the owner can and will track you. We can agree that this is a bad thing under certain conditions, but as long as it’s technically trivial it will be done. No use arguing with what is; you’d need a face mask or Tor to avoid it.
That said, I’d also prefer if it wasn’t Google Analytics on most pages but something that keeps the data strictly in the owner’s hands. I can wish for it to be deleted after a while all I want but my expectation is that all the laws in the world won’t change that to a 100% certainty.
End-user-facing SaaS products are one thing. On a site I run on infrastructure that I run myself I can just look at the httpd logs¹ and doing so is way faster than looking at GA², but if I also bought a dozen other random SaaS products then the companies that run those won’t ship me httpd logs, but they will almost always give me a place to copy-paste in a GA tracking <script>. If I have to track usage on microsites and my main website, it’s nice if the same tracking works for all of them.
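As a concrete (made-up) illustration of the httpd-log route: with the standard Common Log Format, a per-path hit count is a one-liner, no GA required:

```shell
# Count hits per request path in a Common Log Format access log.
# $7 is the path inside the quoted request line: "GET /about HTTP/1.1".
awk '{ hits[$7]++ } END { for (p in hits) print hits[p], p }' access.log | sort -rn
```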
It has some useful features. I believe, offhand, that if you wire up code to tell it what counts as a “conversion event”, GA can tell you out of the box things like “which pages tended to correlate positively and negatively with people subsequently pushing the shiny green BUY NOW button?”
There’s a population of people familiar with it. If you hire a head of marketing³, pretty much every single person in your hiring pool has used GA before, but almost none of them have scraped httpd logs with grep or used Piwik. (Though I would be surprised if they didn’t immediately find Piwik easy and pleasant to use.) So when that person says they require quantitative analysis of visitor patterns in order to do their job⁴, they’re likely to phrase it as “put Google Analytics on the website, please.”
(¹ GA writes down a bunch of stuff that Apache won’t out of the box. GA won’t immediately write down everything you care about, because you have to tell it what counts as a conversion if you want conversion-funnel statistics.)
(² I have seriously no idea whatsoever how anybody manages to cope with using GA’s query interface on a day to day basis. It’s the most frustratingly laggy UI that I’ve ever used, and I’m including “running a shell and text editor inside ssh to a server on literally the opposite side of the planet” in this comparison. I think people who use GA regularly must have their expectations for software UI adjusted downward immensely.)
(³ or whatever job title you give to the person whose pay is predicated on making the chart titled “Purchases via our website” go up and to the right.)
(⁴ and they do! If you think they don’t, take it up with Ogilvy. He wrote a whole book and everything, you should read it.)
The book is “Ogilvy on Advertising”. It’s not long, the prose is not boring and there are some nice pictures in it.
The main thing it’s about is how an iterative approach to advertising can sell a boatload of product. That is, running several different adverts, measuring how well each advert worked, then trying another set of variations based on what worked the first time. For measurement he writes about doing things like putting up different adverts for the same product, each with a different discount code printed on it, and then counting how many customers show up using the discount code from each advert. These days you’ll see websites doing similar things, like using tracking cookies to work out the conversion rate of each advert they ran.
Obviously the specific mechanisms they used for measurement back then are mostly obsolete now, but the underlying principle of evolving ad campaigns by putting out variations, measuring, then doubling down on the things you’ve demonstrated to work is timeless.
Ogilvy also writes a little bit about specific practical things that he’s found worked when he put them in adverts in the past, such as putting large amounts of copy on the advert rather than small amounts, font choice, attention-grabbing wording, how to write a CTA, black text on white backgrounds or vice versa, what kinds of photos to run and so on. Many are probably still accurate because human beings don’t change much.
Many are plausibly wrong now because the practicalities of staring at a glowing screen aren’t identical to those of staring at a piece of paper. If you’re following the advice in the first bit of the book about actually measuring things, it won’t matter much to you how much is wrong or right, because you’ll rapidly find out for yourself empirically anyway. :)
Hypothetically, let’s say you’ve done a lot of little-a agile software development: you might feel that the evolutionary approach to advertising is really, really obvious. Well, congratulations, but not all advertising is done that way, and quite a lot of work is sold on the basis of how fashionable and sophisticated it makes the buyer of the advertising job feel. Ogilvy conveys, in much less harsh words, that the correct response to this is to burn those scrubs to the fucking ground by outselling them a hundred to one.
For me it was probably ego-stroking to find out how much traffic I was getting. I’ve been blogging for more than a decade and not always from hosts where logs were easily accessible.
What gets me is why people care about how many hits their blog gets anyway. If I write a blog, the main target is actually myself (and maybe, MAYBE, one or two other people I’ll email individually too), and I put it on the internet just because it is really easy to. Same thing with my open source libraries: I offer them for download with the hopes that they may be useful… but it really means nothing to me if you use it or not, since the reason I wrote it in the first place is for myself (or again, somebody who emailed me or pinged me on irc and I had some time to kill by helping them out).
As such, I have no interest in analytics. It… really doesn’t matter if one or ten thousand people view the page, since it works for me and the individuals I converse with on email, and that’s my only goal.
So I think that yes, Google Analytics is easy and that’s why they got the marketshare, but before that, people had to believe analytics mattered and I’m not sure how exactly that happened. Maybe it is every random blogger buying into the “data-driven” hype thinking they’re going to be the next John Rockefeller in the marketplace of ideas… instead of the reality where most blogs are lucky to have two readers. (BTW I think these thoughts also apply to the otherwise baffling popularity of Medium.com.)
Also, it’s invasive, sure, but it’s also fairly high-value even at the free level.
You get a LOT of data about your users from inserting that tracking info into your site.
Which leads me into my next question - what does all this pro-privacy stuff do to such a blog’s SEO?
(I know, I know, we’re not supposed to care about SEO - we’re Maverick developers expressing our cultural otherness and doing Maverick-y things…)
Oh, it totally tanks SEO.
Alternately, the SEO consultants that get hired by biz request to have GA added anyways and they force you to bring it in. :(
It sounds silly to seriously try writing a new UI library. It would take an enormous amount of time which I didn’t have with a full-time job.
That’s been the problem I’ve seen with half of the UI libraries I’ve looked at. It is silly. “Halfway” is usually not good enough. If a key part you need for your interface (let’s say “dropdown boxes”) is not implemented or buggy it’s sometimes (for small projects) easier to just switch libraries instead of now becoming part of the upstream of your UI framework :(
That said, it looks good and I hope it gets somewhere :)
Wanted to try it, but it’s a bit of a hassle to install the way you provide it. See e.g. https://pinboard.in/howto/#saving for how you can get people to try it more easily (just drag the link to the bar); also, you could minify the JS for this case.
Also it doesn’t do anything for me on Chrome :( Plus you should probably use https for the google URL.
Bit late to the party, but I’m in an IRC channel where a good portion of people have been using Riot due to an available bridge and seem pretty happy. Whereas I only see the problem of constant “hey, can you read me, I don’t see your messages” or the 12h lag, which totally emulates videoconferencing. Might be better on mobile if you’re on the go, like at a conference (where we tried everything from IRC to Hangouts Chat to Signal… I think Signal was best so far at keeping ~10 people synced over the weekend…).
Lumping together Bootstrap and Angular as Web frameworks was definitely an interesting choice…