PSA: The gmail HTML interface still exists and works fine. Even in your favorite text browser like links+ (links2 in some package managers).
Thanks for painting the picture. I knew about the same bits and pieces of this problem but never really grasped it.
This should be fixable, no? I might hack at it this weekend. Or this evening because it’s bugging me now.
There are two bugs really.
https://github.com/mpaperno/spampd/issues/19
rcctl directly? There’s gotta be a reason for that behaviour. That’d probably involve hitting up their mailing lists or getting on IRC. I might jump on IRC later today. SpamPD is pretty small and doesn’t explicitly do anything with HUP. The processing, and the logging of the restart, happens somewhere in Net::Server(::PreForkSimple). SpamPD might need to capture HUP itself to control the restart of the server.
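If SpamPD does need to take over HUP itself, the general pattern is small. Here is a minimal sketch in Python (SpamPD itself is Perl and built on Net::Server, so this only illustrates the idea, not its actual code):

```python
import os
import signal
import sys

# Catch SIGHUP ourselves so the serving framework's default restart
# behaviour never fires; instead we re-exec the process with its original
# arguments, which re-reads configuration on startup.
def handle_hup(signum, frame):
    os.execv(sys.executable, [sys.executable] + sys.argv)

signal.signal(signal.SIGHUP, handle_hup)
```

The key point is that whoever installs the handler wins: once the daemon registers its own SIGHUP handler, the framework’s restart path is bypassed entirely.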
Given that most popular email clients these days are awful and can’t handle basic tasks like “sending email” properly
I agree with the sentiment in general, but once you’re in the position where everybody else does it wrong and you’re the last person on the planet doing it right, maybe it’s time to acknowledge that times have changed, that the old way has been replaced by the new way, and that perhaps it is you who is wrong and not everybody else.
And I’m saying this as a huge fan of plain-text only email, message threading and inline quotes using nested > to define the quote level.
It’s just that I acknowledge that I have become a fossil as the times have changed.
once you’re in the position where everybody else does it wrong and you’re the last person on the planet that does it right
Thankfully we haven’t reached this position for email usage on technical projects. Operating systems, browsers, and databases still use developer mailing lists, and system programmers know how to format emails properly for the benefit of precise line-oriented tools.
I acknowledge that I have become a fossil as the times have changed
If the technology and processes you prefer have intrinsic merit, then why regretfully and silently abandon them? I’m not saying we should refuse to cooperate on interesting new projects simply because they use slightly worse development processes. But we should let people know about the existence of other tools and ways to collaborate, and explain the pros and cons.
If the technology and processes you prefer have intrinsic merit, then why regretfully and silently abandon them?
Because when I didn’t, people were complaining about my quoting style, not understanding which part of the message was mine and which wasn’t and complaining that me stripping off all the useless bottom quote caused them to lose context.
This was a fight it didn’t feel worth fighting.
I can still use my old usenet quoting habits when talking to other old people on mailing lists (which is another technology on the way out it seems), but I wouldn’t say that the other platforms and quoting styles the majority of internet users use these days are wrong.
After all, if the majority uses them, they might well be the thing that finally helped the “other” people get online to do their work, so it might very well be time for our “antiquated” ways to die off.
I’d like to try to convince you that it’s *good* that plain text email is no longer the norm.
First, let’s dispense with a false dichotomy: I’m not a fan of HTML emails that are heavy on layout tables and (especially) images with no text equivalents. Given my passion for accessibility (see my profile), that should come as no surprise.
But HTML emails are good for one thing: providing hyperlinks without exposing URLs to people. As much as good web developers aim for elegant URLs, the fact remains that URLs are for machines, not people. A hyperlink with descriptive text, where the URL is available if and only if the reader really wants it, is more humane.
For longer emails, HTML is also good for conveying the structure of the text, e.g. headings and lists.
Granted, Markdown could accomplish the same things. But HTML email actually took off. Of course, you could hack together a system that would let you compose an email in Markdown and send it in both plain text and HTML. For folks like us who don’t like WYSIWYG editors, that might be the best of all worlds.
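As a rough sketch of that Markdown hack, here is a Python stdlib version. The `render` function is a toy stand-in that only handles `[text](url)` links; a real setup would swap in an actual Markdown library. Everything else follows the standard `multipart/alternative` pattern:

```python
import re
from email.message import EmailMessage

def render(md):
    # Toy renderer for illustration: convert [text](url) links and
    # wrap the result in a paragraph. Not a real Markdown parser.
    html = re.sub(r"\[([^\]]+)\]\(([^)]+)\)", r'<a href="\2">\1</a>', md)
    return "<p>{}</p>".format(html)

def markdown_email(subject, sender, recipient, md_source):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    # The Markdown source doubles as the plain-text part...
    msg.set_content(md_source)
    # ...and the rendered HTML is added as the preferred alternative.
    msg.add_alternative(render(md_source), subtype="html")
    return msg
```

A client that can show HTML picks the last alternative; everyone else gets the Markdown source, which is readable plain text by design.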
But HTML emails are good for one thing: providing hyperlinks without exposing URLs to people.
That doesn’t come without a huge cost. People don’t realize that they need to know the underlying URL and don’t care to pay attention to it. That leads to people going places they didn’t expect or getting phished and the like.
Those same people probably wouldn’t notice the difference between login.youremail.com and login.yourema.il.com either, though. So I’m not saying the URL is the solution but at least, putting it in front of you, gives you a chance.
As much as good web developers aim for elegant URLs, the fact remains that URLs are for machines, not people.
I’m not sure about this… at least the whole point of DNS is to allow humans to understand URLs. Unreadable URLs seem to be a relatively recent development in the war against users.
Not only do I completely agree with you but you are also absolutely right about that.
Excerpt from section 4.5 of RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax:

Such references are primarily intended for human interpretation rather than for machines, with the assumption that context-based heuristics are sufficient to complete the URI [...]
BTW, the above URL is a perfect example of what one should look like.
Personally, I hate HTML in email - I don’t think it belongs there. Mainly for the very reasons you just mentioned.
Let’s take phishing, for example - and spear phishing in particular. At the institution where I work, people - especially those at the top - are being targeted. And it’s no longer “click here”-type emails - institutional HTML layouts are being used to great effect to collect people’s personal data (mainly passwords). With the whole abstraction, people cannot distinguish whether an email, or even a particular link, is genuine.
When it comes to the structure itself, all of that can be achieved with plain-text email - the conventions used predate Markdown, BTW, and are just as readable as they were several decades ago.
are these conventions well-defined? is there some document which describes conventions for stuff like delimiting sections of plain text emails?
It’s just that I acknowledge that I have become a fossil as the times have changed.
Well, there are just too many of us fossils to acknowledge this just yet.
Not mentioned, but my favorite was Northern Light for its great boolean expressions. None of this fuzzy “maybe this is what you meant” searching.
http://www.searchengineshowdown.com/features/nlight/review.html
I am curious what happened. The situation reported so far is compatible with the CEO receiving a large monetary offer from the NSA for a NOBUS backdoor.
Or a large offer from a company wanting their brand and I.P., whose values are inconsistent with the CTO’s. I wasn’t able to eliminate that possibility. So there are a few options, which include questionable acquisitions and spy agencies. There’s also the possibility that the CEO is just an egomaniac asshole or something. On occasion, I read about people destroying their own careers or businesses for personal reasons nobody can follow.
The situation is also compatible with the CEO being in the right. We’re only hearing one side of this dispute, and it’s from someone who’s admitted not signing agreements to have the company own the work they’re paid for.
I developed most of CopperheadOS on my own time, including before Copperhead existed as a corporation. I have no employment agreement or copyright agreement with Copperhead. I own a substantial portion of the code, possibly most. Copperhead has no license to use it commercially.
From this, it even sounds like Copperhead Inc doesn’t have a license to sell phones running CopperheadOS, which I think is its primary source of income. That’s an untenable situation.
It seems most likely to me this is a typical personality or priority conflict that implodes small businesses regularly; an outside actor like a business or the NSA is unneeded to explain this, especially given the unflattering way either has acted in public about this.
Not signing agreements is shady as hell. The CEO showed up on HN here, later deleting some comments; that link preserved them. So that should give you an idea of how he presents himself in this situation. I agree a personality conflict is a good default assessment. Happens all the time.
The CEO controls the IRC channel. The sentiment there is in support of the CEO. But he also seems to be banning anyone who supports Daniel and has now made the channel moderated.
I’m unclear if you need to build an infrastructure to support existing software/hardware tools or if you need software (and maybe devices) that is tolerant to the conditions of the infrastructure you have available.
This was recently posted to lobste.rs
http://www.lowtechmagazine.com/2015/10/how-to-build-a-low-tech-internet.html
I like Secure Scuttlebutt. There is also maybe Briar:
SSB and, I think, Briar support building applications on top of the protocol.
Ham radio has rules (laws) that may or may not be an issue in places like Africa. For example, in the US, transmissions cannot be encrypted. They can be encoded in a publicly available protocol, but anyone must be able to decode them. If you have those constraints in-country, the data might be of a nature you wouldn’t want transmitted to the world.
It’s existing tools and software (I’m the developer). Essentially, what I’m looking to get out of this post is a slew of ideas for what we can do when the best-case scenarios for getting data out of a remote place fall down.
Right now the core of what we’re doing is selective syncing of data, where the user has control over when they sync their data from the phone or desktop application. We’re moving to something similar to Scuttlebutt, but implemented on top of Zyre on top of 0MQ for the desktop application, and trying to sort out a similar implementation on Android where we can piggyback data through multiple nodes to the central end points.
I actually jumped into ##hamradio on freenode and asked some questions (and got what felt like a little bit of ribbing for not knowing enough about ham radios), but the gist of what I got out of the conversation was that it’s probably only worth doing if you licence a frequency for use, and even then it’s very complicated. I think it’s still worth digging into, just not something that could be implemented on a short turnaround time.
What I’m sort of doing with all these amazing answers and pointers I’m getting from you guys is grouping and organizing them, and implementing the ones we can implement now, so we have multiple modes per platform (iOS, Android, OS X, Windows, Linux) for getting data to where it needs to go. The criticality of the timeframes for what we’re involved in means we need that data as soon as we can for decision making.
I actually jumped into ##hamradio on freenode and asked some questions (and got what felt like a little bit of ribbing for not knowing enough about ham radios), but the gist of what I got out of the conversation was that it’s probably only worth doing if you licence a frequency for use, and even then it’s very complicated.
I’m not one, but the hams I know are a stodgy but knowledgeable bunch who expect you to have done your reading. Anyway, radio might not be out of the question; it will depend on local regulations. In the USA (not true everywhere, for example Canada) there is MURS, which requires no license, operates over the 151–154 MHz range (formerly business radio, so inexpensive transceivers), can be used for digital transmission, and can achieve several miles. At least in the USA, packet forwarding/repeating is prohibited, which limits some interesting uses. You’ll need to ask someone who has local expertise as to what is permitted in each country.
Looks interesting. I like the p2p aspect of it as well as the concept of other interactions besides just chat (although, right now much of it is variations on chat).
Also similar to Secure Scuttlebutt, which, as far as I know, doesn’t have a solid phone client yet.
The big annoyance was that back in the day, with posting boards and guestbooks being all the rage (or just a badly made web page), people would open a blink tag and never close it, and the browser would render everything after it on the page as blinking.
In a similar vein: U+202E “RIGHT-TO-LEFT OVERRIDE”
These days you can annoy everyone with GIF images, though ;-) Which sadly happens regularly in GitHub issues for major bugs.
But at least it doesn’t have a spillover effect of turning every post that comes after it into an animated-GIF meme version of itself (as long as no one makes “memification” a feature of Markdown, anyway).
as long as no one makes “memification” a feature of Markdown, anyway
Goes off to register memedown.com and apply to a startup accelerator.
Client or server? SMTP servers already include retry. I feel like every client I have ever used will retry once networking is restored if it was down when you sent.
The sender MUST delay retrying a particular destination after one attempt has failed. In general, the retry interval SHOULD be at least 30 minutes; however, more sophisticated and variable strategies will be beneficial when the SMTP client can determine the reason for non-delivery.
Retries continue until the message is transmitted or the sender gives up; the give-up time generally needs to be at least 4-5 days. The parameters to the retry algorithm MUST be configurable.
A client SHOULD keep a list of hosts it cannot reach and corresponding connection timeouts, rather than just retrying queued mail items.
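The quoted schedule is easy to sketch. Here is a toy Python version using the RFC’s numbers; everything beyond those two constants is a simplified illustration, not a real MTA queue:

```python
# Retry schedule from the quoted text: wait at least 30 minutes between
# attempts to a destination, and give up after roughly 4-5 days.
RETRY_INTERVAL = 30 * 60             # >= 30 minutes between attempts (SHOULD)
GIVE_UP_AFTER = 5 * 24 * 60 * 60     # stop retrying after ~5 days

def next_attempt(first_failure, last_attempt):
    """Return the time of the next retry, or None if the sender should
    give up and generate a non-delivery report.

    Times are plain seconds on any consistent scale (e.g. Unix time)."""
    candidate = last_attempt + RETRY_INTERVAL
    if candidate - first_failure > GIVE_UP_AFTER:
        return None
    return candidate
```

A real implementation would also make both parameters configurable (the RFC says MUST) and vary the interval based on the failure reason, e.g. backing off faster on connection refusals than on greylisting.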
Shouldn’t they all be, in theory? (At least, if you’re using pop3?)
Back in the day when email was distributed via nightly uucp syncs between individual pairs of machines, it certainly was delay-tolerant. But, at that time, I’m pretty sure email addresses also had routes in them.
Just envision how complex it would be to deploy to Mesos/Kubernetes without the container abstraction…
Nice and complete article!
I was at Twitter for the first production deployment of Mesos. It did not use “containers” as they are understood in the Docker sense. We shipped binaries; there was a shared filesystem you could see if you logged into one of the machines your service was running on, and Mesos set up service discovery to connect the local ports you were randomly assigned at boot time, exposed in (IIRC) an env variable.
For mesos, and, I believe, borg before it, containers came after to make things easier.
containers came after to make things easier
That’s the whole point of it! Obviously you can do without, but like I said, imagine how complex it would be!
I’m not speaking about the configuration details but the whole concept. Can someone point me to a technology that correctly and easily describes runtime conventions such as port binding and volume mounting, but at the same time enables extensibility and security?
Sure, many tools exist to do things here and there, but none of them offer a coherent solution to build on top of.
Sure, containers enabled this paradigm. But it’s not as if networking, data storage/access, scaling, and security didn’t exist before containers. They were just handled differently with different technologies.
Does kubernetes make this easier/better? Probably depends who you ask. And for my take, you still have to run kubernetes on something. That something still has the same needs/requirements as always. (This is where you introduce the cloud layer, or someone else’s computer.)
Does kubernetes make this easier/better?
Similarly to how containers apply the benefits you can easily get from the Maven model to any runtime, I’ve always thought that Kubernetes was just Greenspun’s Tenth Rule, but about the benefits of Erlang.
Honestly, I think it wouldn’t be too bad.
Artifacts would be shipped around as e.g. .zip files downloaded from HTTP servers, instead of containers pulled from container registries. Those artifacts would be tagged with metadata to indicate which runtimes they require, and nodes in the cluster would be tagged to indicate which runtimes they provide; the scheduler would schedule jobs based on those constraints. Kubernetes, at least, already supports this. On balance, I think dropping containers for this aspect of things would actually make the overall system simpler.
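A toy sketch of that constraint matching: artifacts declare the runtimes they need, nodes declare the runtimes they provide, and the scheduler places a job only on nodes satisfying every requirement. (In Kubernetes the equivalent mechanism is node labels plus a nodeSelector; the names and data shapes below are made up for illustration.)

```python
def eligible_nodes(artifact, nodes):
    """Return names of nodes whose provided runtimes cover everything
    the artifact requires."""
    needed = set(artifact["requires"])
    return [name for name, provided in nodes.items()
            if needed <= set(provided)]

# Hypothetical cluster state: node -> runtimes it provides.
nodes = {
    "node-a": ["jvm11", "python3"],
    "node-b": ["python3"],
}

# Hypothetical artifact metadata, as it might ship alongside a .zip.
artifact = {"name": "report-job.zip", "requires": ["jvm11"]}
```

The scheduler then just picks among the eligible nodes by whatever placement policy it likes; the container image never enters into it.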
Resource limits and namespaces would need to leverage underlying OS primitives directly, rather than going through the container abstraction. Probably a proto-container spec (like OCI?) would arise from these requirements naturally. This is where things would get more complex.
In terms of a framework that would allow you to programmatically deploy (virtual) machines, get applications installed, then configure and start them: that’s basically what I built at my last job. I didn’t get to advanced features like automatic load distribution and automatic machine setup when the environment needs more resources, but those were entirely possible on the back of the basic features; the system did set up distributed (peer-to-peer and mastered) services together with their own encrypted virtual networks.
I don’t know what other advanced features are possible in Kubernetes/Mesos that wouldn’t work without containers but AFAIK the security isolation is still better for VMs than containers and the networking should be easier since it’s simplified by not having a machine effectively be a router for the containers.
[Comment removed by author]
If the forum allows it, anyone who can link an image in their signature is “tracking” users and has access to this information.
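For anyone unfamiliar with how little is needed: a hypothetical, stdlib-only sketch of a “tracking” image server. Every client that renders a page embedding the image hands the server its IP address, User-Agent, and a timestamp; nothing about this is specific to any forum.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Smallest valid 1x1 transparent GIF, served as the "signature image".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # This log line *is* the tracking: who fetched it, with what client.
        print(self.client_address[0], self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# To actually run it (blocks forever):
# HTTPServer(("", 8000), PixelHandler).serve_forever()
```

Any client configured not to fetch third-party images never appears in that log, which is the whole argument about browser defaults below.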
The 600MB file, I’d agree with, though.
By the way, it was pushcx himself who replaced the big image with a humorous remark. Might not have been the brightest idea to put it there in the first place.
The lack of response or action from @pushcx is sad to say the least.
He was there when it happened. People saw the picture and joked about it; pushcx removed it and put his own comment about it into my signature. I liked it, other people liked it, I kept it. Some people had a good laugh. At this point, I was still assuming that most lobste.rs users were on desktop.
After compiling the statistics, I felt like, “Oh shit”. Mistakes were made. I can’t turn that back now.
You should have been there when it happened; then maybe you would have a different perspective on it. I don’t want pushcx to get shit from people missing context. Mistakes were made.
Just because @pushcx was “there” when it happened doesn’t mean that it’s OK. You abused the trust we all have in this website, and I’m starting to feel like @pushcx is abusing my trust in him as the sysop to act fairly across the board. Not only did you pry into the privacy of users, you wasted their time, money, and energy doing so.
users weren’t required to download his tracking pixel. they chose to run software that would download it by default. i consider this a lesson about the state of our software ecosystem.
This is a strawman. Every browser behaves this way. What is the lesson supposed to be? Do not trust lobste.rs and move to a better community?
are you using the term strawman to refer to any argument you disagree with? or did i actually construct some sort of strawman?
lynx doesn’t behave this way. firefox doesn’t behave this way, with 3rd party images disabled in matrix. the tor browser would not leak data this way. the lesson is that the web is a hostile environment because we allow it to be. if we all used more secure browsers, sites that are broken by the security features would lose traffic. but we allow it to happen.
I try to use it as little as possible and when I use it, I consider it a hostile attacker that I don’t trust.
If at some point there will be a bitcoin miner on the site, I won’t consider myself betrayed by anyone, as nobody made any promise to me, nor did I expect anything from anyone. I will simply move on with my life. If I am concerned about blowing through my data allowance, I won’t visit random websites in the first place.
It seems that currently there aren’t any javascript bitcoin miners here on this site, but I have no expectations that there won’t be any tomorrow or some other day.
Probably worth probation for a week or two.
Hey, if we are doing the 2000s BB thing, let’s go all in! ;)
[Comment removed by author]
“Embrace, extend, and extinguish […] is a […] strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to disadvantage its competitors.”
Is that actually what’s happening here? Is XMPP actually widely used anymore? I’d bet Lync/Skype for Business users greatly outnumber XMPP users. What competitor is Slack disadvantaging by ending XMPP support? The author of the article actually complains that they’re not putting in the effort to extend XMPP, when they could. I feel like this “Embrace, extend…” phrase gets thrown out a lot because it’s catchy, but I’m not sure it’s really accurate. It seems to me more like “make adoption easy in the beginning, then, when enough people have switched over to your platform, end support for the easy adoption process because new users just download Slack directly.” But there’s no catchphrase for that.
Leverage - existing technologies/standards
Lure - users to your implementation, usually with incompatible features
Leave - behind those who didn’t convert, and don’t give those improvements back to the technologies you used to get where you are.
?
I hadn’t watched a SpaceX launch previously. Seeing those two side boosters return to land and touch down together was pretty amazing.
So the first person to Mars gets a free Tesla?
Wikipedia says they put it into a Mars transfer orbit with no mention of putting it into orbit around (or just plain into) Mars, so it will be a very large solar orbit.
And it looks like the center core crashed into the drone ship and they’re keeping mum for a better first wave of PR.
I did miss the restored video feed with the smoke clearing and no rocket, as visible in the background of the post-launch talk. I was wrong. Technically, it didn’t crash into the droneship.
To the best of my knowledge, SpaceX has given up on having a video link from the droneship survive the approach of the core trying to land. Live video of droneship landings has been previously streamed from a helicopter, but that was still closer to shore than this time. The video feed from the ship itself goes down 100% of the time.
What we see in the last frames is consistent both with a core crash and with a nominal landing, so I am not sure if anyone already knows the fate of the main core for sure…
Yes, thank you.
A few quotes from the statements for the press: http://spacenews.com/spacex-successfully-launches-falcon-heavy/
And now it has been confirmed that the orbit crosses the Mars orbit and then goes almost to the inner part of the asteroid belt.
Actually, in this case I wouldn’t be surprised if they didn’t know the exact orbit before the last burn. If the second stage differs at all from the Falcon 9 second stage, SpaceX cares more about detailed performance data than about the orbit, so it makes sense to make the maximum possible last burn for the second stage instead of trying to ensure a specific orbit (which usually requires performing slightly below the maximum, just in case).
In a sense, the fact that the launch date got delayed multiple times in small increments means that they couldn’t know the exact orbit relative to Mars. Of course, Mars makes a catchy headline, so that’s how the press releases were worded. Now Elon Musk just says «exceeded Mars orbit».
That makes total sense, thanks for the detailed clarification. I forgot this was supposed to be a “test flight”, not an actual mission to deliver a payload to a specific space-time coordinate!
In a sense, there is a wide range of level of significance of the orbit for realistic space missions. We see Falcon Heavy test flight, where you want the things to sound nice and in reality you are collecting the data about the vehicle, not about anything in space. There are missions towards some planet where getting to the planet is what counts. There are solar measurements, where the probe needs to be close to Sun — at some point in time, from some side, but the distance and velocity matter most… but then these are done by gravitational slingshots, and that means that the trajectory must be synchronised very well with the orbital motion of multiple planets, and your launch window is quite tight and doesn’t repeat often.
I used wmii for over 10 years because of the tagging. I haven’t found a “fork” or “clone” that has all the same elements of tiling, tagging, multi-tagging, and dynamically named tags.
I recently gave up (as wmii is long since unmaintained) and got i3 as close as I could to my workflow but without multi-tagging. Maybe there is one that does all of this that I’ve missed. Or I can sort of cobble together my own window management with some scripts and utilities.
Transitioned to i3 from wmii myself, and I miss the multi-tagging too. Otherwise i3 is a solid piece of work. I wonder how difficult it would be to hack in multi-tag support in i3. Have you looked into i3’s code base?
What is multi-tagging? Assignment of multiple tags to a window? Isn’t that part of the definition of tagging in the first place?
(I ask because at some point I have implemented tagging for StumpWM, which of course supports assigning many tags to a single window; then ended up using only one tag for each window in practice — because making groups per-screen-subarea turned out to be more convenient than multi-tagging and global groups)
I haven’t looked into that. I switched recently. If I had the programming skills, though, I would have cleaned up and maintained wmii.
I’ll agree there: I want my phone to have a 3.5mm jack. I can’t imagine how putting the DAC on the cheap end of the equation (the earbuds) can improve quality over a simple and sturdy analog cable with a magnet on one end.
Or Google… I imagine it must be hard at a third party Android device manufacturer to avoid the temptation of following the lead of the two big players.
Google’s move with the Pixel was particularly shit because they made fun of Apple for getting rid of the jack, then got rid of it themselves.
I thought you were going to say something about search … I miss Yahoo/Lycos/Hotbot/Dogpile and getting different results that lead to different places. Fuck the search monoculture.
Glad someone looked at this from the general Strava user perspective. I’m afraid that because the stories focused on the military, average people will miss yet another warning about sharing too much information, and about how what you think is anonymous, useless data isn’t.
Then again, this still isn’t going to change anyone’s behavior.
We tried it at $WORK, this was 3 years ago or so. Super hassle to stand up. You couldn’t upgrade it, you had to destroy your deployment and redeploy. We bought a product that wrapped around OpenStack to help manage configurations and upgrades (I forget the name, they were bought by Cisco) until they went away.
I tried using it just for S3-compatible storage. Got it set up and working, shut it down, and it wouldn’t start back up again. I gave up at that point.
Do people have any guidelines on dealing with nested tmuxes? I often find that I use a tmux locally and one when I ssh. The trouble is keybindings on remote servers aren’t always detected. For example, it’s hard to make, say, Shift-left-arrow work to move between terminals on the remote end.
Perhaps similar to this: https://marc.info/?l=openbsd-misc&m=149476496718738&w=2
I find that when I want to interact with the inner tmux, I have to hit ^B, pause for a beat, then ^B plus whatever. If you’re too fast, the outer tmux slurps it up.
Fixed! @djsumdog, thanks for driving the issue.
https://marc.info/?l=openbsd-ports-cvs&m=153513820421006&w=2