[Comment removed by author]
Aside from being in the VPN space, how are they a shady business?
That’s like saying “Besides the fact he’s a horrible criminal, he’s really a good person.” — I’d argue the entire industry is a cesspool.
I was never a fan of their ircd-seven or many other things going on there. This might make me look into https://www.oftc.net seriously again, and other alternatives that exist.
While I use the freenode network, I really don’t feel comfortable supporting it further if it’s connected with anyone in the VPN business, an entire industry which survives on false promises (aka lies) — and enabling piracy.
What do people think about Apache’s .htaccess, and the fact that nginx doesn’t have it? I know that converters exist; nginx itself provides a writeup explaining why you shouldn’t want it; I have my own ideas but I’m just curious what people here have to say about it.
It’s a security hazard and an IOPS slaughterfest
I understand why they exist but I don’t like them. We can do better than having to read and parse a file on each request.
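For anyone migrating, the approach nginx's own writeup recommends is to translate per-directory .htaccess rules into the server config once, so nothing has to be read and parsed at request time. A rough sketch (the domain, paths, and rules here are made up for illustration):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    # Equivalent of an .htaccess rewrite like:
    #   RewriteRule ^blog/(.*)$ /index.php?post=$1 [L]
    rewrite ^/blog/(.*)$ /index.php?post=$1 last;

    # Equivalent of "Deny from all" in a protected subdirectory:
    location /private/ {
        deny all;
    }
}
```

The config is parsed once at startup (or reload), which is exactly the IOPS point made above.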
So eventually, after we platformize our hack and are comfortable from having run parallel infrastructures for some time, we’ll be handing off our DNS infra to the folks that probably know how to do it better than us.
So far I’m 2 for 2 on “companies I’ve heard of running their own complicated DNS set up, despite it not being a core part of their business” vs “companies who would have been far better off outsourcing their DNS.”
What does this look like when you’re in your own datacenter?
In both cases you get an API and UI for managing your publicly visible DNS entries because every worthwhile DNS provider does that.
You can still outsource DNS in that scenario. It may make less sense, but it’s just as possible as when you’re entirely cloud hosted.
I think that an ideal state is “companies should only do the core part of their business”, but the reality is that “companies have to own whatever they need to own to ensure their customers can access their product”.
If that means running your own code or your own DNS or your own fileserver then that’s what you gotta do. It’s obviously more expensive but some companies don’t have the luxury of saying (as I’ve heard many on hn say) “amazon is down lol that means the internet’s broken guess we can go to lunch until it’s working again”.
This can’t possibly be true if “own” means “run themselves”. Every company that sells products using the internet needs, amongst many other things, DNS service. Proportionally very few of those companies are capable of running a DNS service with higher uptime than, say, Route 53.
I missed the part of the blog post where they announced patches for all affected devices, and/or explained that this is unfixable?
Is this what the iOS 10.3.1 patch was about yesterday?
I wonder if Chromium devs are taking all these bugs as indicators that the current state of coding “secure” extensions is pitiful. I’d argue this is partially because of the lack of documentation on how to do things right (like Tavis’ explanation of how you have to declare the trusted variable… who would have known that?).
Dude makes HackForums accounts, writes a RAT (Trojan) claimed to be for “budget conscious school administrators” and the like, sells it for $25 a pop, gets arrested and charged with creating and selling a hacking tool.
Seems reasonable enough. I’m not a fan of the FBI but I don’t see this guy’s defense standing up. He’s using the tired excuse of “it’s for Windows administrators, I had no idea hackers would use this for bad stuff!” while he exclusively sold it on HackForums. Get real.
This seems like a really dangerous argument. The vast majority of security research tools are created with the certainty that they will be used for illegal purposes as well as (and possibly more than) for legal ones.
Should the creators of metasploit or the aircrack suite or Kali Linux be sent to prison?
I think there’s an educational aspect to metasploit. I’m having a harder time figuring what I’d learn from a tool that remote enables webcams without activating the recording light.
It’s a gray area, but I don’t think it’s impossible to discern whether one is rgb(1, 1, 1) or rgb(254, 254, 254) flavored gray. And maybe the FBI has their gamma adjusted a little differently, but not that differently.
Despite the Daily Beast’s best efforts at sugar-coating it, I didn’t feel all that sympathetic.
I feel incredibly sympathetic. I don’t actually want it to be illegal at all to sell a tool that can be used for malicious purposes, and this attitude of “the guy was probably not really on the up-and-up even if he wasn’t actually hacking anyone himself” is incredibly dangerous to everyone’s freedom. I hope the FBI loses hard in court.
I don’t think it is illegal to sell a tool that can be used for malicious purposes. What’s illegal is knowingly transmitting such a tool to someone who is using it for malicious purposes. The intent is what’s important, not the particular features of the applications.
(This is 18 USC 1030 (a)(5)(A)).
If you’re a gun seller, and someone comes into your shop and says, “I’m looking for a weapon to use to murder someone. Do you have any good murder-guns?” you can’t sell him a murder-gun without committing a crime, even given the ridiculous argument that guns have non-murder uses.
I didn’t feel sympathetic either, but feelings are not objective, and there is some life on the line here. I believe this tool is sort of like the knives that we give to children, who get immense enjoyment from them. I liked knives as a small child, and I liked learning about security later, and because of indulging myself in knowledge about tools like that, I became far more capable as an engineer who actually makes useful things for society. I think the world would be a better place if more people played with these sorts of malicious tools. Our systems and rules would benefit, in my opinion.
Well, like I said, I don’t think this is a great tool for learning. I thought about this a bit more, and there are some comparisons one can make to the LastPass vuln taviso just disclosed. Certainly there’s enough detail in his report to allow someone with malicious intent to do something naughty. But what didn’t Tavis do? He didn’t make a weaponized WordPress plugin that harvests passwords. He didn’t sell it for $25. He didn’t impose a license key to limit who could learn from his work. There’s more security information in more places than ever before. The education of future generations will be just fine.
Oh, thanks for mentioning Firesheep. I think that’s a good case. But the author of Firesheep never got a visit from the FBI, as far as I know. So it’s worth considering what’s different.
They didn’t try to sell it, and they credibly didn’t know what the people downloading it were doing with it (at least in part because they weren’t selling it). Also, I’m not aware of any crimes linked to use of Firesheep.
Selling it, especially on a forum called “HackForums”, feels qualitatively different from producing it in the first place.
If this were advertised on Hacker News, most people would feel the same way. Do you know anything about HackForums? It’s important to go beyond how things feel, especially when life-destroying consequences are at hand.
I’m not sure that’s germane here. The “hack” in “HackForums” does in fact refer to the kind of “hacking” “most people” think of when they hear the word.
It’s obvious to us what “hack” in “HackForums” stands for, but even if it were “cannibalrecipes.com”, selling cookbooks there wouldn’t be a malicious or harmful act. This is not the same as giving a gun to a criminal. While the tool he sold was preferred by some malicious users, its existence didn’t affect the availability of interchangeable ones.
Of course not, but nobody is saying they would be.
Metasploit wasn’t sold at all, let alone on “HackForums”.
The article seems to indicate that he’d been active on that forum as a young kid.
If you wrote a new tool, wouldn’t you tell your online friends about it?
It looks bad, but his proactive efforts to prohibit illegal use should be sufficient to demonstrate that criminal use was not the only use and not his sole intent.
He was more than simply active on HackForums. He was part of a secret Skype group of people who met on HackForums that included Zachary Shames, who was selling keyloggers to people specifically so they could own machines, and he sold services to Shames.
Probably at least one layer away from somebody explaining their criminal plot to you.
Is HackForums suspicious? I was under the impression it was mostly a place where curious security enthusiasts traded tutorials, rather than a carder market or something like that.
People can visit the site and come to their own conclusions, but that’s not the impression I got browsing around the hackforums site for a few minutes.
If it’s not suspicious, sketchy, and borderline illegal, then I don’t know what would be. “We’re just learning and having fun,” is not at all the message I got.
I use LaTeX every day and it is fantastic but I worry that if I didn’t have people at my job supporting the LaTeX build environment I would never be able to use it because of the confusing toolchain required. As it stands I make my edits to .tex files in vim and run make to get a PDF.
This article confirms my fears that LaTeX requires tribal knowledge to get started with.
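For what it’s worth, the toolchain can be tamed somewhat with latexmk, which handles the rerun-until-stable dance (pdflatex, bibtex, pdflatex again…) for you. A minimal Makefile sketch, assuming TeX Live’s latexmk is installed and using “paper.tex” as a placeholder name:

```makefile
# Build paper.pdf from paper.tex; latexmk reruns pdflatex/bibtex as needed.
all: paper.pdf

paper.pdf: paper.tex
	latexmk -pdf -interaction=nonstopmode paper.tex

clean:
	latexmk -C

.PHONY: all clean
```

With something like this checked in, “edit in vim, run make” works without anyone needing the tribal knowledge of which tool to rerun when.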
The condemned title doesn’t make sense to me, but I like where you’re going with this.
I’d mention that you should be able to hold on to PIDs of all first order child processes spawned from your shell script. Additionally, if you kill your shell script and none of the child processes were started with nohup, wouldn’t they die as well?
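A minimal sketch of that bookkeeping, covering first-order children only: `$!` holds the PID of the most recently backgrounded job, so you can collect them as you go and clean up in a trap.

```shell
#!/bin/sh
# Collect the PIDs of first-order children as we spawn them.
pids=""
sleep 1 & pids="$pids $!"
sleep 1 & pids="$pids $!"

# Kill anything still running when the script exits or is interrupted.
trap 'kill $pids 2>/dev/null' EXIT INT TERM

wait   # children started with nohup would survive the HUP; these will not
```

As noted above, this only sees processes the script started directly; anything they fork in turn is invisible to it.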
The condemned title is a self-deprecating joke made from Henry Spencer’s quote:
Those who do not understand Unix are condemned to reinvent it, poorly.
as I am not entirely confident that my proposition makes any sense.
Problem is that most of the time, if your script is complicated, it will not be that easy. Processes can fork children of their own, daemonize, or otherwise escape the PIDs you recorded.
And it does not have to be a shell script. AFAIK it’s just a shell convention that the shell sends SIGHUP to children it spawned. Also your process can have children that it did not spawn: fork multiple children and exec in the parent. And going through /proc is racy.
Yes you raise good points. Does any *nix track the lineage of a process? Can I trace every process back to PID 1 and know every mid point along the way?
That won’t work reliably, as Unix can reuse PIDs.
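On Linux you can at least snapshot the lineage by walking the PPid entries in /proc, with exactly the caveat raised above: PIDs can be reused, so the chain is only valid at the moment you read it. A rough sketch:

```shell
# Walk from the current shell up to PID 1 via /proc/<pid>/status.
chain=""
pid=$$
while [ "$pid" -gt 1 ]; do
    chain="$chain $pid"
    pid=$(awk '/^PPid:/ {print $2}' "/proc/$pid/status")
done
chain="$chain 1"
echo "ancestry:$chain"
```

If a process in the chain has already exited (and been reparented, typically to PID 1), the walk you do afterwards reflects the new parent, not the original spawner.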
2nd comment on the post, amazing
How would 2fa have prevented any of this?
Don’t miss that commenter’s awesome circa 1995 website and theories on global warming.
Perhaps once a post is >1 week old anyone who comments on it gets shown a ‘bump’ link (next to suggest, flag, hide) that knocks it up to somewhere on the front page.
I kinda like this idea of “bumping”, it could also solve the problem of reposts as well: if I find that a webpage I like was already submitted a year ago, instead of re-submitting it, I just click “bump”.
I’m a big proponent and supporter of iproute2, but even I can’t help feeling like this is a huge step backwards for usability.
If you looked at this (and every other) cheatsheet without column labels would you be more likely to think the left column (with single commands named appropriately) or the right column (many commands with hard to remember switches and options that aren’t intuitive like ss and ip neighbor) was the correct or new way? If I didn’t know about the two sets of utilities I’d bet ifconfig and iwconfig and netstat and route are supposed to replace that antiquated cryptic ip thing.
Computer syntax is weird these days. There’s this, and systemd, and PowerShell. Then again we’ve always had openssl and gpg, and those are dumpster fires of the CLI — and I use them on a daily basis.
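For anyone keeping the cheatsheet handy, the common pairs look roughly like this (assuming iproute2 is installed, as it is on any modern distro):

```shell
addrs=$(ip addr show)      # replaces: ifconfig
routes=$(ip route show)    # replaces: route -n
neigh=$(ip neighbor show)  # replaces: arp -a
sockets=$(ss -tln)         # replaces: netstat -tln
echo "$addrs" | head -n 1  # e.g. the loopback interface line
```

Whether `ss -tln` is more or less cryptic than `netstat -tln` is, as the comment above suggests, very much in the eye of the beholder.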
Flag: because unvarnished politics.
I just posted the actual code behind this story.
This article presents the output of a really cool technique I haven’t been exposed to before, and discusses the pitfalls of this analysis as well as what was done to correct for that. If I only saw the code I wouldn’t understand how this particular ML worked.
the judge was dismissive. “That’s a policy argument,” he stated. “The fact that the thing you’re doing has good effect doesn’t mean that the thing is lawful.”
Wow, I really love this line. Good job judge for taking a hard look at this.
TL;DR: I know nothing about technology and here’s how I got my domain name back, with little real details.
I agree that this was better than most write-ups by laypeople on such situations. Further, it details the experience one is likely to go through. It also makes it clear to other laypeople you can’t trust the hosting sites to help you protect your domains. I bet many would’ve assumed otherwise.
Don’t forget that she ignored more than one sign that something was off, especially the notification from Google about a new login.
I think the writeup is pretty good, and I also wonder if any writeup that is a better postmortem from our point of view would be harder to relate to (for the people with non-technical skill sets). I don’t know whether the original post as it is will make any people pay more attention to suspicious situations, though.
I don’t understand why you felt this comment was warranted.
Don’t forget the “security advice” at the end.
I think it illustrates why it’s a bad idea to share an account between multiple people, even if they’re your significant other.
Not just «even if they’re your significant other» — even if you trust them not to do anything wrong and even if they do not betray that trust.
So not too different from most Medium posts on technology.
Admittedly they’re comparatively rare, but I have seen some pretty in depth technical write-ups on Medium.
This person is the author of competing software and is attacking LastPass on two shaky bases:
He reported the idea of a vuln to them 6 months before Tavis did, and they told him it wasn’t applicable because he didn’t provide an exploit. He complained about this and goes on to basically say “yeah but when Tavis did it they cared! Waaah”. Think about it from a security triage perspective. You’re running a bug bounty and get a load of reports every day. Nearly all of these are bunk. You get one that says “there’s something here I know it!” and provides no evidence. You close it as non-exploitable and move on. If this guy cared he would have put in more effort instead of whining.
He trivializes LastPass' prompt responses and fixes as PR stunts instead of what they are: a company taking product security seriously and deploying effective mitigation right away. He complains that they don’t fix every case, just what was exploited/reported. Well, duh. Has this guy actually dealt with bug reports or does he code at the bottom of a well with a “do not disturb the grue” sign posted at the top? The first step is mitigating the immediate issue, then once the damage is contained you consider going back and reevaluating the architecture such that similar things can’t happen again. And for those of us working on real products, how often do you get the time to do such a thing and implement larger changes which may introduce new bugs?
I do agree with the ongoing critique that they left an old bug exploitable, but they fixed it promptly when it was reported again with a new exploit.
The fact is that LastPass has the best browser and Android experience. It’s why they keep getting customers. Unfortunately, by playing inside a browser you open yourself up to so much more attack surface than if you stayed as a native application. As a LastPass user I accept that risk when I install the extension.
I agree that it’s not his responsibility to audit LastPass but he took it upon himself to do so, filed a crappy report, then whined about it getting ignored later. That’s not how it works. If he wants to be taken seriously he needs to put in effort instead of half-assing it. Either do it or don’t, he chose to do it.
I would like to point out a sort of… “meta” problem:
If you have to dedicate a huge chunk of your constructive criticism / critique to curbing people who would somehow find said work offensive, or take it personally - your biggest issue, to me, is the community!
The problem is that the Linux-using community is huge and fragmented into groups of people with varying motives and levels of social skills. You’ve got everyone from reasonable devs who use Linux because they understand it and its limitations, to staunch and angry supporters who yell that Linux is the One True OS. Because Linux is opt-in (unlike Windows and Mac), the people who run it will be more likely to care about minutiae and articles like the OP, so all sorts of people will end up reading the article and reacting to it unless the author can firewall off some of those who would derail the discussion.
So then the question for me becomes: Is this issue something inherent to large, fragmented social groups? or is it something entirely unique to the Linux community?
I think it’s present in any sort of large group but more so in the Linux community because running Linux is sort of reserved for nerds (like me) and we have a problem with social skills and emotional intelligence.
I switched away from Linux a couple of months ago (happy with FreeBSD now).
However, I do maintain a number of Linux systems for people I know. I didn’t force any of those people to use Linux, but some of them had just never used a computer before. It actually used to be extremely low maintenance. Things just worked. They were browsing the web on their netbooks, playing a small game, or doing other things.
Then, rather suddenly, the adoption of three big changes on Linux caused a lot of problems across the distributions: systemd, PulseAudio, and NetworkManager.
On a couple of issues that the systems ran into I thought “well, it’s early stage. It will make things easier, eventually”.
So I kept updating. And a lot of those updates were like “things get fixed, but that now needs some kind of workaround”. I trusted that it would get better, especially because a lot of distributions adopted the stuff, so lots of people were working on it.
I started to make some jokes about things becoming worse every once in a while, but I wasn’t serious. Those were rather big changes, so it would sure settle.
NetworkManager was easy. You could just turn it off and set things up using dhclient, dhcpcd or even static configuration, whatever fit best. If you needed more you just scripted it or used an alternative. But I tried to stick close to what the distro goes for, because those were machines for computer novices. People who didn’t get tabs, etc.
Yes, I also offered Windows and so on, but those were even more complex for them (they mostly wanted to browse the web and virus and firewall warnings just can’t be dealt with). Also I couldn’t really help them on Windows.
So that leaves systemd and PulseAudio. Here systemd was the smaller issue. While the distros and I ran into some bugs, those were actually fixed. And at that time systemd was still more “just an init system” with nicer/different syntax. There was some journald work, because I had some scripts parsing log files that I changed over to the journald export format. What mostly bugs me there is that they still make outright false claims about their compatibility promises. They claim not to have changed the format since a certain version. That claim is simply wrong. They did change a couple of things since then, and older Red Hat releases show it, but whatever.
At some point it seemed like most stuff was fixed, which was great. Faith restored. They still had frequent updates, but everything was working and I didn’t care so much.
Until they started adding all that pseudo-optional stuff that interfered and broke things again. There was quite a lot of it in the last year.
Pulseaudio was a similar story. There was so much breakage. However, nothing had a really hard requirement on PulseAudio back then; Skype was the first one I remember. So I was able to pretty much disable it. The next evolution was a script that would simply kill PulseAudio before an application started. That fixed pretty much every problem. The people were fine with that, because they barely used multiple applications at the same time, so nothing would suddenly stop working. Either they surfed or they played video games. Killing each other’s audio wasn’t a problem, so that hack was kind of okay. Also the bugs were to be solved soon anyway.
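The kill-then-launch hack described above can be as small as a wrapper like this (a sketch: `pulseaudio --kill` asks the daemon to exit, after which apps fall back to ALSA; the application name is whatever you launch):

```shell
# Run a program with PulseAudio out of the way.
run_without_pa() {
    pulseaudio --kill 2>/dev/null || true   # no-op if PulseAudio isn't running
    "$@"
}

# usage: run_without_pa mygame --fullscreen
run_without_pa echo "audio path clear"
```

Note that many distros autospawn PulseAudio on demand, so a wrapper like this may also need `autospawn = no` in client.conf to make the kill stick.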
Just know: on all of those there were bug reports. I’m not talking about misconfigurations here. I had misconfigurations, but those were fixed. Sometimes the documentation did not exist or was outdated (yes, the official one) or similar, but eventually those things were fixed.
But then there were a number of kernel and pulseaudio updates that completely ruined pretty much everything again. This was a major annoyance, especially because the resource usage of pulseaudio kept growing and audio got worse and worse. I had apps crashing with PA default configurations, etc. That was when I decided I’d switch away from Linux. I want to get things done, not deal with pulseaudio all day.
First I looked at distros that didn’t have PulseAudio and the other tools, but then I remembered FreeBSD, which I had used ages ago and found the most pleasant experience, and there I was able to simply disable pulseaudio in the ports and be done.
Later I learned that they also have sndio from some of the OpenBSD folks and that it evidently also works on Pulseaudio. To make it short: Sndio is effectively Pulseaudio for Linux and the BSDs, just working. Completely hassle free, no big resource requirements. All good. So if you are on Linux and hate Pulseaudio, you might wanna give that a try. I don’t know if it’s official or just patched, but recent Firefox also works with it.
Anyways, this is why those three products seem to be the “Main Linux problems on the desktop” in 2017.
systemd actually seems to be becoming more sane. Well, maybe not sane, but it has at least stopped constantly breaking stuff. I think this is because there are sane and experienced people sending in patches these days. I still think OpenRC, used by Alpine, Gentoo and others, is the way to go, but honestly, I don’t think one should really care about the init system so long as it is working. systemd seems to have gotten into a state where it works most of the time.
NetworkManager… I don’t get why, but that one seems to constantly mess up networking. I don’t see why, and I don’t care enough to dig in, because it’s the easiest thing to replace to get networking just working. But every now and then I see others who rely on it struggle with it, and it’s just really odd to me. I really don’t want to blame the devs here though; this is the software I know least about. I just got into the habit of dumping it and living without it, instead having a setup that works perfectly for me. I really don’t think badly of it. It doesn’t fit me, but maybe it suits other people.
It still seems that people run into these “new” issues a lot. None of these issues existed when I first installed those systems. All I did was follow the updates for those three pieces of software. So I’d say those are the biggest problems right now.
I think those are possible solutions:
I really can’t see the games problem. I just don’t see it. Nearly all the games I want exist natively on Linux: Payday, Life Is Strange, Rust, The Witcher 2 (bigger games according to Steam), War Thunder (free to play), and pretty much all the indie games work perfectly and natively.
All the older Windows games work perfectly in Wine, and a big part of newer Windows games do too. I haven’t had Wine run badly in the last five or so years. Oh wait, I did once, but that was when PulseAudio decided to occupy a full core; switching the game over to ALSA made it work perfectly. I even use stuff like Pipelight to have a recent Flash in my browser.
In fact I was most surprised that a tool that modifies a game’s window, the title bar and so on by patching memory worked under Wine (if you enable the Windows-style title bar, of course).
Wine is probably one of the best pieces of software, because here I get more windows applications working than on a recent version of Windows.
About the big desktop environments: I agree. The only bigger desktop environment that appears to work really well is XFCE. However, I only tried KDE, Gnome and Unity for comparison.
Also I think it’s very hard to measure security. Simply counting all the vulnerabilities detected by researchers seems to be a really bad approach.
My guess is that the main reason there are so many infected systems is that many projects are done without a system administrator/devops/platform engineer/system engineer/… or, if someone has that title, without real experience or understanding of the system and the technologies in use. This happens far less in the Windows world, indeed. Also, libraries often don’t get updated, even when vulnerabilities are known.
But those are all just guesses based on what I see in the real world. So I hope it’s clear that these views are subjective.
Also mostly agree with the article, otherwise.
I do maintain a number of Linux systems for people I know
How does that mix with the following?
I had some scripts parsing log files
I put my wife on Ubuntu LTS and had a very low maintenance system. She made the switch to systemd without noticing, because the system was never modified at that level. The only reason for sudo was updates. There certainly were no scripts parsing log files or anything.
Honestly I wish I understood NetworkManager and why it exists. It never cooperates which is very much un-Linux.
Here’s a question: does the CIA hack smart TVs over the internet, or does somebody go into the target house and update the firmware with a USB stick? Does this affect how safe we all are?
Regardless of what the CIA does, we know that smart TV manufacturers are selling your information to advertisers out-of-the-box (or if you have an older model, with a firmware update).
But that’s neither here nor there? How does that relate to the CIA making people unsafe, or to yesterday’s release? You could have said exactly the same a year ago.
I guess my concern is that “someone” is tuning into my home, regardless of who they are. The CIA leak just brings more attention to that issue.
I guess it seems the CIA leak is an all purpose hobby horse which can be used to support any argument if you don’t look too closely at what was actually leaked.
Re: Your original question, it doesn’t look as though TVs can be infected from the internet. Weeping Angel might have been developed as a hackathon project or as a proof of concept.
From the looks of it, that hack was merely a thing someone developed and proposed for use. It doesn’t appear to be weaponized or non-attribution level stealthy.
The TV in question isn’t known to be remotely exploitable. It’s quite hard to hack a TV in someone’s house, not least due to NAT — unless the TV opens a port via UPnP, which I think was not the case here.
TL;DR: They weren’t hacking TVs and if they wanted to they’d need to have physical access and plug in a USB drive to the device.
For some reason, the CIA are less-than-forthcoming about their abilities.
I’d be surprised if they couldn’t hack a TV via the net, but I doubt they’d risk losing that capability (by a target intercepting the malware) when they could send someone in person.
Git repositories are portable, and ensuring you write a decent commit message encompassing the “why” of the changes means your repo will always have a solid history and a solid foundation. It’s just lazy to bash the keyboard instead of writing a sentence about why you’re making a change. Also, it takes longer to find some clever Unicode characters (unless you’re using the MBP Touch Bar, I guess… thanks Obama).
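To make that concrete, one way to capture the “why” is a subject line plus a body paragraph, here demonstrated in a throwaway repo (file names, emails, and the message itself are placeholders):

```shell
# Set up a scratch repo so the commit below is self-contained.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email dev@example.com && git config user.name Dev

echo 'retry_count = 5' > config.ini && git add config.ini
# First -m is the subject (the "what"); the second -m is the body (the "why").
git commit -q -m "Raise retry count to 5" \
    -m "Upstream flakiness caused spurious failures; 5 retries covers the
observed worst case without masking real outages."

subject=$(git log -1 --format=%s)
echo "$subject"
```

Six months later, `git log` then answers the question the diff alone can’t.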
He and the W3C only have as much power as we assign to them. If we convince companies like Mozilla, Google, and Microsoft (hey, crazier things have happened) to fight against this standard and the W3C, then we can challenge the legitimacy of a decision users seem to wholesale disagree with.
If we convince companies like Mozilla, Google, and Microsoft
Who’s “we”? I think in Chrome you can’t even disable EME anymore to “protest”. The only thing I can do is never buy anything with DRM, and so far I managed to avoid doing that. I only started buying DVDs after their DRM was broken. For the web: if youtube-dl doesn’t work I don’t watch it.
…users seem to wholesale disagree…
Filter bubble: I don’t think users wholesale disagree. They just want to watch Netflix and HBO on their iPad and couldn’t care less about DRM if it more or less lets them do what they want. They don’t care that watching video doesn’t work on some nerd’s BSD machine, or that libraries will have trouble archiving the content for future generations.
So you’re saying Lee and the W3C only have the power assigned to them by enormous multinational corporations owning the browsers we don’t even pay money for?
No, I’m saying they’re a consortium browser vendors choose to follow. Microsoft has a long history of ignoring web standards, for example.
If “we” do this, then aren’t we setting a very dangerous precedent for these huge companies to, once again, go forward with nonstandard implementations of everything?
I’m not saying I’m for DRM, I’m just saying I’m worried about giving Google and Microsoft the a-ok to just ignore the standard when “we” don’t like it.