I frickin’ love it. Already signed up.
Even if there is no discussion, having one place for all papers (tagged) is a valuable resource.
Exactly. But in the long term I would love to see detailed discussion about the bits and pieces of a paper.
I don’t know if I realistically can, or want to use Haiku, but it’s great to see some non-UNIX designs still being alive.
It’s surprisingly usable. It’s lacking features that I have to have for work (multi-monitor support, a browser capable of using Google Hangouts, the ability to run VMs at near-native speed, and reliable disk encryption) but if it had those four things it could be my daily driver.
(Google is switching to “Meet” now; I should check WebPositive to see how it handles it, and I know that once upon a time there was an encrypted block device driver in the tree…in my Copious Free Time I should try to help on that maybe…)
For alternative OSs, the best thing to do is implement VNC. Through VNC you can access OSs that specialize in accessing the web, such as Windows, Linux, and … Chrom/iumOS. ChromiumOS is probably the best, since its sole purpose is to interact with the omnipresent OS that is the web.
Eventually I will figure out some easy setup to do this.
The encrypted block device support still exists as a third-party package (though it’s maintained by one of the core kernel developers); it just does not support having the boot device be block-encrypted.
Well … Haiku isn’t really a non-UNIX design. We have some pretty anti-UNIX tendencies to be sure, but we also use POSIX filemodes, POSIX process model, pretty good POSIX API compliance, BSD sockets, … etc.
Why emulator? It’s better to make an extensible, open hardware interactive toy platform.
I wonder, did they really have to use raw assembly for that. That kind of codebase must be a real headache to maintain.
In my opinion an emulator would be a cool thing to tinker with and I am sure I’m not alone in thinking that.
The processor in the Furby is apparently a Sunplus SPC81A. Its datasheet describes it as having a 6502 instruction set, although it only has 69 instructions as opposed to the 151 in the 6502, so it's likely to be missing some registers. It comes with 80K bytes of ROM, shared between audio samples and program code, and 128 bytes of working RAM. While it can run at up to 6MHz (at 3.6V-5.5V), circuit diagrams for the Furby show a 3.58MHz crystal in use, running at 3V.
So why raw assembly rather than some higher-level language compiled to ASM? The answer should be clear from the above. With just 80K bytes of storage, much of it likely consumed by audio samples, the only way to make a program fit in the available space, let alone perform well on such limited hardware, is to write it in assembly.
An open-hardware, interactive toy platform would be cool. I have seen some projects where people have hacked a Raspberry Pi Zero or another such small board into the guts of a Furby to give new life to an old toy.
:)
The original 6502 only had 50ish instructions documented, iirc, with some undocumented. The later iterations added some instructions (the 65C02 had a bunch more).
Also, the 6502 only had three registers that were part of instructions: the accumulator, X, and Y. It also had a stack pointer and program counter which were indirectly accessible. Again, some of the later iterations added more (I think the NES-specific 6502 added a dozen or more registers).
In the source code, you see statements like:
Bank EQU 07H
… associated with comments like “BANK SELECTION REGISTER”. Those aren’t actual registers. Most instructions used two-byte memory addresses, a full 64k. The addresses from 0000 to 00FF, however, were the “zero page”. The 6502 had a special addressing mode which allowed one-byte addresses into the zero page, which many programs used as registers.
The same instructions and addresses were used to access both RAM and ROM, so if that total was larger than 64k, then you needed to use hardware switch instructions to swap memory banks. Again, though, later iterations expanded that (though only the wild later iterations, like the 16- and 32-bit versions of the processor).
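To make the bank-switching idea concrete, here is a toy Python model. The window location and register address below are assumptions for illustration only; they are not the actual Sunplus memory map:

```python
# Toy model of 6502-style bank switching: one 64K address space, with a
# window of high addresses redirected through a bank-select byte that
# lives in the zero page (like the "Bank EQU 07H" location above).
class BankedMemory:
    BANK_REG = 0x0007        # zero-page bank-select "register"
    WINDOW = 0x4000          # assumed: addresses >= $4000 hit the ROM window

    def __init__(self, rom_banks):
        self.ram = bytearray(self.WINDOW)   # low RAM, includes the zero page
        self.rom_banks = rom_banks          # list of bytes objects

    def read(self, addr):
        if addr < self.WINDOW:
            return self.ram[addr]
        bank = self.ram[self.BANK_REG]      # which bank is mapped in now?
        return self.rom_banks[bank][addr - self.WINDOW]

    def write(self, addr, value):
        if addr < self.WINDOW:              # the ROM window ignores writes
            self.ram[addr] = value

mem = BankedMemory([bytes([0xAA]) * 0x1000, bytes([0xBB]) * 0x1000])
mem.write(BankedMemory.BANK_REG, 1)   # the "STA Bank" step, in 6502 terms
print(hex(mem.read(0x4100)))          # 0xbb: same address, different bank
```

The point is that the CPU itself never sees more than 64K; the "register" is just an ordinary zero-page byte that external hardware watches.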
Thanks for sharing, I am fascinated by old 8-bit processors and processor design in general. I was sure the 6502 had 151 instructions, but maybe I misread it somewhere and am talking out my ass. The Sunplus SPC81A only has the X register.
Looking at the data sheet I can see that it says:
To access ROM, users should program the BANK SELECT Register, choose bank, and access address to fetch data.
Given it's described as having a 6502 instruction set and not actually being a 6502, I'm guessing that's in addition to whatever opcodes they chose to implement.
“Popular home video game consoles and computers, such as the Atari 2600, Atari 8-bit family, Apple II, Nintendo Entertainment System, Commodore 64, Atari Lynx, and others, used the 6502 or variations of the basic design. “
People might want to play games or try to code within a similar setup. There’s already a large number of people doing stuff like that.
In Canada most home routers (well, from Bell at least, which is one of the two dominant ISPs) come with a long randomly generated wifi password stamped on them.
Specifically 8 characters long. And for no apparent reason it is limited to hex ([0-9A-F]{8}), giving about 4 billion possible passwords. It takes about a day on my GTX 970M to try every single one against a captured handshake.
The default ESSIDs (wifi network names) are of the form BELL###, so there are a thousand extremely common ESSIDs. Apparently WPA only salts the password with the ESSID before hashing it, and the result can be checked against the publicly broadcast handshake. In a few years of computation time on a decent laptop (far less if I rented some modern GPUs from Google…) I could make rainbow tables for every one of those IDs covering every possible default password.
On the bright side it looks like this new method extracts a hash that includes the mac addresses acting as a unique salt, so at least the rainbow table method will still require capturing a handshake.
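The "about a day" figure checks out on the back of an envelope. A quick sketch; the guesses-per-second number is an assumption for an older mobile GPU, not a measurement:

```python
keyspace = 16 ** 8            # [0-9A-F]{8}: 4,294,967,296 candidates
rate = 60_000                 # assumed WPA2 (PBKDF2) guesses/sec on a GTX 970M
hours = keyspace / rate / 3600
print(f"{keyspace:,} candidates -> {hours:.1f} hours")   # ~19.9 hours
```

WPA's deliberately slow PBKDF2 derivation is the only reason this takes hours rather than seconds; with a fast hash the 8-hex-digit keyspace would be gone almost instantly.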
I never had this realization. Now my head has exploded.
What tool do you use to try these combinations? And is it heavily parallelized? To me 4 billion should not take a whole day…
I experimented with pyrit (24h runtime, builds some form of rainbow table; I wrote a short program to pipe all the passwords to it) and hashcat (20h runtime, no support for rainbow tables, supports generating the password combinations itself via command-line flags). They are both heavily parallelized, with 100% utilization of my GPU.
My GPU is a relatively old GPU in a laptop with shitty cooling, which may contribute to the runtime.
Running on a CPU it said it would take the better part of a month.
Interesting. While waiting for a reply, I thought to myself: I wonder how much it would cost to run it on Google Compute with the best hardware. Could be worth it to those who want wifi for a week or longer without paying anything. Spooky.
In Luxembourg every (Fritz!)box comes with a password written only on the leaflet (not on the box itself) that is 20 hex characters (5×4 chars). It's a pain to type at first, but it seems like a good one.
Quality and cost-effectiveness are significant issues for companies developing their own hardware. The GameBoy meets both of those compared to the alternatives.
Reminds me of this patent https://patents.google.com/patent/US5876351 relating to the use of the GameBoy in ECGs (something I believe was actually applied in the German market). There is even a presentation citing ECG software with custom hardware here https://webpages.uncc.edu/~jmconrad/ECGR6185-2013-01/Presentations/Chitale_Paper_Presentation.pdf
I often see people disregard these sorts of things because it was sold as an entertainment system for children, but the long and short of it is that the GameBoy was/is a decent ARM based machine with great battery life, built like a god damned tank!
I was just at the Louvre and they use a Nintendo 3DS for their entertainment guide. Cheap to replace, easy to program for, durable since it’s designed for kids to drop, built-in wifi and update mechanisms; probably the best choice you could make in proprietary hardware.
Actually, in 2011, Nintendo and Satoru Iwata (RIP) were really into the 3DS and interactive museum guides. Nintendo gifted 5000 units to the Louvre and helped develop the software for its use as an audio guide. As a volunteer at a computer and video game museum, that sounds like the perfect solution for us. However, I am not sure we can convince Nintendo to be that generous to us.
I figured it had to be some kind of partnership but didn’t realize it was that intense. Maybe not Nintendo, but I’m sure you could find homebrew game developers who could set you up with a system (and used Nintendo DSes to hopefully keep the spend down) that would work for your museum.
Game Boy and Game Boy Colour were not ARM-based, they used a Z80 clone with some operations removed built by Sharp. The Game Boy Advance was the first ARM-based Nintendo handheld, based on the ARM7TDMI, and included a full Z80 chip for Game Boy backwards compatibility.
gbcpu is not “a Z80 clone with some operations removed”. This is a mistake that has propagated forever.
As far as we know, we (#gbdev) think it was based on some “core” Sharp has for many custom jobs. The actual chip name is LR35902 and should always be referenced as this, or as “gbcpu” or similar.
included a full Z80 chip
No. They included the gbcpu, actually some gbc revision. There is a good chance it will not play some original Game Boy games (I say “good chance” because I have no references, but I’m pretty sure this is fact).
I will be happy to answer any more questions. I love this little device for the nostalgia factor, its history, simple architecture, cheap price point, and retrocoding.
If people made software today like they did during those times, we would have blazing fast applications and way better battery life. It is sad in a way. Our phones could probably run so much longer.
I’m waiting for the day someone also creates an avr-like handheld.
Sorry for oversimplifying. As far as I understood though, the LR35902 is a Z80 derivative with certain operations missing (and a few others added). If this is wrong, could you point to some documentation of how exactly an LR35902 differs from the Z80/8080?
I recently got an ODROID GO, which is a Gameboy-like handheld with a backlit color LCD and an ESP32, which is a really nice MCU with WiFi, Bluetooth and a bunch of GPIO neatly exposed in a sturdy enclosure.
If you google “The Ultimate Game Boy Talk”, there is a diagram in it that shows exactly what is missing and what is extra :)
It is not a derivative though. It's simply based on some internal core Sharp re-uses. As I said, this is misinformation that has been casually spread for a long time now.
Another extremely similar chip from Sharp is SM8521. http://pdf.datasheetcatalog.com/datasheet/Sharp/mXuyzuq.pdf
Not trying to be a smart ass, but ARM7TDMI is (confusingly) an ARMv4 core, ARMv7 is a newer architecture used by ARM Cortex cores.
I will never understand why ARM chose this confusing naming for their cores & architectures, but there it is.
I did not know that. It’s quite interesting to see the internal workings of these devices; especially with what they were able to achieve with them at the time.
One day we will type in a script and the computer will create a movie for us. That day is not today.
I…that day might be sooner than we think! In fact, it could already be possible if we use this as a primitive.
Each description is a frame, or maybe better, a “section” of a scene. Then these are interpolated to create scenes transforming from one to another.
This is really nice, to get some more overview of the fediverse and to find new people to follow.
Thanks a lot!
glad you like it. It feels to me like the good old internet days where you come across quirky communities on the regular.
You know at first I thought “I wish it would show the post’s text”, but when you click through, you discover all the other content that is there. Very cool. I’m actually surprised at the amount of users all over!
You might find this helpful in your journey.
Is there any well known PGP alternative other than this? Based from history, I cannot blindly trust code written by one human being and that is not battle tested.
In any case, props to them for trying to start something. PGP does need to die.
A while ago I found http://minilock.io/ which sounds interesting as a PGP alternative. I haven't used it myself though.
Its primitives and an executable model were also formally verified by Galois using their SAW tool. Quite interesting.
This is mostly a remix, in that the primitives are copied from other software packages. It’s also designed to be run under very boring conditions: running locally on your laptop, encrypting files that you control, in a manual fashion (an attacker can’t submit 2^## plaintexts and observe the results), etc.
Not saying you shouldn’t be ever skeptical about new crypto code, but there is a big difference between this and hobbyist TLS server implementations.
I’m Enchive’s author. You’ve very accurately captured the situation. I didn’t write any of the crypto primitives. Those parts are mature, popular implementations taken from elsewhere. Enchive is mostly about gluing those libraries together with a user interface.
I was (and, to some extent, still am) nervous about Enchive’s message construction. Unlike the primitives, it doesn’t come from an external source, and it was the first time I’ve ever designed something like that. It’s easy to screw up. Having learned a lot since then, if I was designing it today, I’d do it differently.
As you pointed out, Enchive only runs in the most boring circumstances. This allows for a large margin of error. I’ve intentionally oriented Enchive around this boring, offline archive encryption.
I’d love if someone smarter and more knowledgeable than me had written a similar tool — e.g. a cleanly implemented, asymmetric archive encryption tool with passphrase-generated keys. I’d just use that instead. But, since that doesn’t exist (as far as I know), I had to do it myself. Plus I’ve become very dissatisfied with the direction GnuPG has taken, and my confidence in it has dropped.
I did invent the KDF, but it’s nothing more than SHA256 applied over and over on random positions of a large buffer, not really a new primitive.
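For anyone curious what that style of construction looks like, here is a short Python sketch. The buffer size, round count, and fill scheme are illustrative choices, not Enchive's actual parameters:

```python
import hashlib

def memory_hard_kdf(passphrase: bytes, salt: bytes,
                    buf_size: int = 1 << 20, rounds: int = 1 << 16) -> bytes:
    """SHA-256 applied over and over at digest-derived positions of a
    large buffer, so a fast attack has to keep the whole buffer around."""
    digest = hashlib.sha256(passphrase + salt).digest()
    buf = bytearray(buf_size)
    for i in range(0, buf_size, 32):          # fill the buffer from the passphrase
        digest = hashlib.sha256(digest).digest()
        buf[i:i + 32] = digest
    for _ in range(rounds):                   # hop to pseudo-random positions
        pos = int.from_bytes(digest[:8], "big") % (buf_size - 32)
        digest = hashlib.sha256(bytes(buf[pos:pos + 32])).digest()
        buf[pos:pos + 32] = digest
    return digest
```

The buffer size is what imposes a memory cost on an attacker; the round count trades off derivation time, much like an iteration count in PBKDF2.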
It always bothers me when I see the update say it needs over 80 megabytes for something doing crypto. Maybe no problems will show up that leak keys or cause a compromise. That’s a lot of binary, though. I wasn’t giving it my main keypair either. So, I still use GPG to encrypt/decrypt text or zip files I send over untrusted mediums. I use Keybase mostly for extra verification of other people and/or its chat feature.
Something based on nacl/libsodium, in a similar vein to signify, would be pretty nice. asignify does apparently use asymmetric encryption via cryptobox, but I believe it is also written/maintained by one person currently.
https://github.com/stealth/opmsg is a possible alternative.
Then there was Tedu’s reop experiment: https://www.tedunangst.com/flak/post/reop
Moving into a new apartment tomorrow, helping a friend move on sunday.
Though I forgot to rent a truck and now everything is booked… I did find something, but it’s going to cost me.
I’m moving over this weekend as well, and I’m from Florida as opposed to Quebec. If you’re going to move, summer is as good a time as any
I’m moving tomorrow too. Spent the past month packing so I’m not stressed for it. Going to feel like 45C in Montreal. At least it will not be on July 1st when the whole world is moving and feels like 48C.
Good luck to you!
I found this funny so I’d like to mention it: boxes are storage of spacetime. You pack things in the present, you have stored that time (saved it) in the future. It also takes up space. Thus, a box is a spacetime storage medium :D
Thank you and good luck to you as well! I’m starting to move at 7h30 so hopefully I can escape part of the heat..
Yeah it’s a bit… Big. Also, why do all these web “book” authors always refuse to have a “next” button for linear navigation through the reading material…
…You know, like a book.
and max-width: 50em (instead of 25) for main, for people who use user styles. With 50% font-size and increased width, it's pretty nice :)
At work we use YYYY.MM.NN for internal software (NN being a 0-indexed release number for that month).
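A minimal sketch of that numbering (whether the real scheme zero-pads the month is an assumption on my part):

```python
from datetime import date

def next_version(prev, today):
    """YYYY.MM.NN, where NN is a 0-indexed release counter for the month."""
    ym = f"{today.year}.{today.month:02d}"
    if prev and prev.startswith(ym + "."):
        return f"{ym}.{int(prev.rsplit('.', 1)[1]) + 1}"
    return f"{ym}.0"                   # first release of a new month

print(next_version("2018.06.2", date(2018, 6, 30)))   # 2018.06.3
print(next_version("2018.06.2", date(2018, 7, 1)))    # 2018.07.0
```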
I like this for knowing when something was last updated, but it’s not helpful for identifying major changes vs. bugfixes. Perhaps that’s not such a big deal for software that’s on a rapid release cycle.
It’s also not a big deal for software that’s too big or complex for “major change” to be meaningful. If a tiny, rarely used Debian package removes support for a command-line flag, that’s a major change (in the SemVer sense) and since Debian includes that package it’s therefore technically a major change in Debian. But if Debian followed SemVer that strictly, its version number would soon leave Chrome and Firefox in the dust, and version numbers would cease being a useful indicator of change.
Isn’t Debian’s solution to this to not include major-version changes in updates to an existing release? So it does wait for the next major version of Debian to be included, usually
Yep, and this is where the -backports or vendor repos are really useful - newer packages built against the stable release.
It’s why we have to make “stable” releases. Otherwise everyone goes crazy. If someone is updating their SemVer too often, they have bad design or do not care for their developers.
There’s a discussion of CalVer and breaking changes here: https://github.com/mahmoud/calver/issues/4
Short version, “public” API is a bit of a pipe dream and there’s no replacement for reading (and writing) the docs :)
The concept of breaking changes in a public API isn’t really related to ‘read the docs’, except when it comes to compiled in/statically linked libraries.
If you have dependency management at the user end (i.e. via apt/dpkg dependencies that are resolved at install time), you can’t just say “well, install a version that will work, having read the docs and understood what changes when”.
You instead say "I require major version X of package Foo", because no matter what the developer does, Foo version X.y.z will always be backwards compatible - new features might be added in an X.Y release, but that's not a problem if they're added in a backwards-compatible manner (either they're new functionality that has to be opted into, or they don't require extra options to work).
Yes, I know that things like Composer and NPM have a concept of a ‘lock’ file to fix the version to a specific one, but that’s not a solution for anything but internal projects. If you’re installing tools that you aren’t directly developing on yourself using NPM or Composer, you’re doing it wrong.
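That resolution rule is simple enough to sketch. This is a toy resolver to illustrate the idea, not how apt or npm actually implement it:

```python
def newest_compatible(required_major, available):
    """Pick the newest version sharing the required major version:
    under SemVer, any later X.y.z is assumed safe to auto-install."""
    ok = [tuple(map(int, v.split("."))) for v in available
          if v.split(".")[0] == str(required_major)]
    return ".".join(map(str, max(ok))) if ok else None

# 1.10.3 wins over 1.2.0 (numeric, not lexicographic, comparison);
# 2.0.0 is excluded because a new major version may break compatibility.
print(newest_compatible(1, ["1.2.0", "1.10.3", "2.0.0"]))   # 1.10.3
```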
I really don’t see what that has to do with the linked thread. In the very first line, you mention a “public” API. The point is that there’s much less consensus on what constitutes a public API than developers assume. So, you end up having to write/read the docs about what would constitute a “semantic” version change. (Not that docs are a silver bullet, they’re just a necessary part of healthy software development.)
The point is that there’s much less consensus on what constitutes a public API than developers assume.
A comment by you making that same claim on GitHub isn't really evidence of a lack of consensus. What possible definition is there for "public API" besides "something that will be consumed by someone outside the project"?
So, you end up having to write/read the docs about what would constitute a “semantic” version change.
The decision tree for SemVer is two questions, with 3 possible outcomes. And you’ve still ignored my point. Adherence to semver means you can automatically update dependencies independently of the developer.
So, for instance, if the developer depended on a shared library, that happens to have a security vulnerability, when the library author/project releases a new patch version, end-users can get the fix, regardless of what the tool/app developer is doing that week.
The automatic updates work until they don’t. Here is a recent example where Python’s pip broke with many assumptions about public APIs. Your point has not been ignored, I’ve written about it extensively, in the linked thread and in the linked site (and links therein).
As for your closing comment, I’m noticing an important assumption I’m working to uproot: current date is not the only possible date in the release version. manylinux2010 came out in 2018, and is named as much because it’s backwards compatible to 2010.
The Teradata example on the CalVer site also highlights maintaining multiple releases, named after their initial release date. At the consumer level, Windows 98 got updates for years after 2000 came out.
That isn’t a failing of semver, it’s a failing of the developers who didn’t properly identify they had a breaking change.
The same thing would have happened under calver, they would have marked it as a patch release with compatibility to the previous version, regardless of the date component.
Expecting people to just forget about the possibility of automatic dependency updates is like suggesting people forget that coffee exists after they’ve had it daily for 10 years.
Very cool - I really like low-power equipment like this. However, I think it's a terrible idea for security since it looks like it's an unencrypted video stream, which would make eavesdropping trivial.
Full details now available at https://efail.de/
So as far as I see, this isn’t necessarily a bug with PGP “itself”, when used for signing git commits or used in combination with pass, but rather when sending encrypted emails. Or am I wrong?
The first exploit is definitely not PGP's fault.
Unfortunately because I don’t know S/MIME, I can’t comment. But it seems like there is some inherent problem with the second attack affecting both it and PGP.
CBC and CFB encryption modes use the previous blocks when encrypting new blocks. There are some weaknesses, and of course OpenPGP and S/MIME use them. That seems to be part of the problem. The other part is that stitching together multipart messages is something that email clients have no problem with doing, so shit HTML, can result in a query string that exfiltrates the content of the decrypted parts.
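The malleability half of that is easy to demonstrate. Here is a toy in Python: the "cipher" is a made-up SHA-256 counter keystream standing in for any unauthenticated mode, but the splicing trick is the same idea; an attacker who can guess part of the plaintext can rewrite it without the key:

```python
import hashlib

def keystream(key, n):
    # Toy keystream (SHA-256 in counter mode). NOT a real cipher; it just
    # stands in for an unauthenticated mode where ciphertext bit-flips
    # translate directly into plaintext bit-flips.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"recipient's secret key"
plaintext = b'<img src="http://safe.example/pic">'
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# The attacker guesses the URL's position, then XORs in (guess ^ payload)
# to splice in an exfiltration host. No key material needed.
guess, payload = b"http://safe.example/", b"http://evil.example/"
off = 10                                   # offset of the URL in the message
forged = (ciphertext[:off]
          + xor(ciphertext[off:off + len(guess)], xor(guess, payload))
          + ciphertext[off + len(guess):])

print(xor(forged, keystream(key, len(forged))))
# b'<img src="http://evil.example/pic">'
```

This is why authenticated encryption (or at least checking the MDC and refusing to render on failure) matters so much for email clients.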
OpenPGP mitigates those weaknesses with authenticated encryption (MDC). So it’s still only a problem if a broken MUA ignores decryption errors from gpg (or if the email in question is using a very old cipher. so, the attack may work if you auto-load remote content on encrypted emails from before 2000)
I’ve finished the core of my hardware and software Chip-8 emulator, running on an Atmega1284 MCU with an SSD1306 OLED display, buzzer, keypad and SD Card reader. Finally got a launch menu integrated, that allows me to give users options to select emulation quirks using the 4x4 keypad.
Now I just need to go back through the emulator core, double check and test the quirks and I’m ready to implement Superchip support. I’m also going to start looking at laying up the PCB for an initial run. My head says Eagle CAD, as that’s what I’m used to but my heart keeps telling me to use KiCAD. I can fit the emulator in the 1284, but I’m uhmming and ahhing about adding support for a BASIC interpreter, and I really need some external RAM to make the most of that. I’ve been looking at 64k modules, but I might leave that for a later iteration if I can get this to a steady state without the extras.
What’s the battery life?
I’d like to make something very similar to this, but no emulator. Just write fast atmega assembly.
I haven’t had it running on battery yet, but with some clever tickling of timers and the OLED I’m hoping to get about 48-96 hours on 3 x AA batteries.
A single AA is 1.5V, and you're going to need at least 3.3V to run things properly (although I think I can get away with as low as 2.7V). 3 AAs give me 4.5V, which lets me cheap out a little on power management ICs (everything is 3.3V/5V tolerant), although I might use an MCP1702 3.3V LDO regulator.
For this version of the board 3 AAs will be fine, I’ll look at options for using 2x AA or possibly coin batteries with a step-up converter or boost regulator down the line when I’m ready to build a production version. A lot of people forget the original gameboy used 4 AAs.
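For what it's worth, the 48-96 hour estimate is consistent with back-of-envelope numbers. Both figures below are assumptions on my part, not measurements:

```python
capacity_mah = 2000   # typical alkaline AA (cells in series don't add capacity)
avg_draw_ma = 25      # assumed average draw with the OLED and timers throttled
print(capacity_mah / avg_draw_ma, "hours")   # 80.0 hours
```

Average draw is the number that matters: sleeping the MCU between frames and dimming the OLED is what drags it down toward the 20-40 mA range.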
I’ve heard that one reason microcomputers took off in the West while game consoles took off in Japan is that while you can get a lot of work done in English with a 256×240 display, that resolution isn’t high enough to properly display kanji. Likewise, you can fit two full English alphabets plus assorted punctuation in a few kilobytes of character ROM, but the storage required for a decent set of Japanese characters is huge in comparison. So it wasn’t until the early 90s that PCs became useful to the general Japanese public, by which time cultural trends were already set.
I imagine China had similar limitations, plus the whole “cultural revolution” deal on top.
I don't think this was the case, unless there were consoles released before the Famicom in 1986. By 1986, the character limitation problem had already been solved six years earlier:
In 1980, the GB2312 Code of Chinese Graphic Character Set for Information Interchange-Primary Set was created allowing for 99% of contemporary characters to be easily expressed.
So… I think this reason is just an incorrect theory. Didn't Japan have microcomputers anyway?
Character encodings like GB2312 are just an enumerated list of characters, I’m talking about storage space.
The Commodore 64, to pick a reasonably popular Western microcomputer, displayed 8×8 pixel characters. Wikipedia says it had 164 codes with “visible representations”, so it needed to store 8×8×164 = 10496 bits, or 1312 bytes of character data.
It’s not practical to display Chinese characters in 8×8 pixels, I believe 12×12 is the practical minimum. GB2312 has 6,763 characters, which means a Chinese microcomputer based on the same principles as the C64 would need to store 973,872 bits, or 121,734 bytes, or ~119KiB. Not only is that nearly two orders of magnitude larger (with the corresponding increase in production cost), it’s nearly twice the amount of memory that the C64’s 6510 CPU could address, meaning a hypothetical Chinese C64 would also require a much more expensive CPU. Either way, it wouldn’t have been economically possible for such devices to have been as popular as microcomputers were in the West.
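The arithmetic, spelled out (glyph counts and dimensions are the ones quoted above):

```python
def charset_rom_bytes(glyphs, width, height):
    # character ROM at 1 bit per pixel
    return glyphs * width * height // 8

c64    = charset_rom_bytes(164, 8, 8)       # 1,312 bytes
gb2312 = charset_rom_bytes(6763, 12, 12)    # 121,734 bytes, ~119 KiB
print(c64, gb2312, round(gb2312 / c64))     # roughly a 93x increase
```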
I think you are right. Even though they may be able to be “expressed”, they are probably talking about theoretically and not actually on a computer.
Trying to pull foreign-language information out of Google seems to be a lot more difficult now than it used to be, not sure why. I’ve been trying to find the keyboard pinout and protocol for the Japanese PC88 8-bit microcomputer for a few weeks now. Even with Japanese turned on in the (very poorly designed) “languages” screen, it’s not doing a great job at relevance.
edit: as soon as I complain, there it appears: https://electrelic.com/electrelic/node/597
It makes me think we are in need of a less biased search engine. My main concern would be the storage space to archive all these sites. Speed would not really matter as long as we could do it within a year or so. Actually, I guess archive.org is sort of like this?
One of the things you can do is to plug foreign-language technical sources like this and your own findings into archive.org to make sure they are saved. Obviously their crawler can’t be everywhere, so stuff like this (especially in Japan where it seems like the best technical info is available on free ISP webspace) can get lost so easily.
linux on the desktop is 45% stockholm syndrome, 15% wishful thinking, 15% undergrad code shambles, 15% cargo cult microsoft aping, and 10% cynical corporate complexity to sell support contracts.
What’s your preferred alternative? The walled, proprietary gardens of Apple or Microsoft? OpenBSD?
It's a serious question. Debian Stable as a desktop OS is working reasonably well for me. I want a unix-like system - no Windows - which offers broad choice of hardware - no OS X - and it should be free software - one of them. I'd switch to OpenBSD for most of my work, but I need stuff like Docker for work and want reasonable gaming support at home. I could switch between different OSes for different tasks, but why bother? Debian truly is "the universal operating system" for me, even with all its faults.
FWIW, I use FreeBSD on my Laptop. I know it is not ideal but I choose to do it and work through the pain because I can and because I think it’s good to support options.
I just use i3. TrueOS is pushing for Lumina. Gnome is basically Linux only at this point with all of its systemd coupling, from what I understand.
OpenBSD has good Gnome3 support, see here, although note that the instructions mentioned are out of date, it’s best to follow the readme that is installed when you pkg_add gnome.
“systemd coupling” is mostly logind. It’s only really necessary for starting gnome-shell as a Wayland compositor. Someone should try either reimplementing logind for *BSD (there were such projects but I don’t think anyone got it completely working) or adding support for something like my little loginw thing to gnome-shell :) same for kwin_wayland.
I actually use Weston right now, and going to write my own libweston-based compositor eventually… (loginw was created for that)
For X11, both gnome-3.26 and plasma5 should work.
Do you write your own scripts for stuff like volume/backlight control, locking etc? Having used I3 for over a year, this was the least enjoyable part for me because sometimes stuff would break/change/rename and I’d have to fiddle with my scripts.
Yes I’ve been writing my own scripts. I haven’t had any issues with it. But like I said, I’m explicitly deciding to add some pain in my life to support something I think is bigger, so it’s not for everyone. Lumina, though, is a full DE AFAIK so that should handle the things you’ve brought up.
Huh. Because my Linux desktop is peerlessly stable, bears no resemblance to anything Microsoft has released in the past thirty years, and is community developed and supported. I in fact find that the commercial desktop environments are unstable, unusable buggy garbage, and I’ve had the misfortune to have to use both of them fairly significantly.
Don’t confuse “Linux on the desktop” with “GNOME on the desktop” (or, for that matter, “intentionally using unstable software on the desktop”).
The Cambridge Analytica scandal has prompted me to delete Facebook and be much more aware of my privacy. I know that deleting Facebook is a "cool" thing to do now, but it's been a difficult decision. I still had many friends there that I have no other means of contacting. Ads have gotten much scarier recently, perfectly retargeted across services, so I was getting mentally ready for this. But stealing data for political purposes is where I draw the line.
I've also replaced Google with DuckDuckGo, and am planning on changing my email provider too. But I don't know if it's going to be futile. I still shop on Amazon and use many other irreplaceable services like Google Maps.
Again, I’m not a privacy freak. I try to find a middle ground between convenience and privacy, so these changes are hard for me
Any recommendations for a balanced solution?
Whereas I’m about to have to get back on Facebook after being off quite a long time. I’ve simply missed too many opportunities and too much news from local friends and family, since they just refuse to get off it. Once it has them in certain numbers, they find it most convenient to post there. That’s on top of the psychological manipulations Facebook uses to keep them there. I’ll still use alternatives, stay signed out, block JS, etc. for everything I can, but I will have to use it for some things for best effect.
The most interesting thing about leaving, though, was that their schemes got more obvious. They tried to get me back in with fake notifications that had nothing to do with me. They’d look like the ones that pop up when someone responds to you, except I wasn’t in the thread at all. They started with an attractive Hispanic woman I’d never seen, from across the country, whom some friend knew. They gradually expanded to more attractive women on my Facebook whom I haven’t talked to in years or rarely Like (not in my feed much). The next wave involved more friends and family I do talk to a lot. Eventually, the notifications were a mix of the exact people I’d be looking at and folks I’ve at least Liked a lot. I originally got nearly 100 notifications in (a week?) or something; it strained my memory just to keep track. Last time I signed in, there were something like 200-300 of them, which took forever to skim, with only a handful being real messages, given folks knew I was avoiding Facebook.
So, that whole process was creepy as hell, especially watching it go from strangers it apparently thought I’d like to talk to or date, to people I’m cool with, to close friends. A lure much like the Sirens’ song. Fortunately, it didn’t work. Instead, the service’s grip on my family and on social opportunities locally is what might make me get back on. The older forms of leverage, just in a new medium. (sighs)
It kind of depends on what you are trying to prevent, but there are some easy wins:
As of March 2017, US ISPs automatically opt you in to sharing Customer Proprietary Network Information (CPNI), which they can sell to third parties. You can still opt out of this.
Look for a CPNI opt-out with your ISP.
https://duckduckgo.com/?q=cpni+opt+out&t=ffab&ia=web
uBlock Origin / uMatrix are great for blocking tracking systems.
These do affect sites that make their money from ads, however.
Opt out of personalized advertising when possible.
Reddit, Twitter, even Google give you an option for this.
Revoke Unneeded Accesses
https://myaccount.google.com/u/1/permissions
https://myaccount.google.com/u/1/device-activity
https://myaccount.google.com/u/1/privacycheckup
https://twitter.com/settings/applications
Make your browser difficult to fingerprint.
The EFF has a tool called Panopticlick that can show you how common your browser’s fingerprint is. I locked down what I could (there should be instructions on Panopticlick’s site) and added an extension that cycles through various common user-agents. It might sound like overkill, but it’s not onerous to do.
Don’t store long-term cookies.
I actually disabled this, mostly. I still block third-party cookies, but first-party cookies are allowed now. Using a hardware key or password vault makes signing in easy, but ironically the part that killed this for me was more sites supporting 2FA. I use Cookie AutoDelete for Firefox.
Change your DNS provider.
I don’t have a good suggestion for this one. I use Quad9, but I don’t really know enough to say whether or not I trust them.
Unlike an email or web server, a resolving-only DNS server is quite painless to set up. I do this at home and rarely have issues. And if I do, I can reset it on a whim instead of fighting with tech support.
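To give a sense of scale, here’s roughly what that looks like with unbound; this is an illustrative config fragment only (paths and defaults vary by distro), not a complete hardened setup:

```
# /etc/unbound/unbound.conf -- minimal resolving-only resolver
server:
    interface: 127.0.0.1            # listen locally only
    access-control: 127.0.0.0/8 allow
    hide-identity: yes              # don't answer id.server queries
    hide-version: yes
```

After restarting unbound, you point `/etc/resolv.conf` (or your network manager) at 127.0.0.1 and your queries go straight to the root servers instead of through your ISP’s resolver.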
I pay $40/year for Protonmail. It is fantastic.
As for Facebook, why delete? It is actually a benefit to have an online presence tied to your identity, but you need to be careful about what you share. If you don’t claim your online identity, someone else will. This is exactly why I’ve registered my name as a domain and kept it for years now. It is just another “string of evidence” that I am who I say I am on the internet.
My FB is just a profile picture now and nothing else, and my privacy settings are locked down about as far as they go.
When it comes to socializing, there is little you can do to not be tracked. The only thing you can do is “poison the well” with fake information and keep important communication on secure channels (e.g., encrypted email, encrypted chat applications).
I removed Facebook about 6 years ago and recently switched to Firefox beta and DDG. Gmail has had serious sticking power for me, though. I’ve had several fits and starts of switching email over the years but my Gmail is so intertwined with my identity nothing else has ever stuck.
It is possible to switch, I’m sure, but in my case, I have never committed quite enough to pull it off.
When I got off gmail, it took about two years before I wasn’t getting anything useful forwarded to my new identity.
Setting up forwarding was quite painless and everything went smoothly otherwise. The sooner you start…
When I looked into it, everyone was suggesting FastMail if the new service needs longevity and speed. It’s in a Five Eyes country, but it’s usually safest to assume they get your stuff anyway if you’re not using high-security software. The E2E services are nice but might not stick around. I’ve found availability and message integrity to be more important to me than confidentiality.
People can always GPG-encrypt a file with a message if they’re worried about confidentiality. Alternatively, ask me to set up another secure medium. Some do.
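A sketch of that workflow, with the recipient address made up for illustration. Public-key mode needs the other person’s key imported; symmetric mode needs only a shared passphrase, so it’s the easiest to demo:

```shell
# Public-key mode (friend@example.org is a placeholder):
#   gpg --encrypt --armor --recipient friend@example.org message.txt

# Symmetric mode, demonstrated end to end:
printf 'meet at noon\n' > message.txt

gpg --symmetric --armor --batch --pinentry-mode loopback \
    --passphrase 'correct horse' --output message.txt.asc message.txt

# message.txt.asc is plain ASCII, so it can travel over any channel,
# even an untrusted one. The other side decrypts with:
gpg --decrypt --quiet --batch --pinentry-mode loopback \
    --passphrase 'correct horse' message.txt.asc
```

The `--armor` flag is what makes the result pasteable text rather than binary, which matters when the only channel you share with someone is a web form.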
You should add Paperkast to the list of sister sites.
Done. Thank you very much for the suggestion.
Maybe there should be some sort of standardized directory?…
How is https://github.com/lobsters/lobsters/wiki not a standardized directory?
I suppose it is, but it was not obvious to discover.
I also wish I’d discovered it sooner. However, other than nesting it under the “Wiki” link at the bottom of the page, I don’t see a solution that wouldn’t start cluttering up the site with information most people won’t need.
It’s linked from the about page.
Can you expand it a little bit?