Build discovery engines instead of search engines. Search minimizes the diversity of results; discovery maximizes diversity under similarity constraints. That's a much more interesting problem: which similarity and diversity constraints are appropriate?
As an example of the limitations of search: searching for a document by typing the whole thing in will just give you back that same document. Queries to a discovery engine would likely not be short strings, but entire documents.
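One way to make "diversity under similarity constraints" concrete is maximal marginal relevance (MMR): greedily pick results that are similar to the query but dissimilar to results already picked. A toy sketch (the word-overlap similarity and the lambda weight are illustrative choices, not anything a real engine is committed to):

```python
# Toy maximal-marginal-relevance (MMR) ranking: trade off similarity
# to the query against redundancy with already-selected results.
def mmr(query, docs, similarity, k=3, lam=0.7):
    selected = []
    candidates = list(docs)
    while candidates and len(selected) < k:
        def score(d):
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * similarity(d, query) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Crude word-overlap similarity, just for the demo.
def jaccard(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

docs = [
    "rust memory safety borrow checker",
    "rust borrow checker lifetimes",
    "garbage collection in java",
    "memory safety via garbage collection",
]
print(mmr("memory safety", docs, jaccard, k=2))
```

With k=2 it picks the two documents that both match the query and come from different "clusters", rather than two near-duplicates.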
Do you have an example of such an engine?
e.g. pubmed recommends related articles.
I haven’t seen anything net-wide, tho.
I would say that, although problematic, YouTube's recommendation system is an example of a discovery engine.
Decentralise it. I want a completely subjective web. I want to subscribe to people whose views and opinions I like, and when I 'search' I want a breadth-first search of all their content, spending increasingly more search resources on whatever is more semantically relevant.
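That allocation of effort could be sketched as a best-first traversal of the follow graph, spending a fixed search budget on whatever looks most relevant first. The graph, relevance scores, and budget below are all made up for illustration:

```python
import heapq

# Sketch of the "subjective web" search: start from people I subscribe to,
# explore their content best-first, spending more of a fixed budget on
# branches that look more semantically relevant.
def subjective_search(start_nodes, neighbours, relevance, budget=10):
    # Python's heapq is a min-heap, so negate scores for max-first order.
    frontier = [(-relevance(n), n) for n in start_nodes]
    heapq.heapify(frontier)
    seen = set(n for _, n in frontier)
    results = []
    while frontier and budget > 0:
        _, node = heapq.heappop(frontier)
        budget -= 1
        results.append(node)
        for nxt in neighbours(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-relevance(nxt), nxt))
    return results

# Tiny invented follow graph and relevance scores.
graph = {"alice": ["alice/post1", "bob"], "bob": ["bob/post1"],
         "alice/post1": [], "bob/post1": []}
scores = {"alice": 0.9, "bob": 0.5, "alice/post1": 0.8, "bob/post1": 0.2}
print(subjective_search(["alice", "bob"], graph.get, scores.get, budget=3))
```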
Huh, that’d be interesting. I’d imagine there’d be “major” nodes (Wikipedia could host their own, for example) or…actually, maybe not. Nodes would be as big as the owner’s time and server/database size.
I like this idea.
EDIT: Seems like you’ve described YaCy (I just learned of this too).
Now you have two problems!
This sounds like a friend-to-friend version of yacy. I’d use it!
If the two PCs are connected to separate APs, Syncthing might route the traffic via a relay node on the internet.
Fantastic!! A personal website is a great way to own your identity online and take back your web experience from the big social media silos. So I’m really pleased when I see people making websites!
Mine’s pretty minimal, which is how I like it. :-)
It’s a bit of info about me and some of my projects/writing. One thing I’m really happy with is my photography journal: https://dn.ht/journal/
Hosted on S3 with cloudfront, so it only costs a few cents a month to host.
I keep the tooling really light so that it’s easy to maintain. Most pages are hand written HTML and CSS.
I’ve found that it’s really easy to procrastinate on tooling and tech stack. Starting with almost no tech helps me get-on-with-it. My advice is “just start”.
The photo journal is statically generated using an ERB template and a makefile. It generates html from a directory full of photos exported out of Adobe lightroom. It’s a tiny bit awkward sometimes, but it fits in really well with my personal photography workflow.
My design aesthetic is quite minimal. For example, with my photo journal I was really annoyed by how small and heavily compressed images on Instagram are, so I wanted something where people can see my photos at large size / high resolution.
I’m using a monospace font called ISO, which was originally designed for the website exif.co so it’s got a vintage camera feel but fits in with my programming interests too.
A lot of beautiful photos in that journal.
That’s what I came to say too. The photos are amazing.
Thank you very much! 😊
I really like your photo journal idea!
It actually looks much better in practice than I'd imagined. I've been struggling with a photo journal on my own blog, the biggest challenge being how to structure everything so it would be accessible and beautiful. However, it turns out that just dumping everything can be quite sexy!
If you’re interested in my approach I’ve decided to simply redirect to Pixelfed which has a great profile page design. The thumbnails are rather small compared to what you want but it doesn’t crop or compress anything!
Oh yeah pixelfed looks like a great insta alternative. Something I’d be keen to work out is how to allow folks to subscribe to my journal. One of the downsides of being independent is it’s hard to fit in with the social media sites people use. Is rss still the way to do it? Or can I somehow integrate into the mastodon/pixelfed fediverse..?
Pixelfed does support rss and federation. Probably some other sub methods too!
Picklecat is great!
Oh my gosh picklecat!
Don’t forget about https://github.com/muesli/beehive
An open source ifttt clone. I had it running on my raspberry for some time. 👍
Oh wow, this is incredible! The UX is better than IFTTT in some ways too. Thanks for sharing!
If you just want to block third party scripts, isn’t Privacy Badger enough?
Privacy Badger only blocks trackers, not potentially malicious JS that you load from some random blog.
works for me.
I use “HTML” for writing the page, IPFS+Cloudflare for distribution. Pinning the website hash on a raspberry pi.
Edit: Using gandi’s cli to update the dns record.
Payment services will continue to grow and diversify for some time until we end up with a duopoly or similar there as well.
I tried budgeting with a variety of tools but ended up asking myself why do it at all? I personally wanted to keep my spending under control and put money aside for emergencies and the payments I make once a year. I can't tell you how much I spent on groceries in December 2017, but I really don't care ;-)
I know this isn’t an answer to your question but in case it might work for you please let me know and I’ll share the details of my system.
How do you control your spending?
What do you mean? Specifically knowing how much more you can spend in a given week to have spent less than X? Or dedicating Y amount per month to Z? The latter is simple: transfer money to savings the day you get paid, or toward whatever thing you are budgeting for. If that’s not possible, you’ll need to start cutting costs before budgeting will do anything for you anyway.
I withdraw money in cash and put it in envelopes. I have one envelope for each category of spending, plus a debit card for groceries (a sort of virtual envelope), plus a separate account for paying rent, kindergarten, etc.
If there's $100 in an envelope then I obviously can't spend more. I'd have to consciously take money from another envelope.
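The mechanics of the envelope system are simple enough to model in a few lines; this toy sketch (category names and amounts invented) shows why overspending forces a conscious transfer:

```python
# Toy model of the envelope system: each category gets a fixed cash
# allowance, and overspending forces an explicit transfer between envelopes.
class Envelopes:
    def __init__(self, allowances):
        self.balances = dict(allowances)

    def spend(self, category, amount):
        if amount > self.balances[category]:
            raise ValueError(f"only {self.balances[category]} left in {category}")
        self.balances[category] -= amount

    def transfer(self, src, dst, amount):
        # The deliberate, conscious step: moving money between envelopes.
        self.spend(src, amount)
        self.balances[dst] += amount

env = Envelopes({"groceries": 100, "fun": 50})
env.spend("groceries", 80)
```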
That sounds like the system GoodBudget is trying to emulate. 👍
Until recently I used Goodbudget. It's really good, and a paid service. I don't know about their privacy rules though.
What I have started doing recently is connecting to my bank's open PSD2 API and sending myself a message every day with info about my finances, to nudge my spending in the right direction.
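The shape of that daily nudge might look like the sketch below. Real PSD2 APIs differ per bank (OAuth consent flows, bank-specific endpoints), so this skips the fetch step and just shows the summarising on made-up transactions:

```python
from datetime import date

# Summarise one day of transactions into a short nudge message.
# The per-day budget and transaction format are invented for the example.
def daily_summary(transactions, budget_per_day=50):
    spent = sum(-t["amount"] for t in transactions if t["amount"] < 0)
    verdict = "under" if spent <= budget_per_day else "OVER"
    return f"{date.today()}: spent {spent:.2f} ({verdict} your {budget_per_day} target)"

txs = [{"amount": -12.50, "text": "coffee"},
       {"amount": -30.00, "text": "groceries"},
       {"amount": 500.00, "text": "salary"}]
msg = daily_summary(txs)
```

The string could then be handed to whatever messaging channel you already read every day.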
The e2e encrypted chat has a subscription model. I wish this were the private communication platform it claims to be. The closest we have now is keybase, which is not optimal either
A subscription model itself is not bad. But if they are selling themselves as a private communication network, it’s strange that the only privacy friendly piece is the one that costs extra.
I’m interested in your opinion on the deficiencies in keybase. What features would you change or add?
If I were to introduce Keybase as a communication platform for my less tech-interested friends and family, it would have to look more like what MeWe looks like: a chat/social network/shared photos.
Fully hipster compliant if you ask me :D
C’s a little too mainstream.
So I’m porting my code to this obscure assembler. You’ve probably never heard of it.
Were you doing a Leon3 port, too?
All these heavyweight, closed, possibly-backdoored CPUs people are using these days for BCHS stacks. I'd rather just macro-assemble it on a GPL'd CPU I can customize and understand. Open cores also let you do other neat things to improve security later on.
This is a pretty good post.
The author should spend a little more time on the distinctions between IVs and nonces (this is a problem in the literature as well) because the constraints on both are subtly different. An IV is an implied first ciphertext block, and in CBC it needs to be unpredictable. A nonce is a number used just once; it is less important that a nonce be unpredictable, and in fact in some constructions (GCM being a good example) a random nonce can be problematic.
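A toy demonstration of why "used just once" matters in CTR-style modes like GCM. The hash-based keystream below is NOT a real cipher, just a model of the structure: the keystream depends only on (key, nonce), so reusing a nonce lets an attacker cancel it entirely.

```python
import hashlib

# Toy counter-mode stream cipher (NOT real crypto). Keystream blocks are
# derived from (key, nonce, counter), mimicking the structure of CTR/GCM.
def keystream(key, nonce, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, msg):
    ks = keystream(key, nonce, len(msg))
    return bytes(m ^ k for m, k in zip(msg, ks))

key, nonce = b"secret key", b"nonce-0"
c1 = encrypt(key, nonce, b"attack at dawn")
c2 = encrypt(key, nonce, b"retreat to fort")   # same nonce: bad!
# XOR of the two ciphertexts equals XOR of the plaintexts -- no key needed.
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

In real GCM the damage is even worse: nonce reuse also leaks the authentication key.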
I’d also nitpick that there are probably much more important common developer crypto mistakes that should push out, for instance, not using password hashes or having incoherent crypto designs. For instance:
Directly using RSA to encrypt plaintext (and, relatedly, using RSA without secure padding).
Failing to authenticate associated data (such as the IV of a CBC ciphertext).
Compressing before encrypting.
I might also, instead of recommending RSA-2048 and discussing key sizes, just push people towards Curve25519.
I wish I could just tell everyone to use Curve25519, but unfortunately as long as FIPS is still barring it I don't think it will get adopted at the rate we all want.
Directly using RSA to encrypt plaintext
I’ll be the dumdum here and ask why you should not do this. I see that encrypting a symmetric key for the message using RSA is recommended instead. Why? :)
A few reasons. One: you can only encrypt things with RSA up to the size of the key, so you simply can't encrypt a large message in a single shot. You might design some sort of multi-block RSA encryption scheme, but then the problem you face is that RSA encryption is significantly slower than a symmetric cipher like AES.
Finally, I’d like to note that in general I think people should be skeptical of designs that involve encrypting anything with long-term RSA keys: https://alexgaynor.net/2017/apr/26/forward-secrecy-is-the-most-important-thing/
Also: the amount you can encrypt per “block” is deceptive, because there’s an amount of padding necessary for security, and encrypting correlated bits under RSA makes error oracle attacks more feasible. There is in practice virtually never a reason to encrypt directly with RSA.
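A toy textbook-RSA example (tiny primes, no padding; exactly what the thread warns against shipping) makes the size limit concrete:

```python
# Toy textbook RSA with classic small-prime parameters, purely to show
# the size limit: the message, interpreted as an integer, must be < n.
p, q = 61, 53
n = p * q          # 3233: messages must be integers below this
e, d = 17, 413     # e*d == 1 (mod lcm(p-1, q-1) = 780)

def rsa_encrypt(m):
    if m >= n:
        raise ValueError("message too large for this key")
    return pow(m, e, n)

assert pow(rsa_encrypt(65), d, n) == 65   # round-trips
# rsa_encrypt(5000) would raise -- hence hybrid schemes: encrypt a small
# symmetric key with RSA, and the bulk data with a cipher like AES.
```

A real 2048-bit key caps the payload at under 256 bytes, minus padding, which is why hybrid encryption is the norm.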
Add 2015 to title?
You can do that by clicking “suggest” and adding it.
I would if I knew how/could. Perhaps a mod can do it?
I currently use fossil for a side-project, but I've been keeping an eye on self-hosted git servers, mostly for easier GitHub mirroring. gitly is the one that currently has my interest, as it's the only git server that seems to have features close to what fossil has and is built to run on servers with less than 1 GB of RAM (as opposed to Gogs or Gitlab).
I wonder what it would take to rectify these things in fossil itself?
Does Gogs (or Gitea) require that much resources?
Not for basic usage, but gogs (at least) has other strange performance issues. For instance, every time the file tree for a repository is loaded, it checks the git history for the last commit that modified each file. On tiny repos this isn't too bad. However, with a 100-200 file ASP.net website, the file tree (the front page for a repository) can take 5-7 seconds to open, with no load on the server. I don't want to put something like that in front of the internet. Gogs doesn't target a low-resource use case, whereas fossil does. And, according to their press release, it looks like gitly does as well.
I know this isn't as turnkey as Fossil, but if you set Gitea (and I assume Gogs) to use any form of cache other than the 60-second in-memory one, the file tree shouldn't be as expensive. It's been a while since I played with this, but I remember hitting and quickly resolving the same issue.
https://github.com/gogits/gogs/issues/1518 is an issue covering it in Gogs.
Looks like gitea is still in the process of working on it as well, or at least something similar https://github.com/go-gitea/gitea/issues/502
Is the caching that you’re talking about something akin to 20-minute caching in memory, or do you need to go for redis or memcached for this?
They say that you can run it at a basic level on a Raspberry Pi, but that a 2 core / 1GB RAM server is the baseline for teamwork. That’s not very descriptive, to be honest, but on my 512 MB ram server, I’m not exactly curious to find out, since fossil is currently working for me (if there are compelling reasons to switch away from fossil, I will). It could be that 1 GB is just for teams of 20+ people, or just a comparison based on bottom tier Linode instead of something like Ramnode or Vultr, but I don’t know.
If Gitea were 100% Go based, I’d suspect that the 1GB is actually plenty of headroom, but it also interacts with git, and currently has the slightly strange file tree browsing issues that haven’t been locked down yet (as mentioned in other comments).
This is in strong contrast to fossil, which explains how it can be used in a shared-host CGI environment. It also explains which actions can be rather slow (mostly building tarballs/zip archives, and it offers a cache for those).
It also helps that the default page for fossil doesn’t show a directory listing, but that’s another discussion.
I just looked at gitly and it seems super shady. Author claims it’s open source, but where’s the source code? The link to file a bug is a web form, why not an issue tracker?
I'm watching it with interest, but I'm not sold on it yet. I won't be using it until it is properly open sourced, but the claims he is making are worth watching.
As far as the lack of source and the use of a web form, I think it’s a bit early to call it shady rather than just MVP, at least for now. It’s not confidence inspiring, to be sure, but I’ll give him the benefit of the doubt until April 10th. Fossil isn’t going anywhere, and is serving my side-project needs just fine for now.
EDIT: It looks like one of the example repositories, from Tensorflow, has been removed in some fashion. Something does seem to be amiss.
I don’t know. To me, MVP would be a GitHub / GitLab / Bitbucket repo with source and an issue tracker, not a marketing web site. But who knows, it’s an interesting pitch in any event. If the guy pulls it off it will be pretty useful.
The price!?
Is there something wrong with my browser or were there just three sets of numbers on that blog post? How about most downloaded apk and many other interesting numbers?
For privacy reasons, F-Droid does not count or even log APK downloads. The best they can do is count listings, lest they risk storing user data. As both a user and an author of an actively maintained app on there, I am okay with the tradeoff.
Not just you. I’m also curious about which particular apps are popular, not just the categories.
Excellent! This is what Matrix was missing