I like the shade that this disclosure is throwing:
do not write cryptographic tools in non-type safe languages
At home, I’ve been optimising my compression algorithm.
The encoder hasn’t received much attention yet, but I found a very nice trick for optimising one small part and it doubled the speed of the whole thing, putting it on par with LZ4. With tweaking it should beat LZ4 quite handily.
The decoder is where I’ve been focusing my efforts, and it destroys anything in the open source world. I’ve seen 60% faster with the same or a bit better compression than LZ4 on normal non-degenerate files, and I still have work to do. There’s definitely room for tweaking, and I want to try widening to AVX2. AVX is a bit derp though so I’m not sure if that will work out.
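For anyone curious what an LZ-class decoder’s hot loop looks like: it boils down to alternating literal copies with match copies from earlier in the output. A toy sketch (the token format here is invented for illustration; it’s nothing like my format or LZ4’s actual block format):

```python
def lz_decode(ops):
    """Toy LZ77-style decode: each op is (literals, offset, length).

    Emit the literal bytes, then copy `length` bytes starting `offset`
    bytes back in the already-decoded output. Overlapping copies
    (offset < length) must go byte-by-byte so repeated runs expand.
    """
    out = bytearray()
    for literals, offset, length in ops:
        out += literals
        start = len(out) - offset
        for i in range(length):  # byte-wise copy handles overlap
            out.append(out[start + i])
    return bytes(out)

# "abc" followed by an overlapping 6-byte copy at offset 3 -> "abcabcabc"
print(lz_decode([(b"abc", 3, 6)]))  # b'abcabcabc'
```

The speed games are mostly about replacing that byte-wise inner loop with wide, overshooting copies while still handling the overlap case correctly.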
This is all particularly exciting because LZ4 has the world’s fastest practical LZ encoder (ruling out things like density) and I believe I’m in the same ballpark as the world’s fastest decoder (but with much less compression :<).
Next steps are to become #1 and figure out how to turn it into money.
I believe there’s a ‘documentary’ called Silicon Valley that can be a good starting guide.
I think they even had a real-world, case study toward the end involving life-like hands and cylindrical objects. Might be useful for bicycle games or something.
Good luck with your project!
I’m observing compression-related subjects from the sidelines occasionally. (Lately I started providing some moderately recent Windows binaries for some compression-related tools, that are not widely available.)
Are you perhaps unaware of encode.ru forum? You can look there for some projects that were based on LZ4, like LZ4X or LZ5, and a lot of other interesting stuff, like promising RAZOR - strong LZ-based archiver for instance. You’ll find there knowledgeable people from the field and their insightful posts.
Giving RAD a run for their money eh? :)
Sort of. I’m trying to avoid taking them head on because I won’t win that. AFAIK they don’t put much effort into having super fast encoders, which leaves room for me to focus on end-to-end latency.
The website of the discoverer is https://shiptracker.shodan.io/ but it’s currently dead under load.
It means you don’t have enough karma, I think.
I think it’s 50 karma, but don’t quote me on that.
Yep, it looks like 50.
One of the neatest things about Lobsters is how people can answer questions about rules by referencing published source code.
Amen to this! I’ve had lingering questions at times, but refrained from posting because I looked at the source code!
I sometimes also do the same with Cisco Spark. Before I report an issue, I try to find relevant source code and sometimes even open a PR or two.
Oh how I wish more people did this. Read the source, Luke!
The part that really stood out to me is:
Oh the sweet, sweet irony.
We need to go deeper with the tag suggestions.
I do agree though, it would be nice to filter out the tag suggestion type posts from the meta posts that are usually about the community itself.
All databases suck in their own ways, but historically I’ve found the hype around postgres to be pretty unreasonable. They’ve closed many of the severe operational gaps in the last few years, but you still see a lot of pain when pushed hard. Everyone eventually gets burned by the architecture’s performance limitations, multitenancy deficiencies, vacuum issues, etc… It’s sort of like the redis of the relational world: lots of features, great for developing features against, but gives SREs headaches when pushed into production at orgs with diverse tenant needs / high connection churn / high throughput / high concurrency / etc… Over time it will overcome these, but I don’t trust it with demanding usage for the time being.
Could you expand more, or maybe share a link, about the Redis critique? I’m curious to learn more about that.
I’m a noob when it comes to databases. Could you tell me what alternatives to Postgres there are? I understand that the answer is mainly: it depends. But maybe you could write a couple of “if this then that” alternatives?
I imagine that icefall was comparing it to MySQL. The best thing MySQL has going for it is that many operations teams know how to run it at scale and it scales well. Developers don’t need to spend a ton of time to make their SQL performant, they can just USE INDEX. Postgres is more featureful and “correct”, but it isn’t easy to run at the very high end of performance.
That being said, not many use cases require that high end performance and can get along quite well with Postgres.
But you need to avoid JOINs when using MySQL, especially when involving more than one table.
AFAICT, most software does not run an especially demanding workload.
When the value your software generates is high compared to the (compute, and therefore operational) load it generates, correctness and features are in higher demand.
This is from March 31 of last year, which means it was probably April 1 somewhere in the world when it was sent. So I’m guessing this was a prank?
Yep, it has the satire tag too.
Weird. People like this submission but not the other one? https://lobste.rs/s/drhisn/docker_project_renamed_moby
I guess it’s what people see first and at what time of day they see it?
My knee-jerk reaction is: no, Wi-Fi isn’t going away. But let me present a few Interesting Facts™ about the state of the internet today:
Taking these trends into consideration, it’s not so far-fetched to imagine a world without Wi-Fi.
Where are these facts from?
Username checks out. :P
If this sort of thing interests you and you have 25 minutes to spare, this talk by Tal Oppenheimer contains these facts and many more! https://www.youtube.com/watch?v=Vmg1ECC2r2Q
Edit: if you don’t have much time, here are some bits I found:
On the other hand, these people from developing countries probably don’t have access to super-fast LTE either, so having access to Wi-Fi would be an improvement for them.
This number is lower in developed countries (10%-20%) but is increasing rapidly.
Where do these numbers come from? They seem pretty high given that free Wi-Fi is everywhere nowadays. Are there any “heavy” internet users in that group or is it just people who replaced SMS with WhatsApp and aren’t even aware they’re on the internet now? Those people never really needed Wi-Fi anyways.
You would be quite surprised. I’m in Kenya now and the 4G here is quite a bit better than my Sprint/T-Mobile connection was in the US.
It’s kind of weird that on a rural Kenyan farm, where maybe 4 people in range of the cell tower have smartphones, I can get 4G.
Even if your numbers are accurate, you could just as easily theorise that WiFi usage will increase in developing countries as demand grows.
I really doubt that anyone would want a phone with no wi-fi. If it were the case that you had absolute 4G LTE coverage everywhere you go (with the right modem to support the different frequencies in different parts of the world) and no data cap at an affordable price, then maybe.
I don’t know about other people, but if a phone has no Wi-Fi chip on it, I’m not going to even consider it.
Do you have any examples of manufacturers not putting a Wi-Fi chip in a phone nowadays?
Reading Tarn’s interviews makes me always want to become a video game developer.
“But if it were something harder, like, what if the price of teleportation is uncontrolled nausea for a week and you lose a quarter of your blood, or something like that? I don’t know how much blood people can live without. But you’re just completely out of it for a week or a month. There’s still cases where teleport is valuable. So then you need to teach them sort of a cost/benefit analysis type thing. Which, I don’t want to be too flippant, but it’s not much different than having a different movement value for a forest than a grassland. There’s a cost to this movement, and the cost is, ‘how much do I value my blood? And how much do I value not being sick all the time?’”
The flip side of this is that once you see Dwarf Fortress for graph traversal and topological sort, it loses a lot of its magic.
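The movement-cost idea from the quote maps straight onto weighted shortest-path search: terrain (or teleport side effects) just changes edge weights. A minimal sketch (the terrain names and costs are made up; DF’s actual pathfinding is surely more involved):

```python
import heapq

def dijkstra(graph, start, goal):
    """Cheapest path cost in a weighted graph.

    graph: node -> list of (neighbor, movement_cost).
    Returns the minimum total cost from start to goal, or inf.
    """
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")

# Grassland is cheap, forest slower, "teleport" legal but very costly.
graph = {
    "camp":   [("forest", 3), ("grass", 1), ("keep", 25)],  # 25 = teleport
    "grass":  [("forest", 3), ("keep", 1)],
    "forest": [("keep", 1)],
}
print(dijkstra(graph, "camp", "keep"))  # 2: via grass, not the teleport
```

The cost/benefit reasoning Tarn describes is exactly this: the teleport edge exists, it just rarely wins once its true cost is priced in.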
Physics story time!
In quantum mechanics, there’s this thing called the Schrödinger equation. As an extremely oversimplified description, it says that you can describe the entire quantum system in terms of the “Hamiltonian” operator. It’s a partial differential equation, so really messy to work with, but hypothetically you can reduce everything in quantum mechanics, classical mechanics, chemistry, biology, weather patterns, etc to solving the Hamiltonian. That doesn’t mean, though, that it’s easy. Here’s roughly where we are in terms of complete solutions.
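For reference, the time-dependent Schrödinger equation in its usual form, with the Hamiltonian operator acting on the wavefunction:

```latex
i\hbar \, \frac{\partial}{\partial t} \psi(\mathbf{r}, t) = \hat{H} \, \psi(\mathbf{r}, t)
```

Everything below is about how rarely this can be solved exactly once \(\hat{H}\) describes more than a couple of interacting particles.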
Even with a single unified equation, you very quickly hit systems where you’re pretty much stuck. And that’s just three particles! Once you give up analytic solutions, you’re now in a world of emergent phenomena, where small quantum rules avalanche through a system and lead to bizarre macro-level properties. For example, if you model a metal as a free sea of electrons and add a slight force coming from the ions in the lattice, you suddenly get “forbidden zones” of electron energy, aka band gaps. Then that cascades to make insulators and semiconductors possible, which cascades into transistors, which cascades into, well, computers. So a very slight change in the electron model gives you a universe where I can ramble about my undergrad classes to a complete stranger who may or may not be on the other side of the world.
Dwarf Fortress might just be graph traversal and topological sort. Glass is just a bunch of harmonic springs. Weather is just Newton’s equations spread over a lot of particles. Doesn’t mean that we understand it, can predict it, or don’t find it mysterious and full of wonder.
Funny that the same 1-2-3 pattern holds for Newtonian gravity and orbits.
Single object in empty space: trivial.
Two objects: Kepler’s laws hold precisely.
Star-planet-moon: Well, up to some approximation…
Three stars of comparable masses: oh no not this.
But does having a simple structure underneath weaken or strengthen magic-ness (especially if the details in the next level are carefully thought out)? After all, a digital clock is less magic than a digital clock running on Conway’s Life.
That’s probably a matter of perspective.
Is that different from seeing human relations as applied decision theory?
Which immediately suggests that Tarn should add in irrationality and biases to dwarf logic… assuming he hasn’t already.
Losing my blood probably will hurt a bit immediately and may have serious long-term impacts, but those are quite a bit more difficult to measure so let’s assign that negative value at 1/10th its actual cost.
To be fair, we don’t know that the Universe we’re currently in isn’t much more than graph traversal and topological sorts.
What is the source for that, if I may ask? Not that I doubt you, but I’d be interested in explanations of how DF works under the hood.
I am not really a fan of submissions about Terry; it feels too much like looking at a person with a problem and gawking.
By all means, submit content about TempleOS or his technology, but this sort of thing is not really kind. Also, tag it properly: person.
Ah, didn’t realize there was a person tag. I’ve added it. I see what you mean about “gawking”; I’m not sure there’s a way to get around that. I think TempleOS is incredibly interesting, but I don’t think it can ever be separated from him; in a way, it’s a study of a person. Thinking more on it, that does make it really hard to approach in a way that doesn’t feel like a zoo.
Yeah, I think the author of this article was wrong to write it, for that reason. Not all interesting things are meant for others to consume, you know?
It does feel like gawking into a person’s life, but on the other hand, knowing some context on him as a person does provide context for the OS and how he talks to God.
[Comment removed by author]
Site offline? I couldn’t find a google result that was obviously it and wayback machine didn’t have it.
It’s a pun on the website having word “driven” in it.
Ahh, my head, whoosh!
That was classy, sir!
I’ve experienced my fair share of people leaving because of management.
I interned at a big investment bank in their technology division. I was on the front office software development team. Initially I thought I “made it” and would get to be in the most prestigious part of the company.
Overall my project was awful, the internship was full of “you are lucky to be here” meetings, and 3 teammates out of a team of 9 quit during my summer. I did not accept a full-time offer from that company.
Also, the company had an awful trend of upper-management turnover. Every ~18 months the old upper manager would either leave to work at another investment bank or get promoted. Then the victor of an office-politics bloodbath would emerge as the new upper manager. Said manager would then feel the need to push all of the teams in a brand new direction, canceling all new software development and making everyone else focus on a new buzzword. 18 months later the manager moves on, a new one comes in, and the cycle repeats.
Honestly, I’d suggest Elixir, which is a language that runs in the Erlang VM and has easy interop with Erlang. It’s great for building distributed systems, and is actually a lot of fun to program in.
It’s pretty fast to develop and iterate on, and there are a lot of really mature Erlang and Elixir libraries you can use.
I’ve been searching for an excuse to try it out. One of our team members did a lot of Erlang.
Would the blockchain here be used as a storage database of some sort?
It seems like the concepts from Tahoe-LAFS, where people give up space and some computing power for a “distributed cloud”, would work a little better here, especially considering the awesome drama going on in the Bitcoin community about their block sizes.
Overall it does seem like a pretty cool idea, and I guess having a distributed index that any search engine system can access would make it possible to have multiple implementations of search engines.
It’s not exactly a blockchain, but basically imagine a few things:
1- A global tree that is the index of keywords with sites, etc. We won’t need to retain the actual content of the site, only enough data for the indexing and retrieval of data.
2- Whatever additional input is necessary to support the learning algorithm that runs and serves queries.
That is pretty interesting. You never think of Google as a PaaS or anything like that, in my experience.
I just checked the prices for Google Cloud Platform offerings and it seems, at a glance, that my company would actually save a little bit if they switched from AWS. Doesn’t mean we’ll switch or anything, but it’s a good alternative to have in mind to prevent vendor lock-in.
Well, there is always a risk of vendor lock-in, especially if you start relying on Google-specific technologies: Google BigQuery, Cloud Bigtable, etc.
I think being mindful of whether you are using a vendor standard or an industry standard, whether a vendor solution gives you an edge, and what the vendor lock-in would cost will help you make the right decisions along the way.
I’m super interested in this, but it’s hard to follow. I really wish he’d spent more time elaborating and editing. It took a few reads to even figure out why he would want to do this “and elimination”. I’ll be honest: I’ve never proved anything in my life, but I feel like this article would be near useless to anyone who has. So who is its target demographic? It seems too complicated for a beginner, but it’s structured like it’s for people who know nothing.
It’s useful to someone who knows some formal logic but doesn’t know how that translates into programming. And elimination is a very basic operation when doing proofs. If you want to understand this I’d recommend reading a bit about formal logic first, try to understand formal logic on its own terms before looking at Curry-Howard. A quick google found me http://philosophy.hku.hk/think/pl/meta.php which looks vaguely reasonable.
Practically I’m not sure how many people will know formal logic without knowing anything about programming - philosophy majors maybe? - but also not everyone who knows about both will make the connection between them.
In University I had a Discrete Mathematics course that was very much proof and formal logic oriented.
Unfortunately, the class was taught in the Math department and not the Computer Science or Computer Engineering departments, so the class never really touched on the intersection of formal logic and CS. I wish there were a follow-up course that built on top of the purely math-based one, but unfortunately that never happened.
Did you find the article useful then? Sounds like you’re exactly the target audience.
I see, I was coming from the opposite side :P. Makes sense.
I wrote a post on a similar(-ish) topic a while back. You may find it interesting.
On the one hand I do find it interesting; on the other, the article is somewhat hard to read, and I genuinely found it slowing me down quite a bit. Words like “connective” are used frequently but defined only much later, or never. For example, you define “holes”, but we don’t find out what that word means till much later. What does the “|” represent? It seems like everything has one. In what way is Unit “useless”, and what does that mean? :( I’m sure you worked very hard to make this article accessible, but it’s still quite far from something a layperson can casually read. It’s closer than most I’ve seen, though; I think with some editing, more use of plain English, and a bit of elaboration it could be MUCH simpler.
Those tags seem very flame-war inciting.
I used to be a really heavy vim user; I used it for everything, from note-taking in class to a full IDE replacement. I started to do more and more things in vim, including trying to run a zsh shell from within it, but vimscript was not doing it for me.
I decided to give emacs a shot. I started off with a base configuration and hated it. I also tried out Evil Mode, and I didn’t like it either. I didn’t feel like vim keybindings were that powerful, so I decided to skip Evil Mode and fully immerse myself in emacs.
Now I actually use Emacs-Prelude as my base emacs configuration, and I still have my own small snippets and minor modes that I can’t live without. I really did embrace the Emacs OS idea, and most of the time I have Emacs on one monitor, and Chrome/Slack/terminal on the other.
To be fair, there are a few things that really annoy me about Emacs. If you accidentally run a command in an emacs buffer that produces a lot of output, the output prints really slowly and the whole process basically freezes. I wish there were a permanent fix for that, but oh well.
Emacs and Vim are both great editors for different reasons - the extensibility of Emacs and the speed of editing with vi. We need to unite against the real enemy: nano users. Those guys are proud to be stupid ;)
I thought you were going to suggest going after Sublime Text users instead :)
It’s a guy giving a talk about Emacs at a Vim meetup. I was watching the video waiting for him to be lynched ;)
Evil Mode is a vim mode inside Emacs, so relevant to both.
Round 1… FIGHT!