You nailed it. It’s still a clever implementation of the concept, given that it can use cheap, third-party parts. The fact that it seems mind-blowing to non-hardware experts is also good for getting more people to try a project and learn hardware. I’m keeping this one since it might have that effect.
It’s like someone looked deeply into my past.
One thing I disagree with: Why not YAML? I like using that for gamedev, plenty of libraries to handle the parsing for you and minimal bullshit. After all, you need something to define your entities in and chances are that you are using something where it is easier to reload a text file than dynamically recompile your code.
Lua! Reloading a text file and dynamically recompiling your code are equally easy.
json.decode('stuff.json')
loadfile('stuff.lua')
The only caveat is that the Lua file can technically do whatever it wants to the global scope. On the flip side, who cares; this is way easier.
The only caveat is that the Lua file can technically do whatever it wants to the global scope.
Eh; it takes about three additional lines to sandbox it so it can’t touch global scope. <3 Lua for this kind of thing.
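Something along these lines (a sketch, assuming Lua 5.2+ where loadfile takes an environment table; on 5.1 you’d reach for setfenv instead):

    -- Sandbox sketch: give the chunk an empty environment, so anything it
    -- treats as a global lands in `env` instead of the real global table.
    -- (Fine for a pure data file; whitelist functions into env if it needs any.)
    local env = {}
    local chunk, err = loadfile("stuff.lua", "t", env)
    if not chunk then error(err) end
    local stuff = chunk()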
From my experience integrating Lua in two different projects, getting the VM 100% watertight is easy to get wrong. When you get a void* from the VM it can really be any pointer you ever passed to it, so if you have two different kinds of structs you expose to Lua, you’ll have to add magic numbers to catch abuse. The default module loader uses the file system, so you’ll want to replace that with your own (with little documentation or example code on how to do that). More troubles I remember: having multiple wrapper objects for the same native object, and having to maintain lists of Lua objects within the VM, hidden from user code, in order to loop over all X to inject data from the native side or run callbacks.
It was very nice in the end. I even had non-transitive imports (A imports B, B imports C, B can use all objects of C in its scope but only the globals originating from B are visible from A). But it took many, many hours, and I’m not quite sure it saved time overall.
The takeaway from OP is probably: “Just use Unity/C#.” Personally, I like following up on all these interesting problems, like how to integrate Lua, despite it not being the best thing to do right now, because my actual goal is the exploration of technology, not making a game.
When you get a void* from the VM it can really be any pointer you ever passed to it, so if you have two different kinds of structs you expose to Lua, you’ll have to add magic numbers to catch abuse.
I just make sure all my [light]userdata have metatables. Before I cast them in C, I check that the metafield __name is what I gave to luaL_newmetatable. You could thwart this by changing the metatable name to some other valid name, but there isn’t really a way to do that by accident.
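The same tag-check idea, sketched at the Lua level with plain tables standing in for userdata (the actual check happens on the C side against the luaL_newmetatable name, so treat this as an illustration only):

    -- Sketch: the metatable acts as the type tag, verified before use.
    local Vec3_mt  = { __name = "Vec3"  }   -- what luaL_newmetatable would register
    local Thing_mt = { __name = "Thing" }

    local function check_vec3(v)
      local mt = getmetatable(v)
      assert(mt == Vec3_mt, "expected a Vec3, got " .. ((mt and mt.__name) or type(v)))
      return v
    end

    local v = setmetatable({ x = 1, y = 2, z = 3 }, Vec3_mt)
    local t = setmetatable({ label = "box" },       Thing_mt)

    check_vec3(v)        -- ok
    -- check_vec3(t)     -- would raise "expected a Vec3, got Thing"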
I even had non-transitive imports (A imports B, B imports C, B can use all objects of C in its scope but only the globals originating from B are visible from A)
Pretty sure this is standard practice. It’s described in PIL chapter 15.
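The pattern being referred to looks roughly like this (a sketch; entity.lua and the function names are made up):

    -- entity.lua: rough sketch of the PIL-style module pattern.
    -- Everything is declared local; only the names repeated in the
    -- returned table are visible to whoever require()s this module.
    local function new(x, y)   return { x = x, y = y } end
    local function move(e, dx) e.x = e.x + dx          end
    local function clamp(v)    return math.max(0, math.min(v, 100)) end  -- private helper

    return {
      new  = new,   -- every public symbol listed one more time, by hand
      move = move,
    }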
PIL suggests achieving this by explicitly listing every single symbol one more time at the end. I didn’t want that.
ugh, 32-bit compatibility? I can’t see a lot of use for 32-bit x86 compatibility as it seems the push is more and more towards 64-bit (x86_64) applications.
Maybe the thinking is that x86_64 applications, exactly because they’re more recent, are likelier to have devs that can recompile for ARM.
I think it’s pretty naive personally to assume the biggest market for the MacBook Pro is developers.
The biggest market is developers and designers… It’s not the only one, but it is certainly the biggest one.
I’d suggest that the most vocal market is developers and designers. Remember that there are a lot of business and home Mac users who’ve never performed either of those aforementioned roles…
I’d ask the inverse. Like you, it seems to me that devs and designers use Macs at a higher rate than the general population, but we ARE devs! Thus we fall prey to implicit bias.
I’m not really sure I’m suffering from much bias. I don’t own a MacBook Pro and I don’t know anyone personally who does. I’m just betting that the segment of people who call themselves “power users” outnumbers actual devs.
How much memory does a dev need? I’m interested in other people’s RAM breakdown.
Mine:
Editor: 100 MB
Terminal: 20 MB
Desktop system: 200 MB
(Entertainment/Browser: 4 GB)
The issue is not how much memory we need today, but buying the extra RAM to give the machine an extended life. Today 16GB is enough for me. But what about in 5 years? I buy Macs because they are solid machines for far longer than their PC counterparts. When I can’t order more than 16GB of RAM in a $3,000 machine, that means I probably won’t get as much life out of it.
Put it this way: the 4-year-old machines these are replacing were also capped at 16GB. Four years ago, 16GB was enough headroom to future-proof the machine. Today it isn’t.
Let me turn it around:
Apple laptops with 32GB of RAM are going to be (very) rare in 2 years. Makers of software for OS X are overwhelmingly using Apple laptops. This is going to create some backpressure against the pattern of ignoring how much RAM your code uses.
Could someone explain exactly how this works? I have a ticket from Millbrae to San Bruno, my friend has one from Pittsburg/Bay Point to North Concord. So I’m going to end up at San Bruno, and my friend is going to be at North Concord. How are we supposed to swap tickets when we’re not at the same station? Also, even if we did swap tickets, wouldn’t we get kicked off the train since our ticket origins are wrong?
Disclaimer: I have never actually ridden any form of public transportation in my life, so I don’t know how BART train tickets work. I assume that’s the part that’s confusing me.
Tickets are only checked at entry and exit, so even though one traveller (A) has bought a ticket from Millbrae to San Bruno, they don’t get off at San Bruno. Rather, they stay on until they cross paths with the other traveller (B), at which point A and B exchange tickets. Traveller A now has a ticket that allows exit at North Concord and traveller B has one that allows exit at San Bruno.
Does that make sense?
Code reuse via shared libraries seems like more trouble than it’s worth, given how cheap storage has become and how little code actually gets shared in practice. I prefer the Go approach of just statically linking only the code that is actually used.
Shared libraries are not only about storage. They get really useful when you have to apply security patches. Static linking involves rebuilding everything to pull in dependency changes, versus updating a single package and being done with it.
In theory, yes. In practice, you need to update several versions of each package because of the problem being discussed here.
Libraries are not so much about reducing storage requirements but about having a clear boundary where you can plug a new implementation of the same API. Static linking takes that possibility away from the user.
That’s an advantage of dynamically linking your libraries, but not why the industry started doing it. Saving on disk use was a huge factor in the design of software 20 years ago - see also e.g. installing packages system-wide instead of per-user.
I was gonna point out that similar stuff has been proposed before (Web Intents, etc.) but it seems the other posters in this thread did exactly that.
tl;dr find strongly connected components, hash all nodes within a component (I don’t know how the author expects this to work deterministically), then form a hash tree of the components
If this is actually intended for a social graph as the example suggests then I’m afraid it will very quickly degenerate into a case where almost everyone is part of the same SCC.
It sounds like Data.Graph gives that determinism:
Note that the <+> operator is not commutative, so the order in which we do the fold is important. Fortunately, the Data.Graph library takes care of that for us. The nodes returned in a component are sorted in the order we need.
Well, conceptually a graph is a set of nodes and a set of two-tuples representing edges. But in code the sets are probably represented as lists (with a fixed iteration order). So I guess you have “determinism” across runs of the program because the input doesn’t change. But if you provided the same graph with the nodes in a different order, then I think the output of the ‘flatten’ operation would change as well. What is the “canonical” order of nodes in an SCC anyway?
I just realized: while you can’t find a canonical order for the nodes themselves, you can very well find one for the hashes over the primitive fields. They are just integers. You can then replace all references by indices into this sorted list and then hash the whole thing.
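Roughly this, sketched in Lua with a toy hash and a made-up node layout (each node as { fields = {...}, refs = {...} }); ties between equal primitive hashes would still need a deterministic tie-break, so take it as the idea only:

    -- Toy stand-in for a real hash function.
    local function H(s)
      local h = 5381
      for i = 1, #s do h = (h * 33 + s:byte(i)) % 4294967296 end
      return h
    end

    local function primitive_hash(node)
      return H(table.concat(node.fields, ","))   -- node.fields: list of primitive values
    end

    -- Canonicalise one SCC: order nodes by their primitive hash, rewrite
    -- references as indices into that order, then hash the whole thing.
    local function scc_hash(nodes)
      local order = {}
      for _, n in ipairs(nodes) do order[#order + 1] = n end
      table.sort(order, function(a, b) return primitive_hash(a) < primitive_hash(b) end)

      local index = {}                            -- node -> position in canonical order
      for i, n in ipairs(order) do index[n] = i end

      local parts = {}
      for _, n in ipairs(order) do
        local refs = {}
        for _, r in ipairs(n.refs) do refs[#refs + 1] = index[r] end  -- refs assumed to stay inside the SCC
        table.sort(refs)
        parts[#parts + 1] = primitive_hash(n) .. ":" .. table.concat(refs, ",")
      end
      return H(table.concat(parts, ";"))
    end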
Why is there a component step? The process could be to take the primitive fields and generate the primitive hash of each node. In the dependent fields, each node refers to the primitive hash of the nodes it depends on. The primitive fields (or hash) and dependent fields are hashed together to create the dependent hash. Only the dependent hashes are available to the outside system.
I’ve been mentally doodling boxes and arrows for a minute without spotting a way this would fail to capture a change or introduce spurious changes. But I am not a crypto expert, and maybe there’s a neat corner case I’m missing that someone could suggest?
Consider these two graphs:
G1: A = (1,2,B), B = (3,4,C), C=(5).
G2: A = (1,2,B), B = (3,4,C), C=(999).
In the dependent fields, each node refers to the primitive hash of the nodes it depends on.
A’s dependent field will now refer to B via B’s primitive hash, which is the same in G1 and G2. So your dependent hash of A will be the same even though a resource reachable from A has changed.
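Spelled out in code, with string concatenation standing in for hashing just to make the collision visible:

    -- Toy "hash": string tagging is enough to show the problem.
    local function H(s) return "h(" .. s .. ")" end

    -- Scheme from the parent comment: a node's dependent hash covers its own
    -- primitives plus the *primitive* hashes of the nodes it references.
    local function dependent_hash(own_fields, ...)
      return H(H(own_fields) .. "|" .. table.concat({ ... }, "|"))
    end

    local C1, C2 = H("5"), H("999")       -- C's primitive hash does change between G1 and G2
    local B1, B2 = H("3,4"), H("3,4")     -- ...but B's primitive hash ignores C entirely
    local A1 = dependent_hash("1,2", B1)  -- G1
    local A2 = dependent_hash("1,2", B2)  -- G2

    assert(C1 ~= C2 and A1 == A2)  -- A's dependent hash is unchanged although C changed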
Aren’t “insanely fast” and “plugins communicating through pipes rather than scripting” contrary design goals? For a plugin that, say, reindents all the code in a buffer, I would be worried that just sending the reindented buffer content over the pipe could kill the performance benefits of a fast core engine. On the other hand, if you let people load code on the editor side, it’s easy to have this code just marshal whatever back and forth to an external process (a lot of Emacs modes do this) if that’s the best design choice.
It’s the time you have per frame if you want to hit 60 frames per second: 1000 ms / 60 ≈ 16.7 ms. A.k.a. as fast as vsync-ly possible.
I find I’m a lot more productive when I disconnect my computer from the internet so it’s helpful to have offline docs.
I think software-based process isolation is just as valid as hardware-based isolation (MMU). Midori followed the same path.
I’d argue we’re already at the point of being dependent on algorithms for many decisions. Just shut down Google for a day and see what happens.
Behind every computer algorithm is a programmer. And behind that programmer is a strategy set by people with business and political motives.
It’s upsetting how some people deny even the existence of non-commercial programming.
The danger this author is talking about is that of an oracle. Computers were oracles before AI got big. Even newspapers are, in a way: you can’t see how they operate, how they rate and aggregate incoming news, and of course there is potential for abuse there too, since many people trust their newspapers.
Also, damn click-bait headlines.
Nice point about oracles. Systems that drive our decisions and that we don’t fully understand have been around for a long time. I think the difference now is both the effectiveness and the immediacy of these oracles.
I hate to face the fact that many people don’t even understand folders, and this desktop concept seems to cater primarily to those people. A lot of effort visibly went into it, but I’m not convinced.
The window management idea, though, I do like; more than current tiling window managers, anyway. I came up with something similar independently, though vertical and for windows that have their most relevant content at the bottom, such as terminals or chat windows (they can be compressed, leaving only the lower part).
They describe some more interesting properties of their system here: http://zerocash-project.org/q_and_a#what-about-the-balance-between-accountability-and-privacy
It can, for example, let users prove that they paid the taxes due on all transactions, without revealing those transactions, their amounts, or even the amount of taxes paid. As long as the policy can be specified by efficient “nondeterministic” computation, it can (in principle) be enforced using zk-SNARKs and added to Zerocash.
How does this compare to Google Closure Compiler?
Google Closure Compiler focuses on the size of the JS, whereas Prepack focuses on speed.