This is an ideal problem for Prolog.
Running your browser as a different user is an interesting challenge. Under normal circumstances I want to save my bank statement PDFs in my home directory. I want to upload my burrito pictures to Twitter. Slicing this off to a separate user is a significant usability setback.
It does seem like a silly way to approximate better ideas in privilege separation.
Better ideas like those already implemented in Chrome, which uses every means of sandboxing a platform can provide (including seccomp syscall filtering on Linux)?
Running as a non-unique unprivileged user means that user can potentially access much more than was intended. If “nobody” is running both your web and database servers, a compromise in either is a compromise in both.
I think my issue with the OP is more the underlying semantics of the Unix model. A “user” is too heavyweight and coarse an abstraction for handling privilege separation, and carries along with it too much historical baggage. But nobody is doing capabilities, which are IMO the correct mechanism. One muddles along, I suppose.
Creating a unique UID for an operation could be a very clean separation of privileges. How is a different UID too heavyweight? The coarseness is the point: it is an unequivocal separation between the main account and the account of the untrusted process.
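For a sense of what that looks like in practice, here’s a hedged Python sketch that forks and drops to a fresh UID before exec’ing the untrusted program. It must run as root, and UID/GID 60123 is a made-up example, not an allocated account:

```python
import os

# Made-up, unallocated IDs for illustration; a real implementation
# would allocate a fresh UID per untrusted operation.
UNTRUSTED_UID = 60123
UNTRUSTED_GID = 60123

pid = os.fork()
if pid == 0:
    os.setgid(UNTRUSTED_GID)           # drop the group first...
    os.setuid(UNTRUSTED_UID)           # ...then the user (irreversible)
    os.execvp("firefox", ["firefox"])  # untrusted program runs isolated
else:
    os.waitpid(pid, 0)
```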
Mount a FUSE interposer in the sandbox and all kinds of FS behaviors could be proxied through.
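As a rough sketch of the interposer idea (read-only, assuming the Python `fusepy` package; a real sandbox proxy would also mediate writes, permissions, and path policy):

```python
import os
from fuse import FUSE, Operations  # pip install fusepy

class Interposer(Operations):
    """Proxies reads to a backing directory; policy hooks go in each op."""
    def __init__(self, root):
        self.root = root

    def _real(self, path):
        return os.path.join(self.root, path.lstrip("/"))

    def getattr(self, path, fh=None):
        st = os.lstat(self._real(path))
        return {k: getattr(st, k) for k in (
            "st_mode", "st_size", "st_uid", "st_gid",
            "st_atime", "st_mtime", "st_ctime", "st_nlink")}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self._real(path))

    def read(self, path, size, offset, fh):
        # This is where proxy behavior goes: log, rewrite, or deny.
        with open(self._real(path), "rb") as f:
            f.seek(offset)
            return f.read(size)

if __name__ == "__main__":
    # Hypothetical paths: expose /home/untrusted at the sandbox mount.
    FUSE(Interposer("/home/untrusted"), "/mnt/sandbox", foreground=True)
```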
Unix users carry a lot of implicit assumptions about privilege with them. Have you ever tried to do complex access control with UID/GID permissions? It’s a nightmare.
In a world where the default model of computation involves a large number of actual humans proxied through Unix users logging into an 11/750 or a Sparcstation 20, maybe the Unix user model holds. In a world where 99.9999% of computers are single-user at all times, it’s way too heavy and ill-fitting an abstraction.
Anyone able to speak to Pony versus Erlang/OTP?
Erlang. Rock solid. Mature as hell.
Pony. New Kid on the Block. Immature as hell.
Pony draws a lot of inspiration from Erlang and has drawn interest from Erlang fans.
Pony has much better performance than Erlang; it’s designed for both correctness and speed.
Erlang has OTP and has taken care of a ton of issues over the years.
Pony has a very small standard library; its saving grace on the library front is an excellent C FFI.
Pony has a great type system that helps you write safe concurrent code.
People like to compare it to Rust at times because of that.
Erlang comes in both concurrent and distributed versions.
Pony only has the concurrent version at the moment, although plans are in the works for distributed Pony.
Tons of people are running Erlang in production.
Pony not so much.
If you have particular questions, I’d suggest dropping by #ponylang on Freenode or checking out the mailing list.
Could you compare Pony vs Alice?
I like Alice ML. I maintain a GitHub repository to keep Alice ML building. Pony is entirely asynchronous; there are no blocking operations. Some features of Alice ML that Pony doesn’t have, or handles differently:
Some things that Alice lacks compared to Pony:
I would really like to see an Alice ML VM written in Pony, taking advantage of its efficient runtime. That would make for an interesting project.
I’m not familiar with Alice
Pony looks amazing. Do you think a transpiler from Erlang to Pony would be possible? Implement the Erlang runtime in Pony?
This is a really interesting question! (Full disclosure: I’m the Pony language designer, so I have a bias).
Erlang uses immutable data types, and Pony can express those. Erlang has actors, and Pony can express those. Erlang is dynamically typed, but Pony is statically typed. However, using Hindley-Milner unification, it would be possible to generate Pony interfaces (i.e. structural types) that expressed the implicit dynamic typing constraints in an Erlang program. So far, so good.
However, there’s a core semantic difference, which is that Erlang allows a blocking receive that pattern matches on the queue. In contrast, in Pony, all actor behaviours are strictly asynchronous. This would make an Erlang to Pony transcoder tricky - not impossible, just tricky.
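To make the trickiness concrete, here’s an illustrative Python sketch (all names invented; this is not real Pony or Erlang API) of how a blocking selective receive might be encoded on top of strictly asynchronous behaviours: non-matching messages get stashed and replayed once the emulated receive completes.

```python
class AsyncActor:
    """One message queue; behaviours must return immediately."""
    def __init__(self):
        self.stash = []          # messages that didn't match the receive
        self.state = "waiting"   # which emulated 'receive' we're inside

    def behave(self, msg):
        if self.state == "waiting":
            # Erlang: receive {ok, V} -> ... end  (blocks until a match).
            if isinstance(msg, tuple) and msg and msg[0] == "ok":
                self.state = "ready"
                print("matched:", msg[1])
                pending, self.stash = self.stash, []
                for m in pending:       # replay deferred messages
                    self.behave(m)
            else:
                self.stash.append(msg)  # can't block, so defer instead
        else:
            print("handled normally:", msg)

a = AsyncActor()
a.behave(("noise", 1))   # deferred: the emulated receive hasn't matched
a.behave(("ok", 42))     # matches, then replays ("noise", 1)
```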
Implementing an Erlang runtime in Pony would be similarly tricky, but implementing an Erlang runtime using the Pony runtime (which is available as a C library) would be “relatively easy”, where “relatively easy” means quite hard indeed, but with straightforward semantics.
I think the core places where the Pony runtime would be useful to Erlang are garbage collection and code execution. Unlike Erlang, which requires manually terminating actors, in Pony both data structures and the actors themselves are GC’d, using a protocol that allows the runtime to cheaply determine when an actor has no pending messages and can never receive messages in the future. On the execution side, BEAM, while a very impressive VM, doesn’t excel at computational speed, whereas Pony is, surprisingly, a bit faster than C. That’s due mostly to LLVM’s lovely fastcall calling convention, which allows for really good cross-procedure register colouring, plus Pony has aliasing information that’s more useful for optimisations than that available to C.
However, Erlang has things Pony doesn’t have (yet), particularly for distributed computing (although that’s in the pipeline for Pony).
tl;dr: Pony has been heavily influenced by Erlang (along with other awesome languages, like OCaml, F#, E, AmbientTalk, Smalltalk/Newspeak, and many others), but they aren’t semantically equivalent.
Thanks for the overview.
There’s more insight into where Pony is trying to go at: https://github.com/CausalityLtd/ponyc/wiki/Philosophy
That page covers general principles and a hierarchy of concerns for the language.
I’m happy to discuss more if you want.
By no means an expert, but the short version is that pony has the potential to be radically faster than erlang; has a relatively tiny/nonexistent library set; and is currently for the adventurous.
I bet @seantallen could give you a great overview.
I probably could ;)
The BRASS project http://www.darpa.mil/program/building-resource-adaptive-software-systems is another interesting one (not linked from the above page).
He seems like an interesting designer, as well. If anyone in Chicago would like to try out Pandante, let me know and I’ll get a game together.
I bounced off Sirlin’s Pandante (not open ended like Texas Hold'Em) and Puzzle Strike (not sure what happened, just never felt like we had the game in gear), but his Flash Duel and Yomi are two of my desert island games. I’ve played hundreds of games of each and I feel like I’m just getting started.
Just moved from Chicago to Seattle. Otherwise I would love to! By the way, do you know anyone wanting to rent a house near the Fox River?
Just moved from Seattle to Chicago! Enjoy.
edit: Hey! How is my comment off topic and not mempko’s ? Must be a passive aggressive Seattle downvoter.
I upvoted you to cancel-out whoever’s downvote. Welcome to Chicago. PM me if you’re looking for tech meetups or anything else.
What brought you to move to Seattle?
I was planning to move to Seattle in 2014, but ended up finding a better job market in Chicago, and I’m actually quite happy that I landed here (Chicago). With the large finance sector, there’s a very strong adult (30+) job market and it’s nice to be in a big city.
Haxe seems just as compelling as Kotlin. It has
To be clear, this is within the scope of a single Dept. of Social Services procurement for a new Child Welfare System.
This is still huge. Hopefully this will spread to other organizations. Many cul-de-sacs of government are captured by organizations that resell the same low-quality custom software (patient systems, library management, transportation demand management), and often the winning bid is something like 20% below the actual cost to build.
This guy seems overly emotionally invested in the internals of MongoDB.
I find Multicorn and UDFs to be excellent extension mechanisms for PostgreSQL. Whatever gets the job done in the least amount of lines. Have
A quick reading suggests that his company’s complementary product to MongoDB is being threatened by Mongo’s cheerful repackaging of Postgres; that may have something to do with it.
I’m the author, and you’re right, I’m definitely not unbiased!
I have three main biases that I can see: first, I didn’t like the one-sided partner experience I felt at my day job; second, I was a strong proponent for MongoDB to release an Apache 2-licensed BI connector that leveraged open source work I contribute to (which does 100% in-database analytics); and third, I co-founded an open source company based on the premise that relational analytics aren’t a good fit for NoSQL databases.
So yeah, I’m definitely biased. I try not to let those biases cloud my judgement, but I’m no vulcan.
I would have a different opinion of the connector if (a) they had been 100% upfront about the Postgres database and the (severe) limitations of the approach, rather than pounding their chest and omitting the Postgres connection; OR (b) they had released their own connector (even proprietary) that properly dealt with the problem (native support for nested data structures, 100% pushdown, etc.).
They didn’t do either. Which means I can’t get behind their decision. Others may come to very different conclusions, which is fine by me. Agree to disagree. :)
Gotcha gotcha, good luck to you sir. :)
Out of curiosity–what do you mean by 100% pushdown?
Thanks for that! And sorry for the jargon.
By 100% pushdown, I mean that every query is translated into operations on the target system that run entirely inside the database. Without pushdown, you end up sucking data out of the database, and relocating it into another system which actually executes the query.
The whole analytics-via-PostgreSQL-via-FDW-via-Multicorn-via-MongoDB route ends up pulling ALL the data out of ALL the referenced collections for nearly ANY possible query (!).
Which only works if the collections are really, really small. :)
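To make the difference concrete, a hedged sketch (assumes pymongo and a made-up `orders` collection):

```python
from pymongo import MongoClient

orders = MongoClient().shop.orders  # hypothetical database/collection

# 100% pushdown: MongoDB evaluates the predicate; only matches cross the wire.
big = list(orders.find({"total": {"$gt": 100}}))

# No pushdown: the ENTIRE collection is pulled out, then filtered client-side.
big_slow = [doc for doc in orders.find() if doc["total"] > 100]
```

Same result, wildly different amounts of data moved.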
Predicate pushdown is a more common name for the concept, which makes its meaning more obvious. You push predicates down the levels of abstraction closer to the data. Applying predicates reduces result set size, so the sooner you apply them, the less data you have to transfer around to other systems.
But you can also push down other operations. In addition to what @jdegoes said, this shows up a lot in big data type stuff. For example, MapReduce can be done in strictly Map / Shuffle / Reduce phases, but it’s (almost always) better to run the reduce locally on each map node before shuffling the map results over the network.
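A toy word-count sketch of that idea (data and layout invented for illustration):

```python
from collections import Counter

# Words local to each of two hypothetical map nodes.
node_inputs = [
    ["ant", "bee", "ant"],
    ["bee", "bee", "wasp"],
]

# Local reduce (combiner) on each node before any shuffle.
combined = [Counter(words) for words in node_inputs]

# Shuffle + final reduce: only the small per-node counts cross the network.
total = Counter()
for partial in combined:
    total.update(partial)

print(total)  # Counter({'bee': 3, 'ant': 2, 'wasp': 1})
```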
Decoupling Thunderbird development from Firefox development might not be bad for Thunderbird either, if it’s possible to do in any sort of way that doesn’t just result in de-facto killing Thunderbird. The main reason it seems to be so tied in with it is the legacy of Thunderbird having developed out of the old integrated Mozilla suite, which produced all sorts of possibly unnecessary coupling.
I would assert the coupling resulted in the rise of MIME-encoded email, causing email clients to need to display messages formatted in HTML. Certainly I agree the coupling makes more sense in a pre-Gmail world, but Netscape was trying to create an “office suite”-like set of applications for all internet protocols, not just HTTP.
I would assert the coupling resulted in the rise of MIME-encoded email, causing email clients to need to display messages formatted in HTML.
I’m fairly certain that was Outlook.
That was more RTF-based email, which they tried to hang onto using attachments.
They should move to being an Electron/NW/etc.-based application, or even a web app that happens to be running locally. Over-integration killed MS and it could kill Mozilla as well.
I think the runtime and the language and the OS would need to be merged. The JIT or runtime management system would need to weigh the cost of storing and retrieving the result vs re-running the computation. Computations across all invocations would need to be stored, so there would be some security issues. If you have infinite storage and the program runs infinitely long, it will memoize all possible states. Memoize all the things!
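As a toy sketch of that cost model (names invented; a real runtime would also weigh storage size and retrieval cost), here’s a decorator that caches a result only when recomputing it costs more than a threshold:

```python
import functools
import time

def memoize_if_costly(threshold_s=0.001):
    def deco(fn):
        cache = {}
        @functools.wraps(fn)
        def wrapper(*args):
            if args in cache:
                return cache[args]          # retrieving beats re-running
            t0 = time.perf_counter()
            result = fn(*args)
            if time.perf_counter() - t0 > threshold_s:
                cache[args] = result        # expensive enough to store
            return result
        return wrapper
    return deco

@memoize_if_costly()
def slow_square(n):
    time.sleep(0.01)  # stand-in for an expensive computation
    return n * n

print(slow_square(7))  # computed, then cached
print(slow_square(7))  # served from the cache
```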
[Comment removed by author]
The assumption that people do not come to harm because of tech is precisely the crux of the matter. It is a discipline where, sooner or later, your work has the potential to cause harm; if you’re not learning about the ethics of what you’re doing from very early on, you can easily do harm.
bridges are stable because no one cares about driving over the latest and greatest bridge.
And nobody ever wants stuff added to bridges once they are already built. Nor does gravity suddenly change because it was deprecated and replaced by gravity 2.0, which is much better. As a matter of fact, many buildings fail to stay upright when their operating conditions suddenly change (like earthquakes or fires).
The problem behind the instability isn’t (just) budget or bad engineers. It’s the constant demand for change either to the solution itself or to the foundation required by the solution.
I think that the title can and should work for different types of software developers, and that it should require a PE-style certification. Software Engineers should be the people designing and implementing software that runs cars, elevators, healthcare systems or really any shit that can kill people. These people can’t move fast and break things. They can’t ship bugs. That kind of development should absolutely require the rigor that a PE is supposed to guarantee.
Other software developers, like the ones that make up many companies in the Valley don’t need to meet such requirements. People’s lives (thankfully) don’t depend on any code that I, or most others in SV, write. That’s fine. There shouldn’t be any shame in not being a software engineer, and SV companies shouldn’t require a PE to do work for them.
Both titles can exist, but they should absolutely imply different qualifications. Ideally, the term engineer should become less ambiguous than we’ve made it.
I looked around, but can’t find the position paper the ACM published about fifteen years ago, explicitly refusing to participate in creating a certification process for software engineers. It had some very strong words about how no such certification would be able to guarantee or even measure the things that certifications do in other engineering fields. Given that it would be a liability-shunting fiasco that made everything worse by creating a false sense of safety, they felt they shouldn’t be involved.
I notice that they have a professional code of ethics, today, but that’s it.
I’d love to read that.
Here you go - http://www.cs.cmu.edu/~Compose/bok_assessment.pdf
As background, this article is largely the ACM side of a 1999 disagreement between IEEE and ACM over Texas’s move to create a category of licensed software engineers.
In 1997, the IEEE and ACM had formed a joint working group to better define the body of knowledge making up the field of “software engineering”, with the main agreed goals of improving standards in the field, providing guidance to educators, and better establishing software engineering as its own discipline, rather than just a fuzzily defined variant of “computer science”. SWEBOK (SoftWare Engineering Body Of Knowledge) is the acronym for that effort, and both groups initially thought it was a good idea. Where they diverged was over the political question in 1999, when that Texas move unexpectedly arose. IEEE supported engaging with Texas’s process and thought the SWEBOK effort could be used to positively influence it, avoiding the negative outcome of Texas making up its own standards. ACM, by contrast, was strongly opposed to professional licensing, and pulled out of the SWEBOK effort entirely out of fear that it might be used that way, both in Texas and elsewhere.
IEEE went on to approve and publish the document in 2004. The ACM and IEEE also somewhat reconciled over part of the original agenda, and formed a different group in 2001 to develop a set of guidelines specifically for software-engineering curricula, the “Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering”, which were also released in 2004. That side of the agenda has been fairly successful: there are now a lot of software-engineering degree programs, distinct from computer-science programs.
Saw that last night…it’s rather telling that you can’t simply just download the damned thing.
Thanks. I was young enough at the time that that background went over my head; it’s nice to close the loop on it.
Side note: Fortran and Ada are fine languages for their domains. In fact, had Twitter used Ada instead of Ruby we wouldn’t have seen the fail whale. It is a fallacy that new is better or that old is worse.
Previous post https://lobste.rs/s/mmqsoe/opentuner
Isn’t this a fuzzer, not a property-based checker?
I agree that using a property-based checker against a pattern-matching engine would be difficult. Take a look at
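Separately, for a miniature of what property-based checking against a pattern-matching engine can look like, here’s a hedged sketch using the Python `hypothesis` package and the stdlib `re` module; the property is that any escaped string matches itself:

```python
import re
from hypothesis import given, strategies as st  # pip install hypothesis

@given(st.text())
def test_escaped_pattern_matches_itself(s):
    # re.escape neutralizes metacharacters, so this should always match.
    m = re.fullmatch(re.escape(s), s)
    assert m is not None and m.group(0) == s
```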
I’ll respond the same way I did when this was just a blog post: without a definition of boring, this is useless. I find Redis indispensable. I understand it, I like it, and its advantages over the unstructured pile of data that is memcached mean that I will choose it for any project I work on in the future. I despise MySQL and have worked for weeks trying to fix performance issues and plain old badness (and trying to sell those fixes to people who didn’t understand what was happening…) that would not have been issues in a capable database (PostgreSQL).
When does a technology become “not boring?” I mean, Scala is in use at many large companies. It runs on the JVM, which is the definition of boring, right? Why is it not boring? Haskell has been around longer than Python, Ruby, or PHP. Common Lisp has been around even longer (almost as long as C++)! Speaking of which, why aren’t we just using C++? Java wasn’t boring in 1998, but people certainly started using it instead. Or, you know… COBOL! Good, boring history there. If we didn’t spend our innovation tokens on writing Ruby instead of COBOL, maybe we could make better software.
I think boring is relative to the team using it. I wanted to use a KV store on a project, and it would have worked wonders, but it would have been too much for the dev and ops teams; it was not the appropriate technology for that group. “Technology” here can be very generic: if you are on a team where everyone is versed in compiling and parsing, it might make sense to build a small high-performance DSL to solve a particular problem. Others might just use PCRE+JIT, and others might generate some PHP with Python and send it into HHVM. It all depends on how the group can adapt to using a new technology.
That’s fair, but the post mentions the burden of maintenance when the people from the current team have “moved on;” that implies a certain level of lowest common denominator programming, which I think has generally had a bad effect on our industry.
I am reminded of Fire and Motion. Microsoft may be infamous for introducing new frameworks with alarming frequency (purportedly to keep competitors off balance), but the same could probably be said for a series of blog posts about switching to mongonodelastiscala.
Please tell me more about ElasticScalaDis!
The cost of adding a networked computer to something is now low and getting lower, but the cost of making the software that runs on it secure or reliable has stayed high. With engineer salaries having grown like they have, it may actually be getting more expensive. In the long term, businesses are going to wake up to liability and customer satisfaction concerns and stop selling insecure, unreliable “internet of things” devices. But I think we’re in for a few years of zero-days on refrigerators, big invasions of privacy, and maybe some injuries and deaths before this happens.
There’s a reason they call it the Internet of Things Targets. We’ve already felt this with consumer routers.
Interestingly enough, around here we have progressed to the point where you don’t buy your home router… you get a “FREE ROUTER” with your fibre connection.
Actually, the reason it’s free is that, if you watch carefully, every now and then it quietly updates itself and reboots…
i.e. the ISPs have worked out it’s cheaper to bundle a router they can control and update than to handle the service complaints due to hacked routers.
Alas, what worries me more about this story is the implications of it when put together with Snowden’s information.
i.e. the spooks can easily move one very large step beyond just listening…
Another reason for that shift is that ISPs have started realizing it might be valuable in its own right to own & control a distributed network of access points. For example all newer Comcast routers are dual-SSID routers. One of the SSIDs is configurable by the customer as their usual home wifi network, and the other one is locked to SSID ‘xfinity’, serving as part of Comcast’s national wifi network.
I’d like to see entertainment systems standardized and shared between car manufacturers. Why can’t I just get a double/triple/quad din entertainment drop in replacement at my local electronics shop and have it control exactly the same things the previous one did?
In my 1999 car I replaced the single din tape player with a 3rd party one, but had to give up volume buttons. It was worth it.
In my 2003 car I replaced the double din stereo with a 3rd party one, but kept all functionality by getting Pioneer -> ISO -> ISO -> Holden.
Newer cars than that seem to have an all-in-one “iDrive”-style system that controls entertainment and GPS (which is fine) but also air conditioning, electric seats, car internetting, performance mode/suspension, lap timing. I can do without some of those things, but not being able to control the air conditioning, at a minimum, is an absolute deal breaker. Even if you can live with the loss of the other things, it is still going to cripple your resale value. Why do they have to tie everything in together? My friend has a Z4M. The stereo isn’t great, but there is no way he is going to throw out this sort of functionality for a better one.
I just want them to either use standards, so a replacement 3rd-party unit doesn’t downgrade functionality (I know car companies aren’t going to do this), or at least split up the system so that I could replace just the “entertainment system” (basically the screen + stereo tuner). The air conditioning could still be controlled through it, because the “entertainment system” and the air conditioner would talk to each other over a standard interface (USB/ethernet/wifi with a standard open-source “car communications” protocol).
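Entirely hypothetical, but a single message in such an open protocol could be as simple as:

```python
import json

def make_hvac_request(zone: str, target_c: float) -> bytes:
    """Encode a set-temperature request; the schema is invented for illustration."""
    msg = {"unit": "hvac", "op": "set_temp", "zone": zone, "celsius": target_c}
    return json.dumps(msg).encode("utf-8")

print(make_hvac_request("driver", 21.5))
```

Any head unit that speaks the schema could then drive any manufacturer’s HVAC.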
Part of the problems with replacements (in the UK at least) is that they’re easy to steal. One of the large drops in the UK crime rate is because car stereos are now integrated and difficult / impossible to casually take.
A nice(?) side effect is that when considering which car to buy next, you’re more likely to go to the same manufacturer so you don’t have to re-learn a new system for changing radio stations.
I always imagined it was because an average $100 3rd-party stereo is fine for most people and will only resell for, say, $30, so it is only worth stealing a $1000+ 3rd-party stereo. If you are stealing an original stereo, it is only worth it if the stereo is actually good, is usable in your car, and you have (or can crack) the code that locks it to the car/ECU.
Depending on how you look at it, a problem on top of this is that technologies keep on removing the ability to control which version of software they run. On my Android phone, if it decides to upgrade a piece of software and I say yes, I cannot downgrade it even if there is a huge security hole in it. I expect to see IoT being even worse about this.
One of the reasons I loved OS X so much was because it had a user friendly interface that was pretty good but I could dive below it and be a power user. The mobile platforms are not catering to this at all. The counter argument is that it is better because a centralized authority is making sure everyone is up to date. IMO, there is no reason to believe that is true.
engineer salaries having grown like they have
Could you cite? I find maybe a 10% increase (relative to inflation) since 1985.
I hope not connecting stuff that shouldn’t be connected to the net will help in the meantime. Unless they carry their own GSM modules…
This is interesting but the metrics seem arbitrary.
“Bugs” is actually the issue count. If a team is using the issue tracker to coordinate future work, it isn’t a true measure of bugs. Teams that don’t submit issues or don’t test will have lower bug counts.
Shouldn’t you take the size of the commit into consideration? Lots of small commits can make the metric drop, and commit frequency has more to do with the group and the developer than the language.
It might be that certain groups or problem domains have a higher or lower tolerance for issues or value testing differently.
I welcome empirical research in programming languages and development and understand it is really hard. Keep it up!
I counted only bugs, not all issues, and filtered out any repo that had no bugs (i.e. are using another tracker).
You’re right about commit size, I worked under the assumption that while different teams might have different acceptable commit sizes, that would average out across many teams. I don’t really think any given language would have a tendency to different commit sizes, but that would be a great thing to check for the next one!
The code is here if you are curious: https://github.com/steveshogren/github-data/blob/master/src/github_data/core.clj
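For what it’s worth, a tiny sketch (with made-up numbers) of the normalization suggested above; the same bug count can look very different per commit versus per lines changed:

```python
repos = [
    {"name": "repo-a", "bugs": 12, "commits": 400, "lines_changed": 25000},
    {"name": "repo-b", "bugs": 12, "commits": 50,  "lines_changed": 24000},
]

for r in repos:
    per_commit = r["bugs"] / r["commits"]
    per_kloc = r["bugs"] / (r["lines_changed"] / 1000)
    print(f'{r["name"]}: {per_commit:.3f} bugs/commit, '
          f'{per_kloc:.2f} bugs per 1k lines changed')
```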
Desert ant navigation and bee navigation are fascinating! The Wilson link is awesome, and do take a look at the original papers by Wehner.
Also, bees use an optical flow odometer!
Basically, if you are interested in robots, you should definitely look into insect vision, navigation, and behavior work for inspiration on how to build simple, robust circuits that are not general processors, but do very well in niche environments.
The Journal of Experimental Biology you linked to is a treasure trove of ant research. Thanks!
Here are some Bee Optical Flow papers:
I love JEB and I also love Animal Behavior. A search for the keyword “navigation” will get you a nice stack of reading.
consider the cogsci tag
Great suggestion. Can no longer edit.
Ziggurat is a meta-language system that permits programmers to develop Scheme-like macros for languages with nontrivial static semantics, such as C or Java (suitably encoded in an S-expression concrete syntax). Ziggurat permits language designers to construct “towers” of language levels with macros; each level in the tower may have its own static semantics, such as type systems or flow analyses. Crucially, the static semantics of the languages at two adjacent levels in the tower can be connected, allowing improved reasoning power at a higher level to be reflected down to the static semantics of the language level below. We demonstrate the utility of the Ziggurat framework by implementing higher-level language facilities as macros on top of an assembly language, utilizing static semantics such as termination analysis, a polymorphic type system and higher-order flow analysis.