The war is real, but it’s not new. Most of these cross-platform plays emerged in 2011 (that’s 7 years ago, if you’re keeping score). Spoiler: Google still retains control of Android.
While some came out 7 years ago, they weren’t ready (and probably still aren’t). Flutter is in beta and React Native still has some sharp edges. I think the author’s points still stand.
Me too, but maybe for a different reason. After all these years, I still love how the Internet brings us together from so many places. If I’d known about all the Canadians, I might have tried to get together in chat or in a game to see what we could learn of each other’s cultures. I always do that when I meet people from other countries in online games.
I used a Raspberry Pi for this until I took a hard look at what I was storing. I became more digitally minimalistic and have been happy using a private GitLab repo with git-secret for encryption.
I find this to be a very poor argument. In this oversimplified case we at least have an answer (“Toronto”). If a human had made the mistake, we wouldn’t even have that. We can also then improve the AI to prevent another “Toronto” from occurring.
Overall, AI doesn’t have to do a task perfectly to be preferable. It only has to be better than the average human.
It only has to be better than the average human.
No. If a human screws up and kills someone, they are likely to go to jail. If a robotic car kills someone, the car won’t go to jail, the people who made billions from the car’s construction won’t go to jail, the engineers who wrote the awful software won’t go to jail, and the person who was irresponsible enough to use a self-driving car won’t go to jail. It had better be perfect.
From my experience, creating recursive data structures is quite difficult and requires a good handle on the borrow checker. I had a similar experience of early success and a sudden halt when I reached this point. It’s also difficult to know, once you have something working, whether it would be considered idiomatic. I don’t see this as a fault of the Rust language or community.
Great is probably a strong word for a language that doesn’t support parallelism (unless you are on an alternative runtime). Maybe Guilds will fix this, but they won’t be ready in the near future.
I do enjoy and write quite a bit of Ruby.
I agree. Ruby has missed the train with its near-total lack of concurrency support. Also, its hyper-dynamic and flexible nature, which seems like a productivity boost in the beginning, often ends up being a maintenance nightmare, especially since software is getting more and more complex every day.
I also write a lot of Ruby for a living, but I am not blind.
So I’ve been encouraging people to switch to ThinkPads, but since they’ve gone off trying to replicate the MacBook, what hardware do people recommend?
The T460 is probably going to be my last ThinkPad, I think, if they keep going down this “no FRU” path.
The issue for me with the XPS is that they don’t offer a non-touchscreen version with 16GB of RAM (you’re stuck at 8GB).
Provided you’re comparing to the 13" macbooks.
The current selection in 10-11" laptops is disgraceful. I can’t find anything that has enough RAM and won’t tip backwards, other than the MacBook Air and the 2015 MacBook.
Yup! And there’s a version that comes from the factory running Ubuntu because it’s part of Dell’s Project Sputnik.
It’s pretty nice, barring some really annoying design decisions:
I personally don’t think there has been a great ThinkPad since the T61 (2007). I used mine until late 2013 when I got a MacBook.
I wish they kept the legacy going, those were some truly beautiful laptops.
I’ve been really happy with my Surface Book. Wonderful screen; touch is one of those little things you don’t use much, but it makes things better when you do (likewise the pen, for signing PDF forms); the keyboard feels great to me (but I like a light touch and short travel, others may disagree); the first-party dock is immensely practical; battery life is plenty; the other specs are good enough.
No. Was planning to try FreeBSD on it but then I found WSL worked really well for what I needed and I couldn’t be bothered. There’s a community on reddit (SurfaceLinux) and I’ve heard some positive things, but don’t know the details.
The HP Spectre 13 ended up being my pick recently. Very happy with it. Best keyboard I’ve had in years, and it’s blooming quick too.
I don’t think the problem is the hardware. There is lots of great PC hardware out there. Maybe not comparable on build quality, trackpad, and battery, but hardware that has other things going for it.
The problem is that there is no desktop OS that compares to macOS. This is especially true for laptops.
I use macOS for work and Windows 10 at home. I really don’t see any real difference beyond user preference.
What about user experience, an intuitive interface, and generally better design?
I don’t use Windows myself, but I help others a fair bit with their Windows machines, and nothing feels smooth or intuitive. The only thing I like is the combined menubar+dock. I loathe the macOS dock.
I don’t think anything about Windows is unintuitive. Windows acts largely like it has forever (aside from the Windows 8 start menu/Metro thing). There’s nothing difficult about it. The macOS dock is bad, and I also think Launchpad is terrible. Finder is slower on my 2015 MBP (512GB/16GB/i7) than Cortana/search is on my Windows 10 desktop (512GB/8GB/i5). Not by much, but it’s noticeable. Both have SSDs.
Generally better design is completely subjective. I happen to prefer Windows 10’s looks to macOS’s. Different strokes!
Building a replacement for MDP (a terminal presentation tool) in Rust. I have code blocks with syntax highlighting and most of the element types sorted; now I’m working on parsing and auto margins.
Trying to decide whether tables, footnotes, etc. should be in the first release; open to opinions.
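For the curious, the auto-margin part basically amounts to centering the widest line of a slide within the terminal. A rough Python sketch of the idea (the tool itself is in Rust, and the names here are invented):

```python
def auto_margin(lines, term_width):
    """Center a block of text horizontally in a terminal of the given width."""
    block_width = max(len(line) for line in lines)
    # Clamp at 0 so slides wider than the terminal are simply left-aligned.
    left = max((term_width - block_width) // 2, 0)
    return [" " * left + line for line in lines]

slide = ["# Hello", "", "a two-line slide"]
for line in auto_margin(slide, 40):
    print(line)
```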
An article about avoiding page shift that itself shifts horribly, from a web design magazine with a terrible design. What happened to Smashing Magazine? I remember it being pretty good.
Ha ha, yeah. It did that for me about 5 seconds after load, right about the time I stopped watching for shit to randomly jump around and got to reading.
I wrote this as a side project to connect webhooks (and straight HTTP endpoints) together via configuration. It runs on Lambda (using the Serverless framework), so most usage is free. The JSON transformations/filters are JMESPath-based, which makes them quite powerful.
Wrapping up Webhook-Liaison (https://github.com/davidhampgonsalves/webhook-liaison), a configurable filtering/transforming webhook proxy.
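To give a flavour of the config-driven transformation idea, here is a toy Python sketch. It only does dotted-path extraction, which is a tiny subset of what real JMESPath supports (filters, projections, functions, and more); the payload and config below are invented:

```python
def extract(path, data):
    """Walk a dotted path like 'a.b.c' through nested dicts; None if missing."""
    for key in path.split("."):
        if not isinstance(data, dict) or key not in data:
            return None
        data = data[key]
    return data

def transform(mapping, payload):
    """Build an outgoing payload by extracting configured paths from the incoming one."""
    return {out_key: extract(path, payload) for out_key, path in mapping.items()}

# Hypothetical incoming webhook payload and mapping configuration.
incoming = {"commit": {"author": {"name": "ada"}, "message": "fix bug"}}
config = {"user": "commit.author.name", "text": "commit.message"}

print(transform(config, incoming))  # {'user': 'ada', 'text': 'fix bug'}
```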
You really don’t want to use Datomic. Doing history via stored procedures in PGSQL is nicer and more efficient.
I’ve gotten a really bad feeling about Datomic when I’ve looked at it before, but I’ve never actually read through enough docs/played with it enough to form an opinion. Would you mind listing why not to use Datomic? My guesses (and they’re just that) are that the performance ought to be atrocious, the amount of client-side logic should place a lot of load on network traffic and allow for utterly different behaviors between different client libraries, and scaling horizontally/sharding should be incredibly painful, but I emphatically do not know enough about the details to know whether these are valid concerns, or whether they’re addressed sanely elsewhere.
Datomic doesn’t change anything about how you scale because you still have a storage layer you’re writing to behind the transactor. What it does do is add unpredictable client caching behavior, query thrashing of the cache, and a slow-as-fuck transactor on top of whatever storage backend you’re using.
This is on top of not having basic, obvious features any database should have, like the ability to set a fucking timeout for a query.
Scale doesn’t matter if you’re 10,000x slower and less reliable than the competition; and Datomic doesn’t actually do anything about scale anyway.
Most of what I hear about Datomic is either praise or FUD, but your concerns are very thoughtful.
The client-side logic problem is “solved” by only having one client, the JVM one. For any other languages you have to use a HTTP API.
I would love to see sharding, and in one case in particular where I’ve used Datomic, it would be dead simple since all my entities were structured under a “root” type entity (an event, like a conference etc) so one could shard on that. It is a bit annoying that if I have a long-running transaction for one event, it would block writes for all other events while it is processing, and I know that they do not share any data (read: do not need consistency). One could use multiple databases, but then you would have to juggle multiple HornetQ connections.
Hey, author here :) That sounds super interesting. When I’ve tried doing this myself, it was via transactions and a lot of manual work, where I typically ended up with a versioning scheme strongly tied to my table layout, and where I really had to think about what I wanted versioned and what not. Do you have any more information about how one would go about doing it with stored procedures in PGSQL?
The SPs themselves aren’t interesting.
The trick to making JOINs not be senselessly slow is to record ranges rather than single dates for events. Make an update into an update plus insert: cap off the valid_to of the previous state and insert a new record that has a valid_from of now() and a valid_to of lolwhenever.
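A minimal sketch of that pattern, here in Python with SQLite for portability (in PGSQL you’d wrap the update-plus-insert in a stored procedure and use proper timestamp or range types; the table, names, and timestamps below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id         INTEGER,
        name       TEXT,
        valid_from INTEGER NOT NULL,
        valid_to   INTEGER  -- NULL means "current"
    )
""")

def update_event(conn, event_id, name, now):
    # Cap off the previous current row...
    conn.execute(
        "UPDATE events SET valid_to = ? WHERE id = ? AND valid_to IS NULL",
        (now, event_id),
    )
    # ...and insert the new state as the open-ended current row.
    conn.execute(
        "INSERT INTO events (id, name, valid_from, valid_to) VALUES (?, ?, ?, NULL)",
        (event_id, name, now),
    )

def as_of(conn, event_id, ts):
    # Point-in-time lookup: the single row whose validity range contains ts.
    row = conn.execute(
        "SELECT name FROM events WHERE id = ? AND valid_from <= ? "
        "AND (valid_to IS NULL OR valid_to > ?)",
        (event_id, ts, ts),
    ).fetchone()
    return row[0] if row else None

update_event(conn, 1, "PyCon 2015", now=100)
update_event(conn, 1, "PyCon 2016", now=200)

print(as_of(conn, 1, 150))  # PyCon 2015
print(as_of(conn, 1, 250))  # PyCon 2016
```

Because every historical state is addressed by a half-open range, the “as of” query is a single indexable range predicate instead of a self-join against an audit log.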
Lots of fintech (including GS) companies use this approach happily.
But seriously, don’t use Datomic.
So if I understand you correctly, you end up running queries where you pass in a timestamp that is used for range queries so you only get the records where the timestamp is within from/to? So when you change a record, you copy it and give it the appropriate from/to range?
Pretty much. There’s deeper cleverness you can engage in here, but this solution was several orders of magnitude faster than Datomic with a trivial impl anyway.
With Datomic you get a snapshot of the entire database (relations included) as of any transaction time. I don’t think you can achieve this in PGSQL without knowing the database structure and having it affect all of the queries/subqueries you write. Also, while history is a cool feature of Datomic, it also has a pretty unique scaling model, adds schema back to NoSQL ideas, supports dynamic attribute creation, represents everything (including schema) as data, and has a powerful query language (Datalog).
Also, while history is a cool feature of Datomic, it also has a pretty unique scaling model, adds schema back to NoSQL ideas, supports dynamic attribute creation, represents everything (including schema) as data, and has a powerful query language (Datalog).
Why are you database-splaining the product to somebody who’s used it in production and written libraries for it?
It’s really slow and poorly engineered. NONE OF THE BUZZWORDS MATTER. None of them.
None. None at all. A poorly engineered product is poorly engineered no matter what the design is. Datomic is a labor sink for a consulting company. We even tried to pay them to fix the obvious functionality gaps on contract and they wouldn’t do it.
Please do not reply to me with more of their marketing vomit.
I’ve used and designed event-based stores and historical databases a few times throughout my career. Datomic was the worst I’ve used, by far.
I would love to read more about your experiences! Do you have any blog posts or something around? If not, can you write some? :) And the more specific, the better!
Sad to hear; Datomic and the Datalog queries seem so interesting.
Was davidhampgonsalves supposed to assume you’ve used it in production and written libraries for it? I don’t get the snark.
That wasn’t marketing vomit; it was from my own experience with Datomic, and it was also negating your claim that storing versions on rows in your tables gets you functionality comparable to Datomic’s (personal performance claims aside).
my own experience with Datomic
I built the backend to a LIMS from scratch (with coworkers, not alone) that went into production in a CLIA certified laboratory. We were legally obligated to overwrite/delete no data and be able to recall the history of anything that passed through our lab upon demand by inspectors.
That we used Datomic was 99% my fault, otherwise my coworkers wouldn’t have heard of it. Yes, fault. It was a huge mistake and I should’ve listened to my coworkers. We spent ~6 months after the initial build-out trying to paper over Datomic’s problems, including alternating between desperate begging and offering to throw money at Cognitect to fix their bullshit. That was when we realized the product was a labor dump for when they didn’t have contracts for all their people.
What’d you do?
Can I suggest you write up your experiences/problems calmly? In this whole thread you’re throwing a lot of anger, swearing and ranting around. I’d very much appreciate seeing a clear, detached, credible writeup of the problems in a blog post or similar, and would likely find that a lot more convincing.
Datomic has unlimited horizontal read scale due to the library executing the queries and immutable mirrors of data. I am not sure if PGSQL can do that, though I don’t doubt PGSQL will be faster in other cases.
Datomic has unlimited horizontal read scale due to the library executing the queries and immutable mirrors of data.
Yeah, this is nonsense, and it doesn’t really matter because there’s still a storage backend you’re querying. The client cache is not a panacea. You’d be shocked how slow that shit gets when it keeps churning the client cache to trawl the data.
You can’t even bounce the fucking client if it hangs on a stuck query (this happens a lot) without restarting the entire JVM.
I am not sure if PGSQL can do that, though I don’t doubt PGSQL will be faster in other cases.
https://www.facebook.com/notes/facebook-engineering/tao-the-power-of-the-graph/10151525983993920/
I think ideological discussions or projects relating to AI and ethics are silly at this point.
We are so far away from AIs that could operate in roles where ethics would play any part that making claims or statements about them seems to be without merit or reason.
I imagine the thing that has prompted this question is the development of self-driving cars and the following scenario: your car is driving you down the road, there’s an 18-wheeler barreling towards you in the wrong lane, and there’s a group of nuns/toddlers/suitably innocent victims on the sidewalk. Your car must either get hit by the 18-wheeler, ensuring your demise, or run over the bystanders on the sidewalk. What does it do? Why? What does whatever decision it makes mean?
I suspect that right now whatever decision the car made would be an artifact of the information it had at the time, not an attempt to weigh your life against those of a gaggle of adorable schoolchildren. I would guess that either (a) it doesn’t consider the sidewalk a valid route to avoid the oncoming truck, and you die because the car’s programmers didn’t equip it with lateral thinking for accident avoidance, or (b) it doesn’t recognize pedestrians on the sidewalk as such, and hits them because they appear to its sensors smaller and less solid than the oncoming truck.
Which is why I think self-driving cars aren’t a particularly good idea. Humans should be making those ethical decisions, not machines. I can decide to sacrifice my life or not; I don’t really want to give a machine that ability.
Also, machines are defective. I don’t believe that any self-driving car will be completely invulnerable to hacking, nor that it will make the right decision in every single ethical case.
Which is why I think self-driving cars aren’t a particularly good idea. Humans should be making those ethical decisions, not machines.
If a human would make the decision deterministically, we can program a machine to do the same. If a human would not make the decision deterministically, then I don’t think we should be trusting them with it. Humans are just machines that happen to be made of meat, in any case.
Also, machines are defective. I don’t believe that any self-driving car will be completely invulnerable to hacking, nor that it will make the right decision in every single ethical case.
Sure. But I can readily believe they will be safer and more ethical than the average human driver.
Working on a security-conscious location-sharing app.
The API is written in Clojure with Redis, and the app is React Native with a Nuclear.js Flux implementation using immutable data collections. ClojureScript with React Native would have been more interesting, but I didn’t want to introduce another unknown.
I’m going to be scraping eBay for images, which turns out to be a pretty clever thing to do because there’s a strong incentive to get the labeling right, and produce clear, high-quality images. No credit to me for the idea, because someone else came up with it. So now I’m struggling with Nix and python’s virtualenv, and … I like Nix a lot, but I’m definitely running into some rough edges. Hard to bill for time spent on self-inflicted yak-hair.
Otherwise, getting the resume in shape, poking around the Toronto tech hiring scene in a desultory fashion. Hanging out with the baby. Trying to figure out a way to rehabituate myself to working out, after a year or so off following injury and extreme laziness.
getting the resume in shape, poking around the Toronto tech hiring scene in a desultory fashion
That shouldn’t be too hard. Whenever I poke around to see what’s up, TO seems to have a more lively scene than Mtl. Not to say that Mtl’s is lacking, but looks like some of the most interesting work is in TO.
I’m just having to downshift my expectations after almost 15 years in San Francisco. I worry that I have overspecialized to a point where I am simply no longer of general interest to many places I’d like to work. Cry me a river, right.
Shopify has offices in TO and Cameron Davidson-Pilon works in that company. He presented during Pycon.ca. This seems to be your sort of thing?
There’s also the medical business which needs lots of fancy image processing. There are probably some labs that work with the likes of Sick Kids. I used to work at an MRI image-processing shop in Mtl.
Actually, there’s a Centre for Computational Medicine at Sick Kids that looks interesting. Our daughter is, in fact, a sick kid, so we’re down there a fair amount.
Just moved back to TO from SF (also hanging out with a baby) and there are a couple of interesting companies that will take you to lunch for an intro on http://lunchcruit.com/.
Also https://hired.com/ looks interesting and just launched for TO. Let me know if you want an invite.
Thanks. I apparently signed up for hired.com at some point in the vague past. I guess I’ll give it a shot later.
I think the case for Relay, Falcor and GraphQL is better made without hanging it on REST’s deficiencies (imagined or not). Reminiscent of NoSQL vs Relational. Different tools, different problems.
Enjoyed the article though.
This is essentially saying: do not upgrade anything; keep your old libs riddled with vulnerabilities, and keep running your app on old software riddled with vulnerabilities.
This is not good advice. Really, it’s not.
If you don’t want to maintain your project, then stop maintaining it but let someone else take care of it. Do not let it die.
Moreover, if you don’t want it to take too much of your time, use a PaaS. At least your app will run on up-to-date software without you doing anything.
I think what we are disagreeing on is the size and value of the personal project. I currently have 15 personal projects of various vintages. I would be crippled from working on new things if I continued to maintain them. They are not valuable enough to give to a new maintainer but I don’t think they should be killed.
What this suggests are realistic options for retaining the value of the project without becoming time sinks.
PaaSes are great, but not in the long term, because they deprecate portions of their system, which forces you to move. In the worst cases they force API changes (GAE master/slave).
Reverse engineering and suppressing my cat litter box’s DRM to allow me to refill the soap cartridge.
This is simultaneously a cool hack and the most depressing sentence I’ve read in weeks.
Agreed. When I first bought it, I thought it was just tracking uses to be helpful, but it basically becomes a brick after a set number of washes with each official cartridge.