I also still use the 2016 SE, and this really resonates with me.
Since Apple decided not to give the SE the latest and greatest iOS, some news sites that I read have suddenly increased their minimum width, making some text fall off the screen on the right.
Nevertheless, I am committed to keeping this phone until a worthy replacement shows up. This would mean:
If me not upgrading means that I have to live without certain apps, so be it.
some news sites that I read have suddenly increased their minimum width, making some text fall off the screen on the right
Yup, and it’s impossible to sign in on some apps because the splash screen is too big and the sign-in box/button falls off the bottom of the screen.
As a first-time iPhone user (work phone) this is interesting. I’ve always used “slim” plastic cases on my phones, and while the 12 mini’s camera sticks out a millimeter or so, it’s level with this basic case/bumper/sleeve, so I assume that’s only a dealbreaker if you really want to use the phone without anything (I find them too slippery, tbh). USB-C is indeed a huge factor, but I’m wondering why it’s so important. Maybe it’s because I have nearly no USB-C devices anyway (I think we have 2 phones, 1 tablet, 1 work laptop between 2 people), so with all my Micro USB stuff, USB-C simply doesn’t simplify anything for me, except when traveling lightly.
Can’t believe I’m actually arguing pro new iPhone here but right now I am so pissed that my Android phone is not getting updates anymore.
Good to know what it takes to get something running on AWS. Running this on a LAMP stack probably takes the same amount of time when starting from zero knowledge.
I think I would prefer having a machine to log into and inspect instead of endlessly yamling and waiting for the configuration to fail and send an error back.
How do you tell whether a perceived squatter is really a squatter? Can you reliably establish that the owner of a flagged domain isn’t also the owner of the original? I might have registered the original and the lookalike domain with two different registrars, anonymously in both cases. Or I could have bought the domains I thought would be typosquatted, to mitigate the threat for my users…
For example: querying google.com returns google.org as a squat, but I would say this is a false positive.
There are a number of improvements we need to make to the platform to improve the results and make them actionable (some details are highlighted in the Reddit thread: https://www.reddit.com/r/programming/comments/xolqom/comment/iq0lpmf/?utm_source=share&utm_medium=web2x&context=3).
The current version of the tool is quite rudimentary, aimed at validating the idea and seeing whether it’s useful.
Good article. The monetization of everything and having fun doing something don’t really go together well. I like the analogy of the home cook.
I’ve always resisted the urge to switch to a different layout than QWERTY for fear of not being able to use someone else’s keyboard. Given that I use my own setup most of the time, this fear is not really justified. But it’s kinda like buying insurance, for that one time when you really need it.
I am in front of a lot of different systems throughout a given week, so switching away from QWERTY is a non-starter for me. The furthest I can go is a Microsoft Natural 3000 keyboard for my main systems. Been using those, or the original Microsoft Natural, since the 1990s.
This is a really cool project, thank you for posting it! However:
escape is too crucial to put on a non-base layer, but at the same time, not as important to deserve a place on the base layer.
Not important! Heresy! (JK) I was obligated to say that as a decades-long vi/vim/neovim user. Very nice project, though, seriously.
You won’t forget QWERTY if you switch to a new layout, don’t worry. Worst case, you have to look down at the keycaps when typing…
Yes, this is my experience as well, combined with the annoyance that the keys are all in nonsensical positions.
If I have to help a colleague who is on QWERTY and it takes me too long to type something out, I will just ask them to type it.
I learned a new alpha layout at the same time as I was getting used to a small ergo keyboard (30 keys in my case). I continued using my regular row-stagger keyboard during the day at work, but would practice on my new layout and keyboard at night. After a few weeks, I felt good enough to start using my new keyboard for work. It has been over a year since then, and I still use QWERTY on my laptop’s built-in keyboard and my alt layout on my ergo keyboard. Having the layouts tied to different physical key layouts has made it really easy to keep them straight in my head.
Having to sometimes give the correct answers (as the author explains a bit later) seems a bit like priming. If GPT-3 learns from examples, then giving it examples will make it learn the examples and not the actual method. Can someone correct me if I am wrong?
Looks interesting, but for most code I would argue that a with statement is sufficient and easier to understand for people who did not spend time reading up on Witchcraft.
I’m excited for this. I started looking into Pijul recently, to see if I could use it as my main VCS. I think there are some differences in how the Pijul devs think about version control – at least, different from how I do.
They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.
It seems to be very much written for people who understand the Pijul internals. Doing a pijul diff shows metadata needed if… you are making a commit out of the diff?
I would think a “what’s changed in this repository” is a pretty base-level query. They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this. For example, it shows information that is not in pijul diff – namely, commits not added to the repository yet.
I also want to see if I can replicate git’s staging area, or have a similarly safe, friendly workflow for interactive committing. It seems like most VCSs other than git don’t understand the use cases for the staging area.
They seem to not be too big on branches. I haven’t quite figured this one out yet; it seems pretty widely accepted in the programming world.
Curious about where you got that from; I even wrote the most painful thing ever, called Sanakirja, just so we could fork databases and have branches in Pijul.
Now, branches in Git are the only way to work somewhat asynchronously. Branches have multiple uses, but one of them is to keep your work separate and delay your merges. Pijul has a different mechanism for that, called patches. It is much simpler and more powerful, since you can cherry-pick and rebase patches even if you didn’t fork in the first place. In other words, you can “branch after the fact”, to speak in Git terms.
I would think a “what’s changed in this repository” is a pretty base-level query
So do the authors; they just think slightly differently from Git’s authors. pijul diff shows a draft of the patch you would get if you recorded. There is no real equivalent of that in Git, because a draft of a commit doesn’t make sense.
I also want to see if I can replicate git’s staging area
One thing you can do (which I find easier than the index) is record and edit your records in the text editor before saving.
(Thanks pmeunier for the interesting work!)
I found the discussion of branches in your post rather confusing. (I use git daily, and I used darcs heavily years ago and forgot large parts of it.) And in fact I’m also confused by the About channels mention in the README, and by the Channels documentation in the manual. I’m trying to explain this here in case precise feedback can be useful for improving the documentation.
Your explanation, here and in the manual, focuses on differences in use-cases between Git branches and channels. This is confusing because (1) the question is rather “how can we do branches in Pijul?”, not “what are fine-grained differences between what you do and git branches?”, and because (2) the answer goes into technical subtleties or advanced ideas rather quickly. At the end I’m not sure I have understood the answer (I guess I would if I was very familiar with Pijul already), and it’s not an answer to the question I had.
My main use of branches in git is to give names to separate repository states that correspond to separate development activities that should occur independently of each other. In one branch I’m trying to fix bug X, in another branch I’m working on implementing feature Y. Most branches end up with commits/changes that are badly written / buggy / etc., that I’m refining over time, and I don’t want to have them in the index when working on something else.
So this is my question: “how do you work on separate stuff in Pijul?”. I think this should be the main focus of your documentation.
There are other use-cases for branches in git. Typically “I’m about to start a difficult rebase/merge/whatever, let me create a new branch foo-old to have a name for what I had before in case something blows up.”, and sometimes “I want to hand-pick only commits X, Y and Z of my current work, and be able to show them separately easily”. I agree that most of those uses are not necessary in patch-based systems, but I think you shouldn’t spend too much answer surface to point that out. (And I mostly forget about those uses of branches, because they are ugly, so I don’t generally think about them. So having them vaguely mentioned in the documentation was more distracting than helpful.)
To summarize:
The Pijul documentation writes: “However, channels are different from Git branches, and do not serve the same purpose.”. I think that if channels are useful for the “good use case” given above, then we should instead consider that they basically serve the same purpose as branches.
Note: the darcs documentation has a better explanation of “The darcs way of (non-)branching”, showing in an example-based way a situation where talking about patches is enough. I think it’s close to what you describe in your documentation, but it is much clearer because it is example-based. I still think that they spend too much focus on this less-common aspect of branches.
Finally a question: with darcs, the obvious answer to “how to do branches?” is to simply use several clones of the same repository in different directories of my system, and push/pull between them. I assume that the same approach would work fine with pijul. What are the benefits of introducing channels as an extra concept? (I guess the data representation is more compact, and the DVCS state is not duplicated in each directory?) It would be nice if the documentation of channels answered this question.
So this is my question: “how do you work on separate stuff in Pijul?”
This all depends on what you want to do. The reason for your confusion could be that Pijul doesn’t enforce a strict workflow: you can do whatever you want.
If you want to fork, then so be it! If you’re like me and don’t want to worry about channels/branches, you can as well: I do all my reviewing work on main, and often write drafts of patches together in the same channel, even on independent features. Then, I can still push and pull whatever I want, without having to push the drafts.
However, if you prefer a more “traditional” Git-like way of working, you can do that too. The difference between these two ways isn’t as huge as a Git user would think.
Edit: I do use channels sometimes, for example when I want to expose two different versions of the same project, say because that project depends on a fast-moving library and I want to have a version compatible with the different versions of that library.
But if you work on different drafts of patches in the same channel, do they apply simultaneously in your working copy? I want to work on patches, but then leave them on the side and not have them in the working copy.
Re. channels: why not just copy the repository to different directories?
They do apply to the same working copy, and you may need multiple channels if you don’t want to do that.
Re. channels: why not just copy the repository to different directories?
Channel fork copies exactly 0 bytes; copying a repository might copy gigabytes.
I use git and don’t typically branch that much. All a branch is is a sequence of patches, and since git lets me chop and slice patches in whatever way I want to, it usually seems like overkill to create branches for things. Just make your changes and build the patch chains you want, when you want to, how you want to.
Then you might feel at home with Pijul. Pijul will give you the additional ability to push your patches independently from each other, potentially to different remote channels. Conversely, you’ll be able to cherry-pick for free (we simply call that “pulling” in Pijul).
They seem to not think it’s especially important; the suggested replacement of pijul diff --short works but is not documented for this.
A bit lower in the conversation the author agrees that a git status command would be useful, but they don’t have the time to work on it at the time of writing. My guess is that it is coming, and the focus is on a working back-end at the moment.
Why another file manager? I wanted something simple and minimalistic, something to help me navigate the filesystem faster. A cd & ls replacement. So I built “llama”. It lets you navigate quickly with fuzzy searching, and the cd integration is quite simple. It opens vim right from llama. That’s it. Simple and dumb as a llama.
And I say, “Hey, Llama, hey, how about a little something, you know, for the effort, you know.” And he says, “Oh, uh, there won’t be any money, but when you die, on your deathbed, you will receive total consciousness.” So I got that goin’ for me, which is nice.
Love this. My only complaint is at the end where he stops to go work on “useful” stuff. Programming is an art form, and there is value in creative endeavors like this whether or not someone makes money off it.
Reminds me of Cautionary Tales - Fritterin’ Away Genius.
I’m not saying Bryan Braun is going to be the next Claude Shannon, but there is a strong argument for the benefits of spending non-trivial amounts of time on fun things.
I’m saying that working on projects like this has value whether or not it helps you in more serious work.
For now. Who’s to say authorities won’t ask to scan photos for known terrorists, criminals, or political agitators? Or how long until Apple is “forced” to scan phones directly because pedophiles are avoiding the Apple Cloud?
That’s not how the technology works. It matches known images only. Like PhotoDNA—the original technology used for this purpose—it’s resistant to things like cropping, resizing, or re-encoding. But it can’t do things like facial recognition, it only detects a fixed set of images compiled by various authorities and tech companies. Read this technical summary from Apple.
FWIW, most major tech companies that host images have been matching against this shared database for years. Google Photos, Drive, Gmail, DropBox, OneDrive, and plenty more things commonly used on both iPhones and Androids. Apple is a decade late to this party—I’m genuinely surprised they haven’t been doing this already.
Apple does scan this when it hits iCloud.
The difference is now they’re making your phone scan its own photos before they ever leave your device.
Only if they are uploaded to iCloud. I understand it feels iffy that the matching against known bad hashes is done on-device, but this could be a way to implement E2E for iCloud Photos later on.
It’s pretty absurd you need to hire a programmer to develop a simple CRUD application.
In college, they tasked us with developing a backroom management solution for a forestry college. They were using Excel (not even Access!). One day, the instructor told us we weren’t the first - we were the second, maybe even third attempt at getting programmers to develop a solution for them. I suspect they’re still using Excel. Made me realize that maybe letting people develop their own solutions is a better and less paternalistic option if it works for them.
Related: I also wonder if tools like Dreamweaver or FrontPage were actually bad, or if they were considered a threat to low-tier web developers who develop sites for like, county fairs…
Made me realize that maybe letting people develop their own solutions is a better and less paternalistic option if it works for them.
There’s also a related problem that lots of people in our field underestimate: domain expertise. The key to writing a good backroom management solution for a forestry college is knowing how a forestry college runs.
Knowing how it runs will help you write a good management solution, even if all you’ve got is Excel. Knowing everything there is to know about the proper way to do operator overloading in C++ won’t help you one bit with that. Obsessing about the details of handling inventory handouts right will make your program better; obsessing about non-type template parameters being auto because that’s the right way to do it in C++17 will be as useful as a hangover.
That explains a great deal about the popularity of tools like Excel, or Access, or – back in the day – Visual Basic, or Python. It takes far less time for someone who understands how forestry colleges run to figure out how to use Excel than it takes to teach self-absorbed programmers about how forestry colleges run, and about what’s important in a program and what isn’t.
It also doesn’t help that industry hiring practices tend to optimise for things other than how quickly you catch up on business logic. It blows my mind how many shops out there copycat Google and don’t hire young people with nursing and finance experience because they can’t do some stupid whiteboard puzzles, when they got customers in the finance and healthcare industry. If you’re doing CRM integration for the healthcare industry, one kid who worked six months in a hospital reception and can look up compile errors on Google can do more for your bottom line than ten wizkids who can recite a quicksort implementation from memory if you wake them up at 3 AM.
Speaking of Visual Basic:
I also wonder if tools like Dreamweaver or FrontPage were actually bad, or if they were considered a threat to low-tier web developers who develop sites for like, county fairs…
For all its flaws in terms of portability, hosting, and output quality, FrontPage was once the thing that made the difference between having a web presence and not having one, for a lot of small businesses that did not have the money or the technical expertise to navigate contracting out the development and hosting of a web page in the middle of the Dotcom bubble. That alone made it an excellent tool, in a very different technical landscape from today (far less cross-browser portability, CSS was an even hotter pile of dung than it is today, and so on and so forth).
Dreamweaver belonged in a sort of different league. I actually knew some professional designers who used it – the WYSIWYG mode was pretty cool for beginners, but the way I remember it, it was a pretty good tool all around. It became less relevant because the way people built websites changed.
It also doesn’t help that industry hiring practices tend to optimise for things other than how quickly you catch up on business logic. It blows my mind how many shops out there copycat Google and don’t hire young people with nursing and finance experience because they can’t do some stupid whiteboard puzzles, when they got customers in the finance and healthcare industry. If you’re doing CRM integration for the healthcare industry, one kid who worked six months in a hospital reception and can look up compile errors on Google can do more for your bottom line than ten wizkids who can recite a quicksort implementation from memory if you wake them up at 3 AM.
I’ve been meaning to write about my experiences in community college (it’s quite far removed from the average CS uni experience of a typical HN reader; my feelings about it are complex), but to contextualize:
Business analysts were expected to actually talk to the clients and refine the unquantifiable “we want this” into FR/NFRs for the programmers to implement.
At the same time, programmers weren’t expected to be unsociable bugmen in a basement who just crank out code; they were also expected to understand and refine requirements, and even talk to the clients themselves. Despite this, I didn’t see much action in that regard; we used the BAs as a proxy most of the time. They did their best.
I’m pretty torn on the matter of the BA + developer structure, too (which has somewhat of a history on this side of the pond, too, albeit through a whole different series of historical accidents).
I mean, on the one hand it kind of makes sense on paper, and it has a certain “mathematical” appeal: the idea that one could distill the essence of some complex business process into a purely mathematical, axiomatic model that you can then implement simply in terms of logical and mathematical statements.
At the same time, it’s very idealistic, and my limited experience in another field of engineering (electrical engineering) mostly tells me that this is not something worth pursuing.
For example, there is an expectation that an EE who’s working on a water pumping installation does have a basic understanding of how pumps work, how a pumping station operates and so on. Certainly not enough to make any kind of innovation on the pumping side of things, but enough to be able to design an electrical installation to power a pump. While it would technically be possible to get an “engineering analyst” to talk to the mechanical guys and turn their needs into requirements on the electrical side, the best-case scenario in this approach is that you get a highly bureaucratic team that basically designs two separate systems and needs like twenty revisions to get them hooked up to each other without at least one of them blowing up. In practice, it’s just a lot more expedient to teach people on both sides just enough about each others’ profession to get what the other guys are saying and refine specs together.
Obviously, you can’t just blindly apply this – you can’t put everything, from geography to mechanical engineering and from electrophysiology to particle physics, in a CS curriculum, because you never know when your students are gonna need to work on GIS software, SCADA systems, medical devices or nuclear reactor control systems.
But it is a little ridiculous that, 80 years after the Z3, you need specially-trained computer programmers not just in order to push the field of computing forward (which is to be expected after all, it’s computer engineers that push computers forward, just like it’s electrical engineers who push electrical engines forward), but also to do even the most basic GIS integration work, for example. Or, as you said, to write a CRUD app. This isn’t even close to what people had in mind for computers 60 years ago. I’m convinced that, if someone from Edsger Dijkstra’s generation, or Dijkstra himself were to rise from the grave, he wouldn’t necessarily be very happy about how computer science has progressed in the last twenty years, but he’d be really disappointed with what the computer industry has been doing.
I mean, the biggest reason why Salesforce is such a big deal is that you don’t need a programmer to get a CRUD app. They have templates covering nearly every business you could get into.
Their mascot literally used to be a guy whose entire body was the word “SOFTWARE” in a crossed-out red circle: https://www.gearscrm.com/wp-content/uploads/2019/01/Saasy1.jpg
FWIW I was neck deep in all of that back in the day. Nobody I knew looked down on Dreamweaver with any great enthusiasm, we viewed it as a specialised text editor that came with extra webby tools and a few quirks we didn’t like. And the problem with FrontPage was never that it let noobs make web pages, just the absolute garbage output it generated that we would then have to work with.
just the absolute garbage output it generated that we would then have to work with.
Oh, yeah, the code it generated was atrocious, but the point was you never had to touch it. That was obviously never going to work for serious web design work, but not everyone needed or, for that matter, wanted any of that. FrontPage was remarkably popular at the university I went to for precisely this reason. Nobody in the EE department knew or cared to learn HTML, they just wanted something that made it easy to hyperlink things. Something that they could use sort of like they used Microsoft Word was even better.
Nobody I knew looked down on Dreamweaver with any great enthusiasm, we viewed it as a specialised text editor that came with extra webby tools and a few quirks we didn’t like.
I was definitely not neck-deep in it at the time Dreamweaver was popular-ish, so there’s not much I can add to that, other than that I think this was kind of the vibe I’d pick up from anyone who already knew a “serious” text editor. The guy who first showed me emacs would’ve probably said more or less the same thing. I suppose if all you’d seen before was Notepad, it would be easy to get hooked on it – otherwise there wasn’t much point to it.
That being said, there were a bunch of serious web shops that were using it in my area. I’d seen them around the turn of the century, and it popped up in job ads every once in a while. Later, I started to sort of work for a computer magazine, and my colleagues who handled advertising knew that Macromedia had a small (and dwindling, but this was already 2002 or so…) customer base for Dreamweaver around here.
re: “related” — hmm, these days services like Squarespace and Wix are not really considered bad, and it’s not uncommon for a web developer to say to a client they don’t want to work with: “your project is too simple for me, just do it yourself on Squarespace”. I wonder what changed. The tools have, for sure — these new service ones are more structured, more “blog engine” than “visual HTML editor”, but they still do have lots and lots of visual customization. But there must be something else?
I have found that things like Wix and Squarespace (or Wordpress) don’t scale very well. They work fine for a few pages that are manageable, but when you want to do more complex or repetitive things (generate a set of pages with minor differences in text or theme) they obstruct the user and cost a lot of time. A programmatic approach would then be a lot better, given that the domain is well mapped out.
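As a rough illustration of what I mean by a programmatic approach (the template, theme values, and page data below are all made up for the example), generating a set of near-identical pages from one template is only a few lines:

```python
# Minimal sketch: render many near-identical pages from a single template.
from string import Template

page_template = Template("""<html>
  <head><title>$title</title></head>
  <body class="$theme">
    <h1>$title</h1>
    <p>$body</p>
  </body>
</html>""")

pages = [
    {"title": "Spring fair", "theme": "green", "body": "Opens in April."},
    {"title": "Summer fair", "theme": "yellow", "body": "Opens in July."},
]

for page in pages:
    filename = page["title"].lower().replace(" ", "-") + ".html"
    with open(filename, "w") as f:
        f.write(page_template.substitute(page))
```

Adding a third page, or changing the theme of all of them, is then a one-line edit instead of clicking through a site builder.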
This looks interesting, but I don’t have 90 minutes right now. I got to the part where he talks about programs being dynamic + temporal rather than static + spatial – could someone summarize the argument against ‘if’?
He mainly argues against ‘if’ statements that were put in place to guard against a value that, when passed into a function, caused a bug. If a function higher up in the call stack also has a condition that does a similar check, these two ‘if’ statements are now connected. This is a ‘wormhole if’.
This means that if you are changing code, you might cause an undesired effect somewhere else if you do not know that there is another ‘if’ that checks similar conditions.
IMO, the title is intentionally click-baity.
My take is that he’s mainly arguing against a specific case of if: The repeated if.
When you’re constantly checking for the same condition in different places, you’re asking for bugs. I think a common example (although not the only one by any means!) is feature flags.
Rather than if featureX == true 50 times in your code, find some way to control that condition in fewer places (ideally exactly one). A constructor function can be a good place, depending on your language/architecture.
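To make that concrete, here is a small sketch in Python; FEATURE_X, the exporter classes, and make_exporter are invented names for illustration, not anything from the talk:

```python
# Invented example: the flag is consulted in exactly one place.
FEATURE_X = True  # in a real app this would come from configuration

class LegacyExporter:
    def export(self, data):
        return f"legacy:{data}"

class NewExporter:
    def export(self, data):
        return f"new:{data}"

def make_exporter():
    # The single "constructor function" that knows about the flag.
    return NewExporter() if FEATURE_X else LegacyExporter()

# Callers never repeat the `if featureX` check themselves:
exporter = make_exporter()
print(exporter.export("report"))
```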
Large “if trees” are also often cases where conditions are repeated (and what I think @kiwec is responding to in his comments), and refactoring those (e.g. with preconditions and early returns) can also reduce bugs.
It was so isolating that you didn’t talk to anyone for the entire day. The only interactions were occasional emails where you had to fight out a difference of opinion, and your weekly reminder to submit your timesheet. The only time I opened my mouth was for the 30-second status update in the standup.
Interesting experience. Exactly the opposite for me. The more senior, the more meetings. Now I’m barely doing any programming anymore; I’m just talking to programmers, and I worry that my skills are deteriorating.
I woke up in a cold sweat earlier this week - I dreamed I was going to be fired because all I seem to do is talk to people, and I spend more time in Excel and PowerPoint than Emacs.
I still take some comfort in knowing I get placed on the yearly fire. Every year something important slips the planning schedule, or something horrifyingly essential breaks, and me and the other Powerpoint-slingers suddenly start writing code again.
I have had the same experience working at a company, though as a junior. 6 people in an office and only the sound of keyboard keys clicking. One colleague with a triple-spacebar-bashing tic…
The failure mostly seems to be in the support triage/escalation process not recognizing that this needed to be escalated.
One thing that probably doesn’t help is that there are a crapton of script kiddie “security researchers” sending you silly uninformed “security reports” in the hope of a free t-shirt or whatnot as a “bounty”. The recent Hacktober thing reminded me of that. Most of them were just spammy non-issues, but people keep asking for free shirts and the like anyway, typically sending many follow-ups.
The funniest I ever got was some person sending a YouTube video demonstrating some non-issue and typing in Notepad to explain (with much backspace use, no audio). The entire thing was about 6 minutes and absolutely hilarious. I wish I had saved it.
This was at my last job, which is just a small/medium B2B SaaS company you’ve probably never heard of (yet large enough to attract these people, it seems). It’s not a well-known company or anything, and I can only imagine Grindr gets much more of this spam.
Yes, there are a crapton of “script kiddies” out there, but then again, there are some serious issues that can be found by script kiddies. Using GitTools, for example. I, a script adult ;), have had quite a hard time reporting the repositories I found by running this script against domains in the .nl range. I found a common pattern of website builders that were vulnerable (with database credentials in the repo…).
Yeah, but you (presumably) know how to interpret the results of the tools and generally know roughly what you’re talking about. The problem we had is that people would run some script and then send us the results as soon as it showed a “possible error”, but this was never really applicable to the situation (as we ran the tools ourselves, too).
For example, the number of emails we got saying “ur site isnt having CSP header, plz sent shirt” about our public website is staggering.
Sad to hear about the price of the insurance. In the Netherlands I insured my 4500 euro bike for 450 euro for 3 years. E-bike insurance was even cheaper for some reason. I hope that as bicycle usage goes up, insurance costs will come down.
If you own an e-bike, you’re likely to be older, not bike a lot, and have a house with the space to store a bike inside. If that 4500€ bike is not an e-bike, you might just be into competitive cycling. I expect insurance companies to do some “threat modeling” of their own :p
Yeah, insurance companies have their threat models figured out quite well, I should hope ;). The trend in the Netherlands is currently moving towards E-bikes for everybody except the competitive cyclists. My bike is a bit overkill, but very nice for cycling holidays.
I had to look into this, but according to this article, it is mostly the e-bikes of younger people that are being stolen, because they are not kept in sheds or other indoor areas (unlike e-bikes owned by people aged 55+). At the end they state that Shimano and other manufacturers are working on better locks for the batteries, because they are very valuable to thieves. The locks protecting the batteries are not certified at the moment and can be forced open relatively easily.
Most of the thieves that are caught come from the eastern part of Europe, and are part of, or steal for, an organised crime group. The E-bike is relatively common and widespread in the Netherlands, so it’s easy pickings for the criminals. Once stolen, they are shipped over the border immediately.
To combat the theft and keep the insurance premium down, the insurance policy for e-bikes requires a GPS chip to be installed in the lock. This has resulted in a recovery rate of 60% of stolen bikes. An added benefit is that they sometimes find storage units with stolen bikes.
In the UK that is true too but you generally require specialist insurance if:
My household insurance specifically says my bikes are covered “wherever they are, even if not at your house”.
It does have a $1k deductible, though, so it’s useless if your bike is cheap.
Looks cool, but the animations don’t really add any new information. They just seem to make the appearance of the diagram take a bit longer.
I initially thought so as well, however, it seems that perhaps the “animations” are rendering the output as it continues to be optimized. From the linked paper:
Moreover, though it may take time for diagrams to finish, the optimization-based approach provides near-instantaneous feedback for most diagrams by displaying the first few steps of the optimization process. These proto-diagrams typically provide enough information about the final layout that the user can halt optimization and continue iterating on the design.
I assembled a Keeb.io Iris rev. 4 with Cherry MX browns and blank keycaps. Quite happy with it, but for the next one I will build something with less height. The soldering was not a problem.
It is a cool challenge, but why focus on the language rather than the method? It is not like Rust has some special jigsaw-solving capabilities that make it more useful than other languages.
For me the emphasis is on “bare”, in that the post does not use sophisticated libraries.
Seems like they’re only focusing on the method. Rust is only in the title in the first article, and the second one references the rayon library. There is practically nothing about Rust the language in any of this.