I plan on doing a bit of sailing and starting a project best described as “just-in-time cloud infrastructure for data pipelines” (working name Jittaform). The idea is to combine Docker/Docker Compose and Terraform into a GitLab CI-like YAML file that will allow someone to spin servers up and down based on the needs of the tasks they define.
Take a look at argoproj. It’s a workflow system built around ephemeral containers and Kubernetes. It may be useful to see how they implement workflows.
Thanks so much for the link! Argoproj seems very interesting! They have the workflow system already worked out.
I opted for Docker Compose and Docker Swarm because I have more experience with those technologies, but this looks like a good choice as well. I will play around with it to see if Terraform can be tacked on.
I once had a thought that it would be interesting to see a treatment of category theory in APL. Then I went back and read Iverson, and he pretty much had it already. That guy was so far ahead, the rest of computing still hasn’t caught up.
Totally. I ended up just linking the domain to my blog for now and utilizing that. This way I have something with relatively active content and a way to connect with me.
I’m going to be actively designing a full site in Sketch over the course of the next few weeks. My main concern though for this round was just having more content and a better way to connect with me.
Glad to hear you are doing what you enjoy and getting paid for it. The industry definitely needs more dedicated, self-directed people.
I’m curious to hear about your experience with writing for DigitalOcean. Did you just apply and get accepted? What is the process like?
You should learn about algorithms and data structures though: it’s not just helpful, it’s also a very interesting subject.
And so are types: a good type system is really about expressiveness with automated checking, rather than just the compiler preventing you from doing things it thinks make no sense.
Languages with modern type systems like PureScript or Elm are gaining popularity in the JS ecosystem, and some classic languages can be cross-compiled to JavaScript: Facebook Messenger is written in OCaml, for example.
For a teaser: in https://perl.plover.com/yak/typing/notes.html there’s a live example of a type checker finding an infinite loop.
Regarding DigitalOcean, they found some articles I wrote on my own site and asked me if I’d be interested in writing for them, so I agreed to do the “How to Code in JavaScript” tutorial series. As for algorithms and data structures (and types), I am interested in learning them; they’re at the top of my learning-in-public list. I’ve been playing around with TypeScript as well, to get familiar with stricter typing.
I started learning typed languages with Elm and I found the experience really nice, even though I eventually “outgrew” it, mostly due to lack of interesting features and feeling like it’s kind of a dead end.
OCaml/ReasonML is great but not very beginner-friendly atm, though the community is growing and doing a great job at making it more approachable. TypeScript also has a lot of good points, but I don’t think one can use it to its full potential without “being immersed” in a fully typed language first.
So I definitely recommend Elm if you wanna get started with types and all that, but I wouldn’t invest in it for the long term.
Go doesn’t really have much of a type system… If you’re mainly looking to learn Go, by all means do it, but to really learn typed languages I suggest you look elsewhere.
Go types aren’t really anything to write home about, and there are special rules for builtins that you don’t get to use (and therefore learn about) properly. If you’re comfortable with JS, I’d suggest TypeScript. You can add it one file at a time, and it’s easy to tell it to shut up and trust your word if you’re doing something magic it can’t grok, or you’re just mid-way through converting a module.
I agree on both counts. Doesn’t make Go a good place to start getting a handle on typed systems in general though, IMO. And to me, the most interesting thing about Go interfaces is the “static duck typing” aspect, which is exactly how TypeScript interfaces work, and is why you can migrate to TS a module at a time, using module-private typedefs even for inter-module communications. TS will happily let you do that, and still tell you when your structures aren’t compatible, which means you don’t have to have a single source-of-truth for any system-wide object shapes until you’re ready to do so.
A quick example. In Go, the usual approach to functions that may fail is to return a pair of an actual value and an error value that can be nil (the error conventionally comes last):
value, err := doThings()
if err != nil {
    ...
}
The caller must always remember to actually check the error.
In ML/Haskell, where you can have a “sum type” that can have multiple variants carrying different values, there are types like this:
(* 'a and 'b are placeholders for "any type" *)
type ('a, 'b) result = Ok of 'a | Error of 'b
You cannot unwrap an (Ok x) without explicitly handling the other case, or you get glaring compiler warnings (which you can turn into errors):
let result = do_things () in
match result with
| Ok value -> do_other_things value
| Error msg -> log_err "Bad things happened"; do_other_things some_default
If you have a bunch of functions that may return errors, you can sequence them with a simple operator that takes a value and a function (the function must also return the Ok|Error type). If the value is an Error it just returns it, but if it’s (Ok x) then it returns (f x).
let (>>=) x f =
match x with
| Error _ as e -> e
| Ok value -> f value
(* if at least one of these functions returns Error, res will be Error *)
let res = do_risky_thing foo >>= do_other_thing >>= do_one_more_thing
(Note: OCaml also has exceptions, and they are used just as widely as this approach; Haskell uses this approach exclusively.)
“OCaml from the Very Beginning” is a very nice book, and it’s not very expensive.
For the non-strict way, http://haskellbook.com is good, but expensive.
If you want something free of charge, it’s a more difficult question. Stay away from “Learn You a Haskell”; it’s very bad pedagogy.
Robert Harper’s “Programming in Standard ML” (https://www.cs.cmu.edu/~rwh/isml/) is great and free, but it’s in, well, Standard ML, the Latin of typed functional languages. You will have no problem switching to another syntax from it, but while you are in the SML land, you are on your own, with essentially no libs, no tooling, and not many people to ask for help.
Tutorials on https://ocaml.org are good but not very extensive.
At work I’m adding code to embed some extra meta-data in the portable globe files we “cut” from Google Earth. Currently when our Android plugin imports a portable file it has to scan all of the imagery and terrain data packets looking for some metadata like boundaries, min/max zoom levels, and a few other bits of information. Needless to say, “walking” the whole file structure is a performance problem with larger files, so our solution is to pre-compute the metadata we need and embed that in the file at creation time. It’s been nice to actually write code after weeks of manual testing and bug fixing, and I’m learning a lot about the portable cut file format and the process for creating them, which is a lot of fun.
It’s a day off today, though, so I’m going for a longer bike ride, and then incorporating some recent upstream changes to Blend2D into my Common Lisp binding. Right now I can’t read or write files, so it’s critical to get the new APIs included. The downside to writing a binding to a pre-release library is that I seem to spend more time tracking changes and tweaking/fixing the binding than I do using it, but the recent changes are an improvement, and I’m learning the API as it evolves, so it’s really not too bad.
For the rest of the week, I’d like to get back to the animations I was creating with the Blend2D bindings. And I have some bike maintenance to do once some parts arrive.
I’ve also started going for daily walks with a new neighbor friend of mine. One downside of working from home is that it’s easy to stay in, and it’s nice to have the external motivation to go out and talk to somebody face to face.
I’m trying to pick up Common Lisp in my free time. I’ve realized that working on bindings is something I may have to do often if I try making a more practical application.
Are your bindings available online? If so, I’d love to learn from them.
You may be surprised - after about 5 years of CL I’ve only had to create my own bindings a couple of times. The cl-autowrap and CFFI packages make it relatively simple to write C bindings. C++ is a different story, and I’m not sure there’s a great way to create those, especially for template heavy libraries.
These Blend2D bindings are using cl-autowrap, which uses c2ffi and clang to generate bindings from a header file. The downside of cl-autowrap is that it creates (and exports) bindings to every function and type found while parsing a header file, including system functions and types.
I didn’t want to export all of those from the blend2d package, so I created a nested package named blend2d.ll which uses cl-autowrap and exports everything, and then another package, blend2d, which selectively exports functions from blend2d.ll. There may be a better way to do it, but this is working okay for now.
An alternative to cl-autowrap is to use CFFI directly. It’s easier to use for small libraries, or situations where you only need a handful of foreign functions. I used this technique a while back to write an incomplete binding to ZBar, a barcode scanning library.
I’m writing my blog with create-react-app. So far I’m having a lot of fun working out the subtleties of the design and what I want my blog to be like. Also brainstormed a bunch of ideas for blog posts.
I looked through the readme of the project and I just wanted to say I really appreciate that you had an entire section dedicated to justifying why the library is being written and a comparison to existing libraries.
Trying to juggle multiple projects while squeezing out a minimum viable Mitogen release supporting Ansible 2.8. Azure Pipelines is being an asshole, so I’ve downed tools for the evening.
Outside work, I’m trying to reverse-engineer the IR protocol for an AC. Y’all are welcome to help!
Hitting “power” on the remote yields a message like this:
I’ve recorded one message for each temperature level the remote can set; the bits that change are, oddly(?), spread out across the message. H, M and ? change with the clock in the remote. T changes with temperature. _ indicates the bit has the same value in all messages so far.
If you collapse the message and just look at the bits that change with temperature setting changes, these are the bits for each temp level (in F, on the remote anyway):
If anyone has IR experience and/or likes puzzles, let me know if you see the pattern. I may just end up encoding these in a table, though that feels like cheating :p
I can help with the first four bits: It adds 0.5°F, then converts it to °C, subtracts 16°C, and the bits are in reverse order.
In Python:
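The snippet that followed appears to have been lost in the copy; a minimal reconstruction from that description (the °F range and the 4-bit layout are assumptions borrowed from the full program further down):

for f in range(61, 81):
    c = (f + 0.5 - 32) * 5 / 9 - 16        # add 0.5 °F, convert to °C, subtract 16
    bits = bin(int(c))[2:].zfill(4)[::-1]  # integer part as 4 bits, reversed
    print(f, bits)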
Oof! Awesome.
The last four are interesting.. there’s definitely a grouping. A list of temp in C, minus 16, vs bit pattern seems to say they group incrementallyish. Wonder if it’s some sort of checksum or mask. As I’ve understood it IR protocols are prone to include redundancy or similar, given the shoddy transfer medium.
edit: staring at this, definitely something about the fraction right, it splits cleanly around .5; all the ones that are >.5 end in 11, and the ones that are <.5 end 00
The next two bits seem to be the bitwise NOT of the two uppermost bits (i.e., division by 4) of (°C - 16). You noticed that the last two bits follow (°C - ⌊°C⌋) > ½, but they also correspond to the 4th bit to the right of the binary point.
All together in Python:

for f in range(61, 81):
    print(f, end=' ')
    c = (f + 0.5 - 32) * 5 / 9            # °F (plus 0.5) to °C
    c = 0xff & int(c * 2**4)              # fixed point, 4 fraction bits; &0xff drops the 16 °C offset
    c = list(map(int, bin(c)[2:].zfill(8)))
    c[0:4] = c[0:4][::-1]                 # integer nibble, bits reversed
    c[4:6] = [1 - b for b in c[2:4]]      # NOT of the two uppermost integer bits
    c[6] = c[7]                           # both trailing bits = 4th fraction bit
    print(''.join(map(str, c)))
I’m not entirely satisfied with this explanation, but it’s the best I can think of.
To know for certain you could always dump the code off of the remote and reverse engineer it. :P
Check out the content at AnalysIR. They sell a product, but Chris is also an expert and his blog posts may be helpful.
https://www.analysir.com/blog/tag/decoding/
Oh wow, that’s super in depth, thanks for this!
I despise this.
Care to elaborate?
The supported languages are Rust, C++, and Node. The website has an extremely tacky corporate feel with web fonts and stock photos. The toolkit heavily features animations and transparency, even beyond what has ill-advisedly been done in existing toolkits. The name even appears to be a reference to this misdesign.
@gempain this looks great. Frankly I’ve been wanting to write something like this for my own personal use for years.
It’s a bit of a bummer to see this under the BSL. However, it looks like the change date to Apache 2.0 is extremely close to release date. Can you talk a bit about what you’re trying to accomplish/what the thinking is there?
Thanks for the kind words! This is a mistake on our part; the change date should be in 2023, thanks for pointing this out. The reason behind choosing a BSL is that we’d like to finance the project by running a cloud version of Meli :) Basically you can do all you want with the tool, but we find this license fair enough that a team like us can fully dedicate its time to making this project awesome while allowing others to use it for free.
Hey, really cool project! Honestly I was looking for something exactly like this, and you’ve got it packaged up in Docker, which saves me from having to do that!
Looking forward to using it.
Also, I personally think that the license choice is appropriate, but considering that this is a new type of license, do you think your team could share some thoughts after some time on how successful it was?
I don’t want to digress too far, but I think finding a sweet spot between user freedom (open source) and sustainability is very important. I’d rather have a BSL project that is updated and improved over the years than an open-source prototype that can’t reliably be used because the developers had to move on to another project.
Thank you for this really nice comment! I think exactly the same way as you! I am aware that the debate over BSL is hot at the moment, and not everyone will agree. We want to focus solely on developing this tool, and in this context, BSL makes sense as it gives the team a chance to monetize the project fairly. Everyone can use the tool for free, with no limitations except the one mentioned in the license. It’s a fair model, which has been supported by the OS foundation even though they have not recognized it officially yet. We’re part of the people that believe in it and would love to see the community support this choice - it’s a good way to ensure the healthy evolution of a platform like this. I think BSL makes sense for platforms, but for libraries we always use MIT or GPL, as I think it’s more suited. We’ll definitely blog about how it goes with this license; it’s a topic close to my heart.
From the license:
Am I missing something, or can one do everything except use it?
BSL: You can self-host or pay for hosting, but not sell hosting to others.
It’s not that simple from the BSL alone, but I missed the concrete parameters to this use of the license:
Additional Use Grant: You may make use of the Licensed Work, provided that you may not use the Licensed Work for a Static Site Hosting Service.
A “Static Site Hosting Service” is a commercial offering that allows third parties (other than your employees and contractors) to access the functionality of the Licensed Work by creating organizations, teams or sites controlled by such third parties.
For an unmodified BSL, any “production use” is outside the grant, but Meli grants “use” outside of running a service as specified (it appears they allow a non-commercial service though).
It may be a good idea to design a “composable” set of shared source licenses like Creative Commons did with their licenses for creative works. E.g. SourceAvailable-ProductionUse-NoSublicensing-NoCloud.
Thanks for clarifying. This does make sense!
Is this the new version of a webring?
I don’t get the fully anti-JS sentiment. My own personal site has a sprinkling of JS so you can change the theme. It still has a really high score on the GTMetrix scanner they say to use. It also does nothing special other than capturing your t keypress and changing the theme in a cycle. SO DANGEROUS! https://nickjurista.com if you want me to steal your identity with my scary JavaScript.
Sure, today you only capture t; tomorrow you capture all scroll events, then you prevent device-native shortcuts.
This seems to be the slippery slope fallacy.
“A slippery slope argument, in logic, critical thinking, political rhetoric, and caselaw, is often viewed as a logical fallacy in which a party asserts that a relatively small first step leads to a chain of related events culminating in some significant effect”
Well, the slippery slope fallacy isn’t a fallacy if it actually is a slippery slope… Question is whether it is here.
I’m not sure I understand haha. This fallacy is one I’ve always had trouble understanding.
The way I see it, it’s insufficient to say a happened, so b must be the next step. So in this case: b is bad, we assume b is going to happen, and therefore we should stop a.
From a logical point of view this is purely speculation? You can certainly speculate based on patterns, but I think it weakens the reasoning of the argument?
Let me know what your thoughts are
Hey, stop giving away my identity theft secrets!
Users generally have no way of knowing that capturing t and switching the theme is the only thing your site does until it’s too late. JavaScript isn’t vetted by distro maintainers either.
Your website could instead use a CSS media query to set the user-preferred theme: @media (prefers-color-scheme: dark) {...}. That way, users automatically get their preferred theme without having to execute untrusted code. They could also use the same browser/OS dark/light toggle that works on every other website instead of learning your site’s specific implementation.
I’m generally a big advocate of leaving presentation up to the user agent rather than the author when it’s possible; textual websites are the main example that comes to mind. There’s a previous discussion on an article I wrote on the subject; the comments had a lot of good points supporting and opposing the idea.
I think giving a user presentation options is fine for text, but I really don’t care how someone wants to look at my personal site. The themes I use are not light vs dark, they’re a variety of color schemes.
I also don’t care if someone disables JS in their browser. IMO it’s extremist behavior from a very, very small fraction of people. My site works without JS because it has nothing interactive anyway. But many sites I’ve worked on have been entirely JS-based (like live-updating sports and customer dashboards). There isn’t anything inherently wrong with JS.
It’s great that your site works perfectly without JS; thanks for sticking to progressive enhancement!
it’s extremist behavior… There isn’t anything inherently wrong with JS.
There are many good, non-“extremist” reasons why people don’t run JS:
They use Tor. Running JS on Tor is a bad idea because it opens the floodgates to fingerprinting; frequent users generally set the security slider to “max” and disable all scripting.
They have a high rate of packet loss and didn’t load anything besides HTML. This is common if they’re on a train, on hotel wi-fi, using 3G, switching between networks, etc.
They use a browser that you didn’t test with. Several article-extraction programs and services don’t execute JS, for instance.
HTML, CSS, JS, Websockets, WebGL, Web Bluetooth API…there are a lot of features that websites/webapps can use. Each feature you add costs a few edge-cases.
It’s unrealistic to expect devs like you and me to test their personal sites in Netsurf, Dillo, braille readers, a browser that won’t be invented until the year 2040, e-readers, a Blackberry 10, and every other edge-case under the sun (I try to anyway, but I don’t expect everyone to do the same). But the fewer features a site uses, the more unknown edge-cases will be automatically supported. For example, my site worked on lynx, links, elinks, w3m, Readability, Pocket, and even my own custom hacky website-to-markdown-article script without any work because it just uses simple HTML and (optional) CSS.
Not all websites are the same. Customer dashboards probably need to do more things than our blogs. That’s why I like to stick to a rule of thumb: “meet your requirements using the fewest features possible” (i.e., use progressive enhancement). Use JS if it’s the only way to do so.
from a very, very small fraction of people.
I disagree with the mentality of ignoring small minorities; I try to cater to the largest surface possible without compromising security, and regularly check my access logs for new user-agents that I can test with. Everyone is part of a minority at some point, and spending the extra effort to be inclusive is only going to make the Web a better place.
I’d like to add that when making moral arguments, non-adherents tend to feel attacked; please don’t feel like I’m “targeting” you in any way. Your site is great, especially since it works without JS. Don’t let my subjective definition of “perfect” be the enemy of “good”.
I don’t feel attacked at all. This is no different to me than someone who refuses to use an Android or iOS phone because they are afraid of being tracked. I see it as a lot of tinfoil with very little substance.
I personally do not see coding as a moral or political stance like so many do (especially here on Lobsters). I see it as a means to an end – and in my case it’s that I couldn’t decide on a theme and wanted to put an easter egg on my site.
For professional things, I tend to follow the 80/20 or 90/10 when approaching projects, catering to the low-hanging fruit to get the most stuff done. If I focus on all edge cases, I’ll never finish anything and it’s unreasonable to expect anyone to really do that.
Many sites don’t need to use JS and thus shouldn’t, but I think it’s throwing the baby out with the bathwater when people try to go “No JS” because of some sites doing stupid things to try to track users more or get more data out of them - or just turn their whole static site into a client-side app for no discernible reason. If you want total privacy, throw out your electronic devices altogether, start using cash only for purchases, get off the grid altogether.
JS itself is amazing and has propelled the web to incredible new uses. What I see from a lot of these No JS people is a really small segment of generally power users who either don’t like JS to begin with or are incredibly paranoid about being tracked for whatever reason. The average user, and most users by a large margin, are not concerned with running some arbitrary scripts (which the sandbox keeps getting tighter over time btw). This club feels like more virtue signaling than anything to me, and I think the No JS argument and “club” is silly altogether.
(Preface: nothing I have said so far applies to software that is, by necessity, a web app)
I […] wanted to put an easter egg on my site
Easter eggs are fine! Your site is great. You might want to change the trigger, though; people might expect something else to happen when they press “t”. Technically-inclined users are more likely than the average user to use custom keybinds.
If I focus on all edge cases, I’ll never finish anything and it’s unreasonable to expect anyone to really do that.
I agree that it’s ridiculous to expect people to test every edge case, which is why I advocated for simple sites that use simple technologies. With the “textual websites” I described in my article, you automatically get support for everything from braille readers to HTML-parsing article-extraction programs, without doing any work because you’re just using HTML with progressive, optional CSS/JS. I literally didn’t spend a single moment optimizing my site for w3m, lynx, links, elinks, IE, etc; when I tested my site in them, it just worked.
I think it’s throwing the baby out with the bathwater when people try to go “No JS” because of some sites doing stupid things to try to track users more or get more data out of them.
Nobody knows what lies on the other side of a hyperlink. We don’t know whether a site will do those bad things, so we disable scripts by default and enable them if we can be convinced. “Minimizing tracking and fingerprinting” and “living in a cabin in the woods” are worlds apart. I don’t think it’s healthy to assume that all privacy advocates are anarcho-primitivists.
Disabling scripting for privacy isn’t uncommon; it’s the norm among Tor users. These people aren’t unhinged as you portrayed; they’re…normal people who use Tor. Their use cases aren’t invalid.
JS itself is amazing and has propelled the web to incredible new uses
Apps are new. Blogs are not new. We should use the right tool for the right job. The mentality of “progress + innovation at full speed” is great when used in the right places, but I don’t think it belongs everywhere. We should be aware of the consequences of using tools and use them appropriately.
This club feels like more virtue signaling than anything to me, and I think the No JS argument and “club” is silly altogether.
It is virtue signalling. We believe in and follow a virtue, and signal it to others by joining this club. The existence of this “virtue-signalling platform” can help encourage this behavior; I know for a fact that the various “clubs” that cropped up in the past week have encouraged many site authors to optimize their websites so they could be included.
“Minimizing tracking and fingerprinting” and “living in a cabin in the woods” are worlds apart. I don’t think it’s healthy to assume that all privacy advocates are anarcho-primitivists.
I never said anything about living in a cabin in the woods or “anarcho-primitivists” - in fact this is the first time I’ve even heard the term.
You can minimize tracking and fingerprinting without disallowing JS altogether or starting a webring for sites without JS. That’s why I said “throwing the baby out with the bathwater.” If you remember, companies used to track with a pixel that folks would throw on their page which would then load from that domain and they would scrape whatever info they wanted on you. So when do we get to join the NoImages.club?
If you want total privacy, throw out your electronic devices altogether, start using cash only for purchases, get off the grid altogether.
I never said anything about living in a cabin in the woods or “anarcho-primitivists” - in fact this is the first time I’ve even heard the term.
Sorry, that’s the vibe I got from living “off the grid” without any electricity. Guess I was a bit hyperbolic.
If you remember, companies used to track with a pixel that folks would throw on their page which would then load from that domain and they would scrape whatever info they wanted on you.
That’s a good reason to test your site without images, in case users disable them. More on this below.
There’s a big difference between logging the loading of a tracking pixel and tracking the canvas fingerprint, window size, scrolling behavior, typing rate, mouse movements, rendering quirks, etc. Defending against every fingerprinting mechanism without blocking JS sounds harder than just blocking it by default. There’s a reason why the Tor browser’s secure mode disables scripting (among other things) and why almost every Tor user does this; they’re not all just collectively holding the same misconception. The loading of an image without JS isn’t enough to make you unique, but executing arbitrary scripts certainly is; the equivalent of a “read” receipt isn’t the same as fingerprinting.
So when do we get to join the NoImages.club?
Unless there isn’t an alternative, images should be treated like CSS: an optional progressive enhancement that may or may not get loaded. That’s why all images should have alt-text and pages should be tested with and without image loading. Writing good alt-text is important not just for screen/braille-readers, but also for people struggling with packet-loss or using unconventional clients.
IMO, text-oriented websites should only inline images (with alt-text, ofc) if they add to the content, and shouldn’t be used simply for decoration. I wouldn’t create a “no-images.club” because the potential for misuse isn’t nearly on the same level.
As a sysadmin, one of the biggest problems with the course of action recommended in this article is unnoticed and unintentional data loss.
So, let’s say everything is in the DB. You put pics in there, use ImageMagick to shrink them, and store those in the DB. Your DB gets stupidly large, because it’s not meant for binary blob data.
So, you yoink out the binary data. Those images are now saved in /opt/datastore/…. instead of the Postgres data dirs.
The problem I’ve come across is that even though the DB is backed up in various ways, the image dir may not be (or may be backed up on a different/bad schedule).
Another problem is syncing… When you save picture/document/binary content, you’re going to hash it and save the hash in a table along with appropriate metadata. Then you save the binary content in /opt/datastore/$HASH. But… what happens if the file for a hash doesn’t exist? How do you guarantee consistency between the table and what’s on the FS? (Hint: it’s really hard.)
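For what it’s worth, a minimal sketch of one write path that bounds the damage (Python with SQLite and a hypothetical blobs table, not from the article): write and fsync the blob first, then commit the row, so a crash leaves at worst an orphaned file rather than a row pointing at nothing.

import hashlib
import os
import sqlite3

DATASTORE = "/opt/datastore"  # blob directory from the comment above

def store_blob(db: sqlite3.Connection, data: bytes, name: str) -> str:
    digest = hashlib.sha256(data).hexdigest()
    path = os.path.join(DATASTORE, digest)
    if not os.path.exists(path):
        tmp = path + ".tmp"
        with open(tmp, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure the bytes hit disk before the row exists
        os.rename(tmp, path)      # atomic on POSIX filesystems
    with db:  # transaction: the row is only visible if the commit succeeds
        db.execute("INSERT INTO blobs (hash, name) VALUES (?, ?)", (digest, name))
    return digest

Orphaned files can then be garbage-collected by diffing the directory listing against the table; the reverse case (rows without files) never needs repairing.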
Another issue is if you’re guiding someone else through backup/restore/migration techniques. If everything’s in the DB, then it’s a backup and restore on another destination; then copy over the configs, and you’re pretty much done. It’s effectively a “one-stop shop”. However, splitting the data across different dirs also adds complexity to backing up and restoring, and missing something is pretty easy.
I’ve been more of a fan of using two DBs: a regular DB and a document DB. But there’s no one-size-fits-all.
Pretty sure this was covered in the section talking about how we’ve lost transactions once the database is no longer the sole data system.
By far, I prioritize developer UX. Minimize developer suffering; maximize developer happiness. To that end, my current stack preferences are:
I’m trying to decide between React and Vue for a side project. I’m more familiar with React, so I’m leaning towards that.
Could you provide some info on what makes Vue your choice? I’d just like to make a more informed decision, so I’m trying to learn more about Vue.
I’m at the point in my career where my primary criterion for choosing tools and tech is minimizing [developer] suffering. Pain, frustration, annoyance, irritation, etc. Out of the major JS frontend frameworks, Vue gives me the least suffering. I just want as direct a path as possible from conceived thought to functional code that realizes it. Vue comes closest to that ideal path.
I’ve dabbled a bit in React – though not enough for me to make a fully fair and balanced comparison. React alone doesn’t seem to have enough “batteries included”. Vue has batteries included, but not too many batteries. Angular makes me want to stop being a full stack developer – but I’m willing to admit that perhaps it’s just the codebase(s) I’ve been subjected to that gives me that jaded view.
I’ll say this: The same awe and wonderment I felt from discovering the joy of Ruby I feel from discovering and using Vue. No other two technologies have given me that over the course of my entire career. The keyword list you could extract out of my resume makes a tag cloud as big as the next person’s. For all the other keywords in that list: I use them because I’ve had to (inherited codebase; departmental mandate; whatever). I use Ruby and Vue because I want to.
I realize what I’m writing here is somewhat anecdotal, intangible and immeasurable. So I propose this: Call to mind just how much frustration and angst you’ve felt when dealing with the tech in your current work. Hold that as a standard to compare against. Then go try Vue on a side project [of sufficient size so the results can’t be dismissed]. I’m pretty sure most people would find using Vue a superior experience.
Really cool!
After a little more than four years it’s my last week working remotely at $CURRENT_JOB before a week off and starting a new remote job in September. I fear leaving will be bittersweet, as I’ve enjoyed the job and got on well with my colleagues: will probably spend most of the week in goodbye calls! :-)
Hey, glad to hear that you enjoyed the work. I’m a bit curious about remote work and the atmosphere. I would like to look into remote opportunities, as they would open up a few industries/positions that aren’t currently possible with my lifestyle and location.
What kind of experience did you have with working remotely? I’m mostly wondering about the social aspect.
This was my first job working remotely full time, and it has been overwhelmingly positive. I’ve been here for over four years, and my next job is also fully remote. I don’t plan to ever work in an open plan office again :-)
I have a few caveats. For the first 3.5 years (until early this year) I rented a private 10 m² office about a 20-minute drive from my home, so I actually had a commute. The office was cheap, but it was difficult to find parking and I ended up spending more money on petrol and lunch (I wasn’t disciplined enough to make packed lunches); so since January 2019 I’ve been working from a room in my house.
In some ways working from home is not ideal. My wife home-educates our son, so I’m not normally alone in the house during work. This means I’m prone to distraction by family. My dream situation is a 5-10 minute walking commute to a private office: far enough that I’m not distracted by family and chores, but close enough to walk home for lunch :-)
The lack of a commute means I’m no longer regularly in town, so I can’t as easily meet up with friends after work. However, time not spent commuting is time I can use to go for a run before work, or practice my guitar. Since most of my friends have families of their own and we rarely manage to meet up anyway, this works out in my favour.
Mind you, the above is only true for fully remote – where everyone (or close to it) is remote. I would personally not be interested in working at a job where only some people are remote, as you’re going to find yourself excluded from meetings and decision making. (“We couldn’t find a meeting room with remote facilities, soz.”, “We forgot to dial you in, sorry.”)
In my experience 100% remote work is more inclusive for people with families. At my previous job a group of the techies would often go to the pub after work (multiple times a week). While everyone was welcome, I usually went home to my family. I don’t begrudge people going out and enjoying the company of their colleagues, but it felt like some important plans or decisions formed at the pub, and I felt excluded from that.
Absolutely. People who are in the same room end up chatting and making decisions, even if not on the clock. It’s a strength of local work that has yet to be replicated in a remote setting.
Personal: Just moved! Setting up a robust network for my personal office and media centre in the living room (the girlfriend is very happy about this).
Work: Writing out a pros-and-cons list for rewriting this entire backend service as a monolith instead of 45 microservices (AWS Lambdas, specifically) tied to API Gateway. This comes from the fact that Node.js 6.10 is deprecated on AWS Lambda and we need to upgrade to implement new features. If anyone has done something like this and has some advice, please feel free to share it with me! It’ll be greatly appreciated.
Just curious, is the primary reason for moving away from Lambda the Node.js runtime version?
The runtime is what made us realize we’d need to upgrade each Lambda. There are other concerning factors too: there’s no way to deploy the entire stack on our local development machines to debug or develop new features. The runtime also plays a part in fixing existing bugs, as we aren’t clear on how to test and debug one particular Lambda without updating its runtime. On top of all this, my colleague and I are unfamiliar with the stack, as it was built and designed by two previous employees who have since left the company.
It’s a single-endpoint RESTful service that requires the maintenance of about 40 Lambdas behind API Gateway. It uses a ridiculous amount of duplicated code (I’ve heard of Lambda Layers, but I’m not sure how they actually work), and these are big, heavy functions (often many objects and methods implemented within each Lambda) that don’t seem to fit Lambda’s use case.
So the question is: do we rebuild the platform on a new runtime with the same infrastructure and the same roadblocks, or do we rebuild it in a way that’s comfortable for my colleague and me – as a monolithic application on an EC2 instance, proxied to by API Gateway?
We got our 200/200 Fibre installed today, so I’d really like to get the ethernet run done, at least to my desk.
Otherwise more work on Koalephant packages, and day to day client work.
Hey, just curious – what is Koalephant?
My company.
I plan on doing a bit of sailing and starting a project best described as “just-in-time cloud infrastructure for data pipelines” (working name: Jittaform). The idea is to combine Docker/Docker Compose and Terraform into a GitLab-CI-like YAML file that will allow someone to spin servers up and down based on the needs of the tasks they define.
I came across this: Just-in-time cloud infrastructure (https://www.capitalone.com/tech/cloud/just-in-time-cloud-infrastructure/), but it seems to want to redefine everything that has already been built/developed, hence my decision to use Terraform and Docker. I’m not adept with Terraform, but this seems like a good reason to learn more about it.
Hey, that’s a really neat idea!
Take a look at argoproj. It’s a workflow system built around ephemeral containers and kubernetes. It may be useful to see how they implement workflows.
Edit: link https://github.com/argoproj/argo
Thanks so much for the link! Argoproj seems very interesting! They have the workflow system already worked out. I opted for Docker-compose and Docker Swarm because I have more experience with these technologies, but this looks like a good choice as well. I will play around with it to see if Terraform can be tacked on.
You’re welcome!
I once had a thought that it would be interesting to see a treatment of category theory in APL. Then I went back and read Iverson, and he pretty much had it already. That guy was so far ahead, the rest of computing still hasn’t caught up.
Hey, I’ve been trying to create a personal website as well.
Currently it’s just a blog without any real content.
rafikhan.io
I’d love to take a look at yours as well for inspiration.
Totally. I ended up just linking the domain to my blog for now and utilizing that. This way I have something with relatively active content and a way to connect with me.
I’m going to be actively designing a full site in Sketch over the course of the next few weeks. My main concern though for this round was just having more content and a better way to connect with me.
https://sneakycrow.dev/
Wow. You’ve accomplished and learnt so much! I’m definitely bookmarking this post and I hope to work through and learn a lot from your journey.
Thank you for posting. Very inspirational!
Glad to hear you are doing what you enjoy and getting paid for it. The industry definitely needs more dedicated, self-directed people.
I’m curious to hear about your experience with writing for DigitalOcean. Did you just apply and get accepted? What is the process like?
You should learn about algorithms and data structures though: it’s not just helpful, it’s also a very interesting subject.
And so are types: a good type system is really about expressiveness with automated checking, rather than just the compiler preventing you from doing things it thinks make no sense. Languages with modern type systems like PureScript or Elm are gaining popularity in the JS ecosystem, and some classic languages can be compiled to JS: Facebook Messenger is written in OCaml, for example. For a teaser: https://perl.plover.com/yak/typing/notes.html has a live example of a type checker finding an infinite loop.
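To give a flavour of that (a tiny OCaml sketch, not the article’s example): a function whose inferred return type is a completely unconstrained type variable can never actually return normally, so a suspicious inferred type is often a sign of a loop.

let rec loop x = loop x
(* inferred type: 'a -> 'b *)
(* a function claiming to produce a value of any type at all can only diverge *)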
Regarding DigitalOcean, they found some articles I wrote on my own site and asked me if I’d be interested in writing for them, so I agreed to do the “How to Code in JavaScript” tutorial series. As for algorithms and data structures, (and types) I am interested in learning them; they’re on the top of my learning in public list. I’ve been playing around with TypeScript as well, to get familiar with more strict typing.
I started learning typed languages with Elm, and I found the experience really nice, even though I eventually “outgrew” it, mostly due to a lack of interesting features and a feeling that it’s kind of a dead end.
OCaml/ReasonML is great but not very beginner-friendly atm, though the community is growing and doing a great job at making it more approachable. TypeScript also has a lot of good points, but I don’t think one can use it to its full potential without “being immersed” in a fully typed language first.
So I definitely recommend Elm if you wanna get started with types and all that, but I wouldn’t invest in it for the long term.
I was considering learning Go to get more familiar with a typed language.
Go doesn’t really have much of a type system… If you’re mainly looking to learn Go by all means do it, but to really learn typed languages I suggest you look elsewhere.
Go types aren’t really anything to write home about, and there are special rules for builtins that you don’t get to use (and therefore learn about) properly. If you’re comfortable with JS, I’d suggest TypeScript. You can add it one file at a time, and it’s easy to tell it to shut up and trust your word if you’re doing something magic it can’t grok, or you’re just mid-way through converting a module.
Go’s approach to interfaces is pretty uncommon and worth understanding.
I agree on both counts. Doesn’t make Go a good place to start getting a handle on typed systems in general though, IMO. And to me, the most interesting thing about Go interfaces is the “static duck typing” aspect, which is exactly how TypeScript interfaces work, and is why you can migrate to TS a module at a time, using module-private typedefs even for inter-module communications. TS will happily let you do that, and still tell you when your structures aren’t compatible, which means you don’t have to have a single source-of-truth for any system-wide object shapes until you’re ready to do so.
A quick example. In Go, the usual approach to functions that may fail is to return a pair of an error value (which can be nil) and the actual value.
The caller must always remember to actually check the error.
In ML/Haskell, where you can have a “sum type” that can have multiple variants carrying different values, there are types like this:
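For instance, in OCaml (a minimal sketch; the standard library’s built-in result type has exactly this shape):

type ('a, 'e) result =
  | Ok of 'a      (* success, carrying the actual value *)
  | Error of 'e   (* failure, carrying an error value *)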
You cannot unwrap an (Ok x) without explicitly handling the other case; if you forget to, you get glaring compiler warnings (which you can promote to errors):
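A sketch of what that handling looks like (parse_int here is a hypothetical function returning such a result):

let report input =
  match parse_int input with
  | Ok n -> print_int n
  | Error msg -> prerr_endline msg

Drop the Error branch and the compiler emits warning 8 (“this pattern-matching is not exhaustive”), which a compiler flag can turn into a hard error.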
If you have a bunch of functions that may return errors, you can sequence them with a simple operator that takes a value and a function (the function must also return the Ok|Error type). If the value is an Error it just returns it, but if it’s (Ok x) then it returns (f x).
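A minimal sketch of such an operator in OCaml (the standard library calls it Result.bind; parse_int and check_positive are hypothetical steps that each return a result):

let ( >>= ) r f =
  match r with
  | Error e -> Error e  (* short-circuit: pass the error straight through *)
  | Ok x -> f x         (* success: feed the value to the next step *)

let parsed_square input =
  parse_int input >>= fun n ->
  check_positive n >>= fun n ->
  Ok (n * n)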
(Note: OCaml also has exceptions, and they are used just as widely as this approach; Haskell uses this approach exclusively.)
That learning in public list is really cool, what an awesome idea. Also, thanks for sharing this article :)
I’ve been trying to learn more about expressive type systems as opposed to compilers complaining about nonsensical code.
Could you please recommend some resources for further reading?
“OCaml from the very beginning” is a very nice book, and it’s not very expensive. For the non-strict way, http://haskellbook.com is good, but expensive.
If you want something free of charge, it’s a more difficult question. Stay away from “Learn You a Haskell”; it’s very bad pedagogy. Robert Harper’s “Programming in Standard ML” (https://www.cs.cmu.edu/~rwh/isml/) is great and free, but it’s in, well, Standard ML, the Latin of typed functional languages. You will have no problem switching to another syntax from it, but while you are in SML land, you are on your own, with essentially no libs, no tooling, and not many people to ask for help. Tutorials on https://ocaml.org are good but not very extensive.
Just finished a blog post about Full Automation, and I’ll probably do some more bugfixing on gambe.ro this afternoon.
Do you have a link? I’d love to read the post!
I do, but it’s in Italian: https://write.as/chobeat/la-piena-automazione-spiegata-al-mio-microonde
I might translate to English though, it’s very short.
I’m having some difficulty understanding the problem this solves. Can someone give me a use case for this?
There are a bunch of constructions that are common in dynamic languages which can’t be tidily expressed in most type systems.
This library implements tools to let many of those be type annotated without being too verbose.
If you have a background in strongly typed languages, those constructions would seem nonsensical; nevertheless, they are common.
It allows one to manipulate/compute/change types so that you can have higher type safety, thus making TS more flexible.
At work I’m adding code to embed some extra meta-data in the portable globe files we “cut” from Google Earth. Currently when our Android plugin imports a portable file it has to scan all of the imagery and terrain data packets looking for some metadata like boundaries, min/max zoom levels, and a few other bits of information. Needless to say, “walking” the whole file structure is a performance problem with larger files, so our solution is to pre-compute the metadata we need and embed that in the file at creation time. It’s been nice to actually write code after weeks of manual testing and bug fixing, and I’m learning a lot about the portable cut file format and the process for creating them, which is a lot of fun.
It’s a day off today, though, so I’m going for a longer bike ride, and then incorporating some recent upstream changes to Blend2D into my Common Lisp binding. Right now I can’t read or write files, so it’s critical to get the new APIs included. The downside to writing a binding to a pre-release library is that I seem to spend more time tracking changes and tweaking/fixing the binding than I do using it, but the recent changes are an improvement, and I’m learning the API as it evolves, so it’s really not too bad.
For the rest of the week, I’d like to get back to the animations I was creating with the Blend2D bindings. And I have some bike maintenance to do once some parts arrive.
I’ve also started going for daily walks with a new neighbor friend of mine. One downside of working from home is that it’s easy to stay in, and it’s nice to have the external motivation to go out and talk to somebody face to face.
I’m trying to pick up Common Lisp in my free time. I’ve realized that working on bindings is something I may have to do often if I try making a more practical application.
Are your bindings available online? If so, I’d love to learn from them
You may be surprised – after about 5 years of CL I’ve only had to create my own bindings a couple of times. The cl-autowrap and CFFI packages make it relatively simple to write C bindings. C++ is a different story, and I’m not sure there’s a great way to create those, especially for template-heavy libraries.
These Blend2D bindings are using cl-autowrap, which uses c2ffi and clang to generate bindings from a header file. The downside of cl-autowrap is that it creates (and exports) bindings to every function and type found while parsing a header file, including system functions and types.
I didn’t want to export all of those from the blend2d package, so I created a nested package named blend2d.ll which uses cl-autowrap and exports everything, and then another package, blend2d, which selectively exports functions from blend2d.ll. There may be a better way to do it, but this is working okay for now.
An alternative to cl-autowrap is to use CFFI directly. It’s easier to use for small libraries, or situations where you only need a handful of foreign functions. I used this technique a while back to write an incomplete binding to ZBar, a barcode scanning library.
I’m writing my blog with create-react-app. So far I’m having a lot of fun working out the subtleties of the design and what I want my blog to be like. Also brainstormed a bunch of ideas for blog posts.
I’m also working on putting out a stable release of https://nhooyr.io/websocket
And some other top secret stuff :)
I looked through the readme of the project and I just wanted to say I really appreciate that you had an entire section dedicated to justifying why the library is being written and a comparison to existing libraries.
Trying to juggle multiple projects while squeezing out a minimum viable Mitogen release supporting Ansible 2.8. Azure Pipelines is being an asshole, so I’ve downed tools for the evening.
Thanks for your work on Mitogen! I’ve started using Ansible at work this month and it’s been a real joy to use partly thanks to Mitogen.
What is Mitogen? I tried looking through their website but couldn’t really grok it.
It seems like either an extension to Ansible or an alternative runtime?