Cloudflare Tunnel is free and a good solution for those behind CG-NATs or an ISP firewall. It also offers effortless DoS protection.
I will admit, however, that I think it’s slightly “cooler” in some sense to host your site directly from your home, with no assistance from Cloudflare or other giant tech companies, even if you don’t really get much tangible benefit from doing it that way.
(By these standards of course, my personal site is rather lame because it’s just your standard Jekyll + GitHub Pages site.)
What are the risks of port forwarding and hosting on home network? I get the general risk of giving the public internet direct access to my home devices. But how do people specifically exploit this? It depends on me misconfiguring or not properly locking down the web server, right?
Pretty much, but nobody has ever made an unhackable server. So even if you “properly” configure the server it’s not 100% secure because nothing is.
I did get my router hacked once: third-party malicious software was installed on it, and it didn’t function until I got the NetGear people to fix it. That’s why I installed fail2ban, which has worked so far. But nothing is foolproof.
Let’s assume you forward port 443 to your Pi running Apache. You’re basically exposing the following bits of software to the Internet:
Your kernel’s TCP/IP stack
Apache
Any software you may choose to place behind an Apache reverse proxy
The biggest risk is an RCE in any of those pieces, because you’re truly pwned, but I’d lay pretty long odds against an RCE in the Linux network stack, and I don’t think your average Apache config is at much risk either – these things have both been highly battle-tested. Some sort of denial-of-service exploit is more likely but again, Linux+Apache have powered a huge chunk of the Internet for the last 25+ years. Now, if you write an HTTP server which executes arbitrary shell commands from the body of POST requests and proxy it behind Apache, you have only yourself to blame…
I expose HTTP and a few other services from my home network via port forwarding. I don’t lose sleep over it.
Transforming government services isn’t as easy as the tech bros and billionaires make it out to be.
But it often is simple. Don’t trust the people who are paid to “solve” a problem when they claim it is unsolvable. Organizations created to solve an issue may make it worse because their existence depends on the issue’s continued existence. I’d rather trust an outsider with a proven track record of simplifying processes and a general trail of success than someone whose employment and/or empire may well rely on a problem appearing unsolvable.
The worst thing that could happen to governments and politicians is people realizing we don’t really need them for most things where they have inserted themselves into our lives, so they keep certain problems going and unsolved, and sometimes actually cause a crisis and sell themselves as the solution.
Rest of the article was fine, idk why this obvious and absolutely misplaced shade throwing at the beginning was necessary.
Heh, yeah, I had the same thought. I worked at a startup that:
Had excellent staff who understood fast delivery and had an excellent track record of delivering excellent results in large bureaucratic organizations
Had direct access to the Digital Transformation group in our local state-level government as Paying Customer #1. They had a mandate to deliver solutions quickly and a team that was quite excellent. We had a fantastic working relationship with them.
Was continuously dragged down by other “stakeholders” within the government that, in my opinion, in many cases had very little business being involved but managed to convince the appropriate people that our projects were in-scope for them and that they needed to sign off on everything before each deployment.
One of my “favourite” moments was when the Director of Central IT raised a “security issue” very late in the process for a given deployment. This was in a very large stakeholder meeting.
Director: “Me and my staff have some significant concerns about the security of the deployment you’re about to do.”
Me: “Oh! Could you elaborate on that?”
Director: “I’ll follow up with an email. We will be voting to block the deployment until these concerns are addressed.”
After not getting any follow-up for a few days I started chasing him (because we were currently blocked from deployment…) and finally he responded with a PDF called “McAfee Top 10 Security Vulnerabilities in Web Applications”.
Me: “I’ve reviewed the document you sent me and, from my point of view, we’re not deficient on any of those 10 items. Is there a specific item you’re concerned about that we could address for you?”
Director: “If you can’t see where your deficiencies are based on that document I’m not sure I can help you.”
In the end? I had to put him on the spot in another stakeholder meeting a few weeks later and get him to say “there are no specific security concerns we have at this time” after he, several times, raised these vague concerns as a blocker but couldn’t articulate any specifics. Deployment delayed for weeks over nothing.
I’ve been involved in quite a few government modernization projects. I’ve also run into people with similar dispositions as this security director. In my experience these interactions have been a cultural mismatch rather than the rent seeking behavior described by the post you’re responding to. There has been a culture in most government organizations, particularly around IT security and other risk categories, that revolves around checklists. They will have concerns unless you’ve come to the review meeting with some positive evidence of your diligence in detecting errors in a bunch of different areas. So his concerns weren’t so much that his staff identified specific vulnerabilities but that you’ve asserted you’re ready to deploy without furnishing evidence to support the claim. It all comes down to whether the leader of their department has something to provide to legislative committees, IGs, and other oversight groups to prove they weren’t negligent if/when something goes awry. I’ve had quite a bit of success helping these folks get comfortable with how modern development practices embed a lot of their checklists in the earlier stages of development.
If this happened in India I’d assume the person was looking for a bribe. The culture is that people get on committees with the mindset of feudal guards: they’re there not as guardians of the process but as extortionists. Society suffers.
I have heard the same sort of stories from people trying to sell products and services to any large organization, whether in the public or private sector. It sticks out because public organizations are wasting the public’s time and money, instead of the shareholders’.
IME, engineers rarely say a problem like this is “unsolvable”. It almost always boils down to people in charge not wanting to pay to do it the right way.
Don’t trust the people who are paid to “solve” a problem when they claim it is unsolvable. Organizations created to solve an issue may make it worse because their existence depends on the issue’s continued existence. I’d rather trust an outsider with a proven track record of simplifying processes and a general trail of success than someone whose employment and/or empire may well rely on a problem appearing unsolvable.
How would that even work? This outsider also depends on the issues for their existence, so the same incentives apply.
Outsiders can bring a fresh perspective, insiders have intimate domain knowledge that is not easily replicated. Both are valuable, and that has nothing to do with governments per se.
I silently deleted our PR checklist on one of our repos, just to see if anyone on the team would notice.
After a couple of months with no complaints, and no problems, I asked whether anyone noticed, and whether they would mind deleting it on the remaining repos… so that happened, and no one has missed them.
Our team conventions are enough:
the Jira ticket should be mentioned in the commit message (drive-by polishing does not need a ticket)
atomic commits
short-lived branches
a positive review by anyone in the company is all that’s required
No complex PR templates necessary. Git blame will give you good enough context to understand most changes.
For us, reviews are not for gatekeeping, but simply for getting a second pair of eyes on the changes we already know are going to happen, and for knowledge sharing. All important changes are discussed up front, and code style/static analysis is handled by automated tools as long as they have a low rate of false positives, so little effort is spent on nitpicking.
I recently went through the opposite of this, which I’d call “value object obsession”. I ended up with a lot of building and destructuring of certain value objects, which really didn’t help me much at all.
My rule of thumb now is “has just using an int caused a test failure/bug?” If so, I wrap that up. If not, I try and avoid over engineering.
This is the way. Unless the language makes creating and wrangling all these extra types almost painless, I would stay away from it. Otherwise 80% of the code devolves into handling all the transformations.
For real. As much as I do like TypeScript, its value is realized in larger projects, where contracts across the code are needed because of the footguns you can run into with JavaScript. But even then, modern JavaScript is good enough (IMO!) that YAGNI applies to TypeScript.
Different strokes, I guess. I wouldn’t write anything nontrivial in pure JS; it’s far too easy to misspell something or pass the wrong arg type or get args in the wrong order, and then not find out until runtime and have to debug a dumb mistake TS would have flagged the moment I typed it.
ha wild that typescript is the controversial part here. i haven’t encountered anyone advocating for full-stack javascript in years.
i think the overhead that defensive programming adds when using javascript justifies the added build process/tooling of typescript in anything but smaller scripts.
I’ve been building a medium-sized internal tool for my website, and I’ve debated many times whether I should switch to TypeScript for the superior IDE code analysis.
I chose not to, because JavaScript is good enough, and I really don’t wanna pull in the complexity of JavaScript build systems into my codebase. I really like that I can do cargo run and watch it go, without having to deal with npm or anything.
I’m sure you’re aware of this already, but just in case: have you tried using JSDoc-flavoured TypeScript? You can write pretty much all TypeScript types as JSDoc comments in a regular JS file. That way you get all the code analysis you want from your IDE (or even from the TypeScript type checker using something like tsc --noEmit --allowJs), but you don’t need a separate build step. The result is typically more verbose than conventional TypeScript, but for simple scripts it should work really well. I know the Svelte JS framework has gone down this route for various reasons — if you search for that, there might be some useful resources there.
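In case an illustration helps, here is a minimal sketch of what the JSDoc approach looks like in practice (the function and type names are made up; the point is that the file stays plain JavaScript while `tsc --noEmit --allowJs --checkJs` can still type-check it):

```javascript
/**
 * Types live entirely in comments; there is no build step.
 * @param {string} name
 * @param {number} [retries] optional, defaults to 3
 * @returns {{ name: string, retries: number }}
 */
function makeJob(name, retries = 3) {
  return { name, retries };
}

/** @type {Array<{ name: string, retries: number }>} */
const queue = [];
queue.push(makeJob("resize-images"));

// With checkJs enabled, tsc would flag a call like:
//   makeJob(42);   // error: number is not assignable to string
```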
I’ve recently started using a small amount of JSDoc for IDE completion, yeah. But I have written a small Lua codebase using this approach, and I can’t say I’m a huge fan; it gets old pretty quick.
Ended up being there for the ride of the project becoming the most popular open source music streaming bot with millions of users. We innovated on many fronts. Learned a ton about scaling software that does heavy computation, and the whole Java ecosystem in general.
As a result, for weeks a job ad on GitHub was haunting me until I finally applied, for what was basically a personal dream job at a gaming company in my hometown whose game I had obsessively and passionately played at an earlier age, which required exactly the skills/technology I was using for building open source stuff on GitHub.
Without the involvement in the open source development of the music streaming bot, neither would that job ad targeting have worked, nor would I have had the courage to attempt jumping into a mid-level role at such a locally popular company. Still there after over 5 years.
What they are describing is what the industry widely calls integration tests, not unit tests.
What I have encountered in the industry is that “unit” gets conflated with “class” - which then leads to shitty “unit” tests. People with this understanding of “unit”, rightly so, write rants about how unit tests suck.
Consider a different understanding of “unit”: several classes and perhaps even modules/packages working together to provide some functionality. Testing these makes for great tests, but in my experience this is often called an “integration” test in the industry.
People pro “unit” tests often have this kind of definition of a unit.
The whole unit vs. integration testing debate to me feels like it’s simply a misunderstanding, or maybe even a disagreement, on what “unit” means.
There was some elimination / guesswork involved, but I did not hate it too much, because every day when I read someone else’s code (or my own past code), I need to guess a lot about the intentions of the author and even the codebase itself based on incomplete information, and attempt to make the best judgement possible. So this was a good exercise imho.
I wrote you an email about Q3 (but I’m going to share it again for everyone) where I had trouble differentiating between B and C, because they are not mutually exclusive in certain languages, including the one you have chosen for the quiz.
In Java, one could have an enum with parameters, e.g.
public enum ProductDisplayType {
    COMPACT(140, 200, 200), // or introduce the dimension type
    DETAILED(500, 800, 600),
    ;

    public final int maxDescriptionLength;
    public final int width;
    public final int height;

    ProductDisplayType(int maxDescriptionLength, int width, int height) {
        this.maxDescriptionLength = maxDescriptionLength;
        this.width = width;
        this.height = height;
    }
}
and reap both the benefits of avoiding conditionals when accessing the parameters, AND pattern matching, exhaustive switch statements, etc. (= compile-time checks, yay) for other use cases.
I think the question is missing an explanation of why using an enum with parameters would be worse than a plain class, or needs an improvement to disambiguate B and C.
The solution you shared is quite equivalent to the official solution. Note that it does not “contain an enum {COMPACT, DETAILED} or equivalent” but does contain the exact substring “(140, 200, 200)”. If you write it out using sum, product, and singleton types, you’ll find the solution you shared is quite close to the official solution except for being less open, and not at all isomorphic to the enum. See https://lobste.rs/s/ukoj9o/software_design_quiz#c_an0m2z for some discussion.
I wanted to avoid such Java-isms in the explanation, but there’s room for it in the extra discussion, and it will be added in the next version of the quiz.
It says simple in the title but then pulls in NPM and even TypeScript - nice bait.
There’s probably no way around npm, but TypeScript is absolutely not worth the trouble for your side projects/hacks, so just skip it. Embrace Vanilla JS. If you are coming from a background of solid typed languages, TypeScript is not what you think it is. It prevents almost none of the JS problems; you will have plenty of runtime errors. On the contrary, it gives you a false sense of safety that will bite you over and over again. Embrace Vanilla JS and learn its quirks, instead of learning a leaky, thin veneer on top of it that brings more problems than it solves (in side projects / hacks).
Wow, couldn’t disagree more. As a static-language guy, discovering TypeScript made me willing to work on browser programming again; it is so much better than plain JS. It doesn’t catch all runtime type problems, but I find its type-checking catches a ton of issues at compile time, enough that I have fairly few runtime issues.
It took me a while to understand Twitter. It’s great. Please don’t change it. There’s a ton of interesting people there, I can aggressively filter my feed and pull it whenever I want to. Yes threads are clunky but it’s part of the charm. The flexibility it allows for communication is amazing.
Forcing thoughts into short pieces is not necessarily a bad thing. Easily linking and embedding images between posts is just good enough.
If the experience was more polished or there were more dedicated ways to do longer form writing, it really wouldn’t be the same. It would appeal to different people who write and consume different style of content. There are blogs for that, please stay there. No need to turn Twitter into a blogging platform.
AVM’s FRITZ!Boxes, which are quite popular in Germany, have been doing the same thing for decades. They use fritz.box as their domain, which was probably pretty safe to use when TLDs were limited to countries.
Nice. You have discovered two thirds of what Java’s Spring Framework has been doing since 2007 =)
What you might want to add is a Repository layer; that should help with some of the mentioned cons, such as mocking/testability. There are plenty of articles out there on the Controller-Service-Repository pattern.
Good to see the Go ecosystem is maturing and taking over the sensible parts of the enterprise-y software that came before.
I think the difference here is that the Go standard library offers just an HTTP server interface and a database interface, and the Go ecosystem has created modular bits and bobs for particular functionality that you can choose to include, rather than having a single “framework” to work within. If you want a repository pattern, you can back it with handrolled SQL, use Sqlc.dev for semi-automatic generation from SQL, use Gorm for a traditional ORM, or Ent for a Facebook graph-y ORM, etc. There’s not a single blessed way to do it, which has both pros and cons.
Not universally applicable, but another advantage of print based debugging is that it pushes the system towards being more observable and thus helps with debugging production issues. If you can’t debug it from the logs during development, what hope do you have of debugging a prod issue?
Or, looking at it from the other end, if you have to put in enough logging to debug prod issues anyway, then the need for a debugger largely disappears as a result.
A solid majority of the software I’ve been responsible for at my day job has been some kind of web server process that is breaking for a specific user in prod. There is no way for me to use a debugger to diagnose the problem if I first hear about it in the context of error alarms reporting that 500 responses are getting served to live customers. Print debugging is the only tool I have.
There’s a key advantage to trace-based debugging: it can capture flows, rather than point in time. In debugging clang, for example, I generally see a crash when walking the AST that is caused by an error in the code that created that AST node. By the time that the debugger attaches to the crash, all of the state that was associated with the incorrect code is gone, other than the output. The debugger can let me introspect a load of ephemeral state associated with the victim code, but that isn’t useful.
Print-based debugging is the simplest case of trace-based debugging, where the traces are unstructured and ad-hoc. Structured logging is better, but requires a bit more infrastructure.
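As a tiny illustration of the step up from ad-hoc prints to structured logs (the event names and helper here are made up, not from the parent comment), the extra "infrastructure" can be as small as emitting one JSON object per line:

```javascript
// Ad-hoc trace: easy to write, hard to filter or correlate later:
//   console.log("handling req", reqId, "user", userId);

// Minimally structured trace: each line is machine-parseable, so any
// log pipeline (or plain grep + jq) can filter and correlate events.
function formatEvent(event, fields) {
  return JSON.stringify({ ts: Date.now(), event, ...fields });
}

function logEvent(event, fields) {
  console.log(formatEvent(event, fields));
}

logEvent("request.start", { reqId: "r-123", userId: 42 });
logEvent("request.end", { reqId: "r-123", status: 200 });
```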
The biggest value of tools like valgrind is not that they catch memory-safety bugs, it’s that they automatically correlate these bugs with temporally distant elements. When I have a use-after-free bug in a program that I run with valgrind, it tells me where the use is but it also tells me when the object was allocated and when it was freed. These two extra pieces of information make it incredibly easy to fix the bug in comparison to having only the location of the use of the dangling pointer.
Depends on the type of project I guess, but in my corner of the industry, that would be rather complicated due to the deployment and security model. Logs are much easier.
I will preface this by saying it is always dangerous to perturb something by using atypical mechanisms (debugging APIs, e.g. ptrace, but also in many cases runtime flags that turn on rarely used instrumentation).
That said, don’t single step. Even outside of production, threads or having a GUI tend to make that awkward. Instead take advantage of programmable breakpoints and/or memory dumps. In the simplest form you can do printf debugging, but with printfs that didn’t occur to you before the process was started.
Most operations are subject to timeouts that are immediately triggered by a debugger breakpoint. Which means every debugged operation basically immediately fails. Debuggers don’t really work in distributed systems, unfortunately.
They don’t seem to work on Safari on the desktop, either – some of them don’t work at all, some of them look and act a little weird (e.g. the horizontal wipe animation has some artefacts, and the way focus works on it is not very consistent). This might be WebKit related – I’m usually using (another) WebKit-based browser and they don’t seem to work here, either.
On the browsers where they work they actually look pretty cool, and I think the author did a really good job of putting together cool demos that are also readable and pretty straightforward.
This is a very cool showcase for the author’s skill and technological inventiveness. Not so much for the underlying technology though. If this is what animating things on the web is shifting towards… on the one hand, I guess it beats doing it in JS, but on the other hand, boy am I glad I stopped doing any of this by the time Flash was on its way out. This does not look fun at all.
Firefox android worked for me for most of them…
Timing seems to be off on the iris and clock, but they still “fail” gracefully – ie transition when intended, just without the effect. The rest all seem to behave the same as on desktop.
JetBrains needed to fire their UI team years ago, when they proudly wrote that blog post about how they made every icon look the same and monochrome. They obviously don’t understand anything about their end users, humans, and how brains and eyes work.
IntelliJ has been unusable without theming plugins since then, and seeing how one of my coworkers is doing perfectly fine with Eclipse, I always thought it’d be my first choice to try out when the JetBrains design team forces their next abomination upon me. Looks like the time has come.
Agreed, iconography has regressed so much to a point where people have no idea what its purpose is. Remember FamFamFam icons [1], circa 2005? I think humanity peaked at this time. Few designers have conviction of their own anymore.
I really enjoy the icons of Haiku, and the colours… why do we have to live in a world without colour, we have a palette of at least 16M nowadays, and even more with HDR displays…
It puts a smile on my face whenever I spot Silk/Mini icons being used online. They’ve been around long enough that they sort of just blend into the background of the internet. Big thank you to Mark James for keeping the site around, it’s like a time capsule now.
In a world where many/most people used this, there would be a lot of redundant computation occurring.
I wonder if such a system could/should be built in a way that this could be exploited - effectively memoising it across all users. (In the same way that cloud storage exploits redundancy between users by using content addressable storage on the backend so they only need to store one copy of $POPULAR_FILE)
e.g. for compilation you’d probably want:
content-based-addressing of the data input (I am compiling something equivalent to foo.c)
content-based-addressing of the executable input (I am running gcc 2.6.1 on x86)
some kind of hash of the execution environment (cpuid flags, env vars, container OS?)
and probably some other bits and pieces (does executable access system calls like local “current time”). Could probably be made to work if the executables agreed to play nice and/or run inside a decent sandbox.
It would be challenging, but also very cool, to get this right.
I was using the example of compilation, but I think the general question is interesting. “We are performing this computation, with this executable code, on this input data, in this runtime environment (which is a special case of input data)”
If we can determine that (all essential) characteristics of these are the same as a previous run, then we can lookup the result.
I think there are interesting questions as to what constitutes inputs here (e.g. no-pure things like ‘time’ and ‘network’) and - moreso - what makes the executable code “the same” for this purpose. (What level do you work at - source code, binary etc).
Gradle has a very flexible task system - you can completely define the relevant inputs and dependencies of tasks by yourself. Often that will just be files, and some dependencies on the outputs of other tasks. The tricky part is usually defining all of them correctly. But once you do that, magic happens - tasks can be cached, and if you set up a distributed cache, it may even be shared amongst multiple machines (devs, CI, etc.)
A task doesn’t have to be compilation, can be anything that takes inputs and produces outputs, maybe you want to do some code generation, or whatever. I’m sure there are other build systems that are backed by a similarly flexible high-level task system, Gradle is just the one that I happen to know.
I’m not convinced this is the right approach for languages that don’t force error checks like go.
Most of the time I don’t want to use errors and exceptions to control the logic flow; that means they need to have all the relevant information attached to them at the place they happen, as they are going to be escalated.
Whenever I do expect a method call to fail frequently and expectedly, I prefer to build a dedicated return type that forces the caller to check for errors.
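As an illustration in JavaScript (the parent comment is language-agnostic, and the names here are made up), a dedicated return type can be as simple as a tagged result object that the caller has to branch on before reaching the value:

```javascript
// A tagged result: the caller must check `ok` to know whether `value`
// or `error` is present, making the failure case hard to ignore.
function parsePort(s) {
  const n = Number(s);
  if (!Number.isInteger(n) || n < 1 || n > 65535) {
    return { ok: false, error: `invalid port: ${s}` };
  }
  return { ok: true, value: n };
}

const r = parsePort("8080");
if (r.ok) {
  console.log("listening on", r.value);
} else {
  console.error(r.error);
}
```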
I’m kind of disappointed, since I was expecting something totally different based on the title.
Having done some minor work on Android applications throughout my high school and university days, I can name a much more significant problem with Gradle than Groovy’s syntax: it’s slow! This is mentioned only in passing in this article, but Gradle is painfully slow, and even the smallest, non-Android Java projects take a significant amount of time to build, every time. It’s awful if you’re trying to iterate or experiment! I’ve heard this complaint from others, too.
I really don’t buy into the provided criticisms of Groovy’s syntax. To me, the project definitions are quite readable, and I don’t see many reasons to think about how my project description is computed (and thus how and when the lambdas are invoked). It matters in the case of side effects like printing messages, but then, where did you expect a message to be printed? If you think of a task lambda as “where you describe a task” instead of “the task”, then it’s not all that surprising that I/O happens at configure time.
And then there are complaints about objects that are “just there”, like tasks and ext. If Gradle is a domain specific language, then these are just its “standard library”. print is “just there” in Python, Math modules are auto-imported in many languages, Make has the “phony” special case. Why should tasks be treated differently? Not being familiar with a language’s standard library is not a good reason to be complaining about the language.
The variable scoping mechanism is also not that unusual. In, say, Ruby, you can also access local variables from a lambda / block, but not from a function definition. Indeed, the former creates a closure (which, hey, is exactly what the Groovy guys call it!), and that maintains access to the variables that were around when it was declared, while the latter creates a function, which does not have access to the surrounding variables. Having to use a class definitely is a limitation of Groovy, but then, having static fields in that class makes sense, since static variables is precisely how you make “globals” in Java. And if you want a variable accessible from all functions in your file, is that not a global?
I do agree with the “one way to do things” sentiment, though. Groovy seems to provide a lot of flexibility in expressing even the most minute things, which can be paralyzing for beginners and frustrating for people working in teams. Unfortunately, most languages flexible enough to be bent into a build system will probably be flexible enough to allow many different approaches to solving problems.
Slow builds suck! There are a few things you can/should do to fully unlock Gradle’s potential:
Gradle has two phases: configuration and execution. The configuration phase always has to run, so make sure you don’t have any expensive calls that get run there. This often happens when people write imperative stuff into e.g. their task configurations. Make sure you don’t do that; instead, ideally use plugins via buildSrc to contain the imperative logic. Also, the latest versions ship with a cache for the configuration phase.
Make sure build caches, parallel builds in Gradle, and incremental options for your compilers are enabled.
If you are writing your own tasks, make sure their inputs and outputs are well defined. Gradle can then cache them. You can run your builds with the --info flag to see the reasons why Gradle is considering a task out-of-date. Maybe some of your tasks are non-deterministic? That can easily happen, but once you know the reason it’s also often easily fixed.
Getting the caching working and minimizing work done in the configuration phase are the main ingredients to get faster builds for iterating.
For larger builds, you might want to give the Gradle daemons larger heaps (I think 256m is the default setting?) so they don’t get thrashed by GC.
I would suggest cloudflared (cloudflare proxy) versus opening a port on your home router and port forwarding.
cloudflare regularly blocks my access to sites, from both home and work, so I am not a fan of cloudflare services…
Can the cloudflare proxy reach the server without opening a port, etc.?
Ah, I did not read close enough. This thing creates a tunnel: https://github.com/cloudflare/cloudflared
Oh I didn’t know they had a free tier but it looks like they do! I’ll look into it.
Also are you the same whalesalad on HN that gave me the advice on the browser text width?
But it often is simple. Don’t trust the people who are paid to “solve” a problem when they say it is unsolvable. Organizations created to solve an issue may make it worse, because their existence depends on the issue’s continued existence. I’d rather trust an outsider with a proven track record of simplifying processes and a general trail of success than someone whose employment and/or empire may well rely on a problem appearing unsolvable. The worst thing that could happen to governments and politicians is people realizing we don’t really need them for most things where they have inserted themselves into our lives, so they keep certain problems going and unsolved, and sometimes actually cause a crisis and sell themselves as the solution.
Rest of the article was fine, idk why this obvious and absolutely misplaced shade throwing at the beginning was necessary.
Also don’t trust people who say it is simple when they don’t understand the whole problem, and don’t have the time and budget constraints.
Heh, yeah, I had the same thought. I worked at a startup that:
One of my “favourite” moments was when the Director of Central IT raised a “security issue” very late in the process for a given deployment. This was in a very large stakeholder meeting.
Director: “Me and my staff have some significant concerns about the security of the deployment you’re about to do.”
Me: “Oh! Could you elaborate on that?”
Director: “I’ll follow up with an email. We will be voting to block the deployment until these concerns are addressed.”
After not getting any follow-up for a few days I started chasing him (because we were currently blocked from deployment…) and finally he responds with a PDF called “McAfee Top 10 Security Vulnerabilities in Web Applications”
Me: “I’ve reviewed the document you sent me and, from my point of view, we’re not deficient on any of those 10 items. Is there a specific item you’re concerned about that we could address for you?”
Director: “If you can’t see where your deficiencies are based on that document I’m not sure I can help you.”
In the end? I had to put him on the spot in another stakeholder meeting a few weeks later and get him to say “there are no specific security concerns we have at this time” after he, several times, raised these vague concerns as a blocker but couldn’t articulate any specifics. Deployment delayed for weeks over nothing.
I’ve been involved in quite a few government modernization projects, and I’ve also run into people with similar dispositions as this security director. In my experience these interactions have been a cultural mismatch rather than the rent-seeking behavior described by the post you’re responding to.

There has been a culture in most government organizations, particularly around IT security and other risk categories, that revolves around checklists. They will have concerns unless you’ve come to the review meeting with some positive evidence of your diligence in detecting errors in a bunch of different areas. So his concerns weren’t so much that his staff identified specific vulnerabilities, but that you asserted you were ready to deploy without furnishing evidence to support the claim.

It all comes down to whether the leader of their department has something to provide to legislative committees, IGs, and other oversight groups to prove they weren’t negligent if/when something goes awry. I’ve had quite a bit of success helping these folks get comfortable with how modern development practices embed a lot of their checklists in the earlier stages of development.
If this happened in India I’d assume the person was looking for a bribe. The culture is that people get on committees with the mindset of feudal guards: they’re there not as guardians of the process but as extortionists. Society suffers.
I have heard the same sort of stories from people trying to sell products and services to any large organization, whether in the public or private sector. It sticks out because for public organizations they’re wasting the public’s time and money, instead of the shareholder’s.
IME, engineers rarely say a problem like this is “unsolvable”. It almost always boils down to people in charge not wanting to pay to do it the right way.
How would that even work? This outsider also depends on the issues for their existence, so the same incentives apply.
Outsiders can bring a fresh perspective, insiders have intimate domain knowledge that is not easily replicated. Both are valuable, and that has nothing to do with governments per se.
Can also recommend Greenmail for similar purposes in the Java ecosystem.
I silently deleted our PR checklist on one of our repos, just to see if anyone on the team would notice.
After a couple of months with no complaints, and no problems, I asked whether anyone noticed, and whether they would mind deleting it on the remaining repos… so that happened, and no one has missed them.
Our team conventions are enough:
No complex PR templates necessary. Git blame will give you good enough context to understand most changes.
For us, reviews are not for gatekeeping, but simply getting a second pair of eyes to check the changes we already know are going to happen, and knowledge sharing. All important changes are discussed up front, and code style/static analysis is dealt with automated tools as long as they have a low rate of false positives, so little effort is spent on nitpicking.
I recently went through the opposite of this, which I’d call “value object obsession”. I ended up with a lot of building and destructuring of certain value objects, which really didn’t help me much at all.
My rule of thumb now is “has just using an int caused a test failure/bug?” If so, I wrap that up. If not, I try and avoid over engineering.
This is the way. Unless the used language makes creating and wrangling all these extra types almost painless, I would stay away from it. Otherwise 80% of the code devolves in handling all the transformations.
Choose one.
For real. As much as I do like TypeScript, its value is realized in larger projects, where contracts across code are needed because of the footguns you can run into in JavaScript. But even then, modern JavaScript is good enough (IMO!) such that YAGNI TypeScript.
Different strokes, I guess. I wouldn’t write anything nontrivial in pure JS; it’s far too easy to misspell something or pass the wrong arg type or get args in the wrong order, and then not find out until runtime and have to debug a dumb mistake TS would have flagged the moment I typed it.
(Why yes, I am a C++/Rust programmer.)
ha wild that typescript is the controversial part here. i haven’t encountered anyone advocating for full-stack javascript in years.
i think the overhead that defensive programming adds when using javascript justifies the added build process/tooling of typescript in anything but smaller scripts.
There has been some talk about it recently, mostly started by DHH’s No Build blogpost
funny enough it is one of plainweb’s main principles to not have build processes (well almost, it uses esbuild until node can be replaced by bun)
especially frontend build processes are a major source of complexity that are imo not worth it for most web apps.
I’ve been building a medium-sized internal tool for my website, and I’ve debated many times whether I should switch to TypeScript for the superior IDE code analysis.
I chose not to, because JavaScript is good enough, and I really don’t wanna pull in the complexity of JavaScript build systems into my codebase. I really like that I can do
cargo run and watch it go, without having to deal with npm or anything.

I’m sure you’re aware of this already, but just in case: have you tried using JSDoc-flavoured TypeScript? You can write pretty much all TypeScript types as JSDoc comments in a regular JS file. That way you get all the code analysis you want from your IDE (or even from the TypeScript type checker using something like tsc --noEmit --allowJs), but you don’t need a separate build step. The result is typically more verbose than conventional TypeScript, but for simple scripts it should work really well. I know the Svelte JS framework has gone down this route for various reasons; if you search for that, there might be some useful resources there.

I’ve recently started using a small amount of JSDoc for IDE completion, yeah. But I have written a small Lua codebase using this approach, and I can’t say I’m a huge fan; it gets old pretty quick.
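For anyone curious, the JSDoc-flavoured approach looks roughly like this; it's a minimal sketch with made-up names, checked with something like tsc --noEmit --allowJs --checkJs:

```javascript
// Plain JavaScript file: the JSDoc comments below are just comments at
// runtime, but the TypeScript checker reads them as type annotations,
// so you get type errors with no build step at all.

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

/** @typedef {{ name: string, port: number }} ServerConfig */

/** @type {ServerConfig} */
const config = { name: "demo", port: 8080 };

// add("1", 2) would be flagged by tsc: string is not assignable to number.
console.log(add(config.port, 1)); // prints 8081
```

Since the annotations live in comments, the file still runs directly under Node or the browser with zero tooling; the checker is purely optional.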
Somehow back in 2017 I got involved with Discord music streaming bots over an annoying lack of bold font when listing the queued songs.
Ended up being there for the ride of the project becoming the most popular open source music streaming bot with millions of users. We innovated on many fronts. Learned a ton about scaling software that does heavy computation, and the whole Java ecosystem in general.
As a result, for weeks a job ad on GitHub was haunting me until I finally applied, for what was basically a personal dream job at a gaming company in my hometown whose game I had obsessively and passionately played at an earlier age, which required exactly the skills/technology I was using for building open source stuff on GitHub.
Without the involvement in the open source development of the music streaming bot, neither would that job ad targeting have worked, nor would I have had the courage to attempt jumping into a mid-level role at such a locally popular company. Still there after over 5 years.
What they are describing is what the industry widely calls integration tests, not unit tests.
What I have encountered in the industry is that “unit” gets conflated with “class” - which then leads to shitty “unit” tests. People with this understanding of “unit”, rightly so, write rants about how unit tests suck. Consider a different understanding of “unit”: several classes and perhaps even modules/packages working together to provide some functionality. Testing these makes for great tests, but in my experience this is often called an “integration” test in the industry. People pro “unit” tests often have this kind of definition of a unit.
The whole unit VS integration testing debate to me feels like it’s simply a misunderstanding or maybe even disagreement on what “unit” means.
Yes. It is harmful to contrast unit and integration tests because:
https://matklad.github.io/2022/07/04/unit-and-integration-tests.html
Agree that the distinction is not worth wringing your hands over. The things that really matter:
You want tests that are fast and portable. If using a real filesystem or network socket is fast, then run that stuff in CI on every push.
Indeed! Besides performance, you also want to look at:
This was good, thank you.
There was some elimination / guesswork involved, but I did not hate it too much. Every day, when I read someone else’s code (or my own past code), I need to guess a lot about the intentions of the author, and about the codebase itself, based on incomplete information, and attempt to make the best judgement possible. So this was a good exercise imho.
I wrote you an email about Q3 (but going to share again for everyone) where I had trouble differentiating between B and C, because they are not mutually exclusive in certain languages, including the one you have chosen for the quiz.
In Java, one could have an enum with parameters, e.g.
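A minimal sketch of that idea (the names are hypothetical, and the width values here just loosely echo the "(140, 200, 200)" constants discussed in the thread):

```java
// Sketch: a Java enum whose constants carry parameters. Reading the
// parameter needs no conditionals, while switches over the enum still
// get exhaustiveness checking from the compiler.
public class EnumDemo {
    enum Mode {
        COMPACT(140),
        DETAILED(200);

        private final int width;

        Mode(int width) {
            this.width = width;
        }

        int width() {
            return width;
        }
    }

    public static void main(String[] args) {
        // No switch/if needed to get the per-constant value:
        System.out.println(Mode.COMPACT.width()); // prints 140
    }
}
```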
and reap both the benefits of avoiding conditionals when accessing the parameter, AND pattern matching, exhaustive switch statements, etc. (= compile-time checks, yay) for other use cases. I think the question is missing an explanation of why using an enum with parameters would be worse than a plain class, or needs an improvement to disambiguate B and C.
Hi Napster,
The solution you shared is quite equivalent to the official solution. Note that it does not “contain an enum {COMPACT, DETAILED} or equivalent”, but it does contain the exact substring “(140, 200, 200)”. If you write it out using sum, product, and singleton types, you’ll find the solution you shared is quite close to the official solution, except for being less open, and not at all isomorphic to the enum. See https://lobste.rs/s/ukoj9o/software_design_quiz#c_an0m2z for some discussion.

I wanted to avoid such Java-isms in the explanation, but there’s room for it in the extra discussion, and it will be added in the next version of the quiz.
BTW, I checked spam and don’t see your E-mail.
It says simple in the title but then pulls in NPM and even TypeScript - nice bait. There’s probably no way around npm, but TypeScript is absolutely not worth the trouble for your side project/hacks, just skip it. Embrace Vanilla JS. If you are coming from a background of solid typed languages, TypeScript is not what you think it is. It prevents almost none of the JS problems, you will have plenty of runtime errors. On the contrary, it gives you a fake sense of safety that will bite you over and over again. Embrace Vanilla JS and learn its quirks, instead of learning a leaky, thin veneer on top of it that brings more problems than it solves (in side projects / hacks).
Wow, couldn’t disagree more. As a static-language guy, discovering TypeScript made me willing to work on browser programming again; it is so much better than plain JS. It doesn’t catch all runtime type problems, but I find its type-checking catches a ton of issues at compile time, enough that I have fairly few runtime issues.
It took me a while to understand Twitter. It’s great. Please don’t change it. There’s a ton of interesting people there, I can aggressively filter my feed and pull it whenever I want to. Yes threads are clunky but it’s part of the charm. The flexibility it allows for communication is amazing.
Forcing thoughts into short pieces is not necessarily a bad thing. Easily linking and embedding images between posts is just good enough.
If the experience was more polished or there were more dedicated ways to do longer form writing, it really wouldn’t be the same. It would appeal to different people who write and consume different style of content. There are blogs for that, please stay there. No need to turn Twitter into a blogging platform.
AVM’s FRITZ!Boxes, which are quite popular in Germany, have been doing the same thing for decades. They use fritz.box as their domain, which was probably pretty safe to use back when TLDs were limited to countries.
Fritz!Boxes use their DNS server, they do not man in the middle port 53. Or at least mine does.
Yeah, I had various Fritz!Boxes over the years, and if you use another DNS server on a machine, the fritz.box name just fails to resolve.

Netgear business wifi access points do the same; if you’re using their DNS then there’s an easy config host.
Nice. You have discovered two thirds of what Java’s Spring Framework has been doing since 2007 =)
What you might want to add is a Repository layer, that should help with some of the mentioned cons such as mocking/testability. There’s plenty of articles out there on the Controller-Service-Repository pattern
Good to see the Go ecosystem is maturing and taking over the sensible parts of the enterprise-y software that came before.
I think the difference here is that the Go standard library offers just an HTTP server interface and database interface, and the Go ecosystem has created modular bits and bobs for particular functionality that you can choose to include, rather than having a single “framework” to work within. If you want a repository pattern, you can back it with handrolled SQL, use Sqlc.dev for semi-automatic generation from SQL, use Gorm for a traditional ORM, or Ent for a Facebook graph-y ORM, etc. There’s not a single blessed way to do it, which has both pros and cons.
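For the curious, a minimal sketch of what the service/repository layering can look like in plain Go; all names are illustrative, not from any particular framework:

```go
package main

import "fmt"

type User struct {
	ID   int
	Name string
}

// Repository: the only layer that knows about storage.
type UserRepository interface {
	FindByID(id int) (*User, error)
}

// In-memory implementation; a real one might wrap database/sql or an ORM.
type memoryUserRepo struct {
	users map[int]*User
}

func (r *memoryUserRepo) FindByID(id int) (*User, error) {
	u, ok := r.users[id]
	if !ok {
		return nil, fmt.Errorf("user %d not found", id)
	}
	return u, nil
}

// Service: business logic, depending only on the repository interface,
// so tests can swap in a fake repo without touching a database.
type UserService struct {
	repo UserRepository
}

func (s *UserService) Greeting(id int) (string, error) {
	u, err := s.repo.FindByID(id)
	if err != nil {
		return "", err
	}
	return "Hello, " + u.Name, nil
}

func main() {
	repo := &memoryUserRepo{users: map[int]*User{1: {ID: 1, Name: "Ada"}}}
	svc := &UserService{repo: repo}
	msg, _ := svc.Greeting(1)
	fmt.Println(msg) // prints "Hello, Ada"
}
```

The key design choice is that the service only sees the interface, which is what makes the mocking/testability point above work.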
Not universally applicable, but another advantage of print based debugging is that it pushes the system towards being more observable and thus helps with debugging production issues. If you can’t debug it from the logs during development, what hope do you have of debugging a prod issue?
Or, looking at it from the other end, if you have to put in enough logging to debug prod issues anyway, then the need for a debugger largely disappears as a result.
A solid majority of the software I’ve been responsible for at my day job has been some kind of web server process that is breaking for a specific user in prod. There is no way for me to use a debugger to diagnose the problem if I first hear about it in the context of error alarms reporting that 500 responses are getting served to live customers. Print debugging is the only tool I have.
There’s a key advantage to trace-based debugging: it can capture flows, rather than point in time. In debugging clang, for example, I generally see a crash when walking the AST that is caused by an error in the code that created that AST node. By the time that the debugger attaches to the crash, all of the state that was associated with the incorrect code is gone, other than the output. The debugger can let me introspect a load of ephemeral state associated with the victim code, but that isn’t useful.
Print-based debugging is the simplest case of trace-based debugging, where the traces are unstructured and ad-hoc. Structured logging is better, but requires a bit more infrastructure.
The biggest value of tools like valgrind is not that they catch memory-safety bugs, it’s that they automatically correlate these bugs with temporally distant elements. When I have a use-after-free bug in a program that I run with valgrind, it tells me where the use is but it also tells me when the object was allocated and when it was freed. These two extra pieces of information make it incredibly easy to fix the bug in comparison to having only the location of the use of the dangling pointer.
Why not connect the debugger to prod?
Depends on the type of project I guess, but in my corner of the industry, that would be rather complicated due to the deployment and security model. Logs are much easier.
Can you guarantee the application won’t halt for everyone while you’re taking your time single-stepping through it?
I will preface this by saying it is always dangerous to perturb something by using atypical mechanisms (debugging apis eg ptrace, but also in many cases runtime flags that turn on rarely used instrumentation).
That said, don’t single step. Even outside of production threads or having a gui tends to make that awkward. Instead take advantage of programmable breakpoints and/or memory dumps. In the simplest form you can do printf debugging but with printfs that didn’t occur to you before the process was started.
Most operations are subject to timeouts that are immediately triggered by a debugger breakpoint. Which means every debugged operation basically immediately fails. Debuggers don’t really work in distributed systems, unfortunately.
Not a single one of the examples works on my mobile browser (using latest Firefox) :/
They don’t seem to work on Safari on the desktop, either – some of them don’t work at all, some of them look and act a little weird (e.g. the horizontal wipe animation has some artefacts, and the way focus works on it is not very consistent). This might be WebKit related – I’m usually using (another) WebKit-based browser and they don’t seem to work here, either.
On the browsers where they work, they actually look pretty cool, and I think the author did a really good job of putting together cool demos that are also readable and pretty straightforward.
This is a very cool showcase for the author’s skill and technological inventiveness. Not so much for the underlying technology though. If this is what animating things on the web is shifting towards… on the one hand, I guess it beats doing it in JS, but on the other hand, boy am I glad I stopped doing any of this by the time Flash was on its way out. This does not look fun at all.
Firefox on Android worked for me for most of them… Timing seems to be off on the iris and clock, but they still “fail” gracefully – i.e. they transition when intended, just without the effect. The rest all seem to behave the same as on desktop.
Personally I’m a fan of given-when-then.
JetBrains needed to fire their UI team years ago, when they proudly wrote that one blog post about how they made every icon look the same and monochrome. They obviously don’t understand anything about their end users, humans, and how brains and eyes work. IntelliJ has been unusable without theming plugins since then, and seeing how one of my coworkers is doing perfectly fine with Eclipse, I always thought it would be my first choice to try out when the JetBrains design team forces their next abomination upon me. Looks like the time has come.
Agreed, iconography has regressed so much to a point where people have no idea what its purpose is. Remember FamFamFam icons [1], circa 2005? I think humanity peaked at this time. Few designers have conviction of their own anymore.
[1] http://www.famfamfam.com/
I see a ranty blog post in my future :-).
I really enjoy the icons of Haiku, and the colours… why do we have to live in a world without colour, we have a palette of at least 16M nowadays, and even more with HDR displays…
I didn’t know what those icons were called. Very nostalgic nowadays to me, although they still look great!
It puts a smile on my face whenever I spot Silk/Mini icons being used online. They’ve been around long enough that they sort of just blend into the background of the internet. Big thank you to Mark James for keeping the site around, it’s like a time capsule now.
In a world where many/most people used this, there would be a lot of redundant computation occurring.
I wonder if such a system could/should be built in a way that this could be exploited - effectively memoising it across all users. (In the same way that cloud storage exploits redundancy between users by using content addressable storage on the backend so they only need to store one copy of $POPULAR_FILE)
e.g. for compilation you’d probably want:
and probably some other bits and pieces (does executable access system calls like local “current time”). Could probably be made to work if the executables agreed to play nice and/or run inside a decent sandbox.
It would be challenging, but also very cool, to get this right.
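A rough sketch of the idea: key the cache on a content hash of the executable, the inputs, and the flags, so identical runs anywhere can share one cached result. All names and the cache layout here are illustrative:

```python
import hashlib
import json
import os

CACHE_DIR = "/tmp/memo-cache"  # illustrative; a real system would use shared storage

def cache_key(executable_bytes: bytes, input_bytes: bytes, flags: list) -> str:
    """Content-address the whole computation: code + inputs + flags."""
    h = hashlib.sha256()
    h.update(executable_bytes)
    h.update(input_bytes)
    h.update(json.dumps(flags).encode())
    return h.hexdigest()

def memoised_run(executable_bytes, input_bytes, flags, compute):
    """Run `compute` only if no one has run this exact combination before."""
    key = cache_key(executable_bytes, input_bytes, flags)
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()  # cache hit: skip the computation entirely
    result = compute(input_bytes, flags)
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        f.write(result)
    return result
```

The hard parts the thread mentions are exactly what this glosses over: deciding what counts as "the executable" and capturing impure inputs like time or network access in the key.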
Gradle has a distributed cache that can probably do that.
That’s interesting, thank you.
I was using the example of compilation, but I think the general question is interesting. “We are performing this computation, with this executable code, on this input data, in this runtime environment (which is a special case of input data)”
If we can determine that (all essential) characteristics of these are the same as a previous run, then we can lookup the result.
I think there are interesting questions as to what constitutes inputs here (e.g. non-pure things like ‘time’ and ‘network’) and, more so, what makes the executable code “the same” for this purpose (what level do you work at: source code, binary, etc.).
Gradle has a very flexible task system: you can completely define the relevant inputs and dependencies of tasks by yourself. Often that will just be files, plus dependencies on the outputs of other tasks. The tricky part is usually defining all of them correctly. But once you do that, magic happens: tasks can be cached, and if you set up a distributed cache, they may even be shared amongst multiple machines (devs, CI, etc.). A task doesn’t have to be compilation; it can be anything that takes inputs and produces outputs, maybe some code generation, or whatever. I’m sure there are other build systems that are backed by a similarly flexible high-level task system; Gradle is just the one that I happen to know.
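To make "well-defined inputs and outputs" concrete, a cacheable custom task in Gradle's Groovy DSL might look roughly like this; the task and property names are made up, so check the annotations against the Gradle docs for your version:

```groovy
// Because the inputs (@Input) and outputs (@OutputFile) are declared,
// Gradle can decide the task is up-to-date, or fetch its result from
// a (possibly distributed) build cache instead of re-running it.
@CacheableTask
abstract class StampVersion extends DefaultTask {
    @Input
    abstract Property<String> getVersion()

    @OutputFile
    abstract RegularFileProperty getOutputFile()

    @TaskAction
    void run() {
        outputFile.get().asFile.text = "version=${version.get()}\n"
    }
}

tasks.register('stampVersion', StampVersion) {
    version = project.version.toString()
    outputFile = layout.buildDirectory.file('version.properties')
}
```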
Llama (https://github.com/nelhage/llama) does a few of these (like content addressing). I have played around with it a bit and it’s been a joy.
I’m not convinced this is the right approach for languages that don’t force error checks, like Go. Most of the time I don’t want to use errors and exceptions to control the logic flow; that means they need to have all the relevant information attached to them at the place they happen, as they are going to be escalated.
Whenever I do expect a method call to fail frequently and expectedly, I prefer to build a dedicated return type that forces the caller to check for errors.
I’m kind of disappointed, since I was expecting something totally different based on the title.
Having done some minor work on Android applications throughout my high school and university days, I can name a much more significant problem with gradle than Groovy’s syntax: it’s slow! This is mentioned only in passing in this article, but Gradle is painfully slow, and even the smallest, non-android Java projects take a significant amount of time to build, every time. It’s awful if you’re trying to iterate or experiment! I’ve heard this complaint from others, too.
I really don’t buy into the provided criticisms of Groovy’s syntax. To me, the project definitions are quite readable, and I don’t see many reasons to think about how my project description is computed (and thus how and when the lambdas are invoked). It matters in the case of side effects like printing messages, but then, where did you expect a message to be printed? If you think of a task lambda as “where you describe a task” instead of “the task”, then it’s not all that surprising that I/O happens at configure time.
And then there are complaints about objects that are “just there”, like tasks and ext. If Gradle is a domain-specific language, then these are just its “standard library”. print is “just there” in Python, Math modules are auto-imported in many languages, Make has the “phony” special case. Why should tasks be treated differently? Not being familiar with a language’s standard library is not a good reason to complain about the language.

The variable scoping mechanism is also not that unusual. In, say, Ruby, you can also access local variables from a lambda / block, but not from a function definition. Indeed, the former creates a closure (which, hey, is exactly what the Groovy guys call it!), and that maintains access to the variables that were around when it was declared, while the latter creates a function, which does not have access to the surrounding variables. Having to use a class definitely is a limitation of Groovy, but then, having static fields in that class makes sense, since static variables are precisely how you make “globals” in Java. And if you want a variable accessible from all functions in your file, is that not a global?
I do agree with the “one way to do things” sentiment, though. Groovy seems to provide a lot of flexibility in expressing even the most minute things, which can be paralyzing for beginners and frustrating for people working in teams. Unfortunately, most languages flexible enough to be bent into a build system will probably be flexible enough to allow many different approaches to solving problems.
Slow builds suck! There are a few things you can/should do to fully unlock Gradle’s potential:
Gradle has two phases: configuration and execution. The configuration phase always has to run, so make sure you don’t have any expensive calls there. This happens often when people write imperative stuff into e.g. their task configurations. Make sure you don’t do that; instead, ideally use plugins via buildSrc to contain the imperative logic. Also, the latest versions ship with a cache for the configuration phase.
Make sure build caches, parallel builds in Gradle, and incremental options for your compilers are enabled.
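Most of those switches live in gradle.properties; a sketch of the relevant entries (verify the exact property names against the Gradle docs for your version):

```properties
# Reuse task outputs via the build cache
org.gradle.caching=true
# Run independent projects in parallel
org.gradle.parallel=true
# Cache the result of the configuration phase
org.gradle.configuration-cache=true
# Give the Gradle daemon a larger heap
org.gradle.jvmargs=-Xmx2g
```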
If you are writing your own tasks, make sure their inputs and outputs are well defined. Gradle can then cache them. You can run your builds with the –info flag to see the reasons for why Gradle is considering a task out-of-date. Maybe some of your tasks are non-deterministic? That can easily happen but once you know the reason it’s also often easily fixed.
Getting the caching working and minimizing work done in the configuration phase are the main ingredients to get faster builds for iterating.
For larger builds, you might want to give the Gradle daemons larger heaps (I think 256m is the default setting?) so they don’t get thrashed by GC.
Best of luck :)