I use Signal, and you’d have to conclude thereby that I trust it to some extent. But I do get the feeling, over the years, that Moxie has made some really bad trade-offs in order to get Signal more widely used. I don’t think any of these trade-offs are as indefensible as Drew does, but they’re not good.
Requiring a phone number makes it easy for people to adopt Signal, because they can just use it as a drop-in replacement for their SMS app (which is important in the US, and also explains the crap features like gif search and half-assed stickers). But it also breaks the threat model where you don’t want to share your phone number – not a concern when you are worried about nation-state security forces, but a real issue for sex workers, social workers and therapists, people wanting to avoid harassment from exes, etc. I have definitely had friends who didn’t want to use Signal because they were unwilling to have something potentially leak their phone number. It also requires you to trust the app more, since it has to be able to access your phonebook.
I think the Play Store Only and No Federation trade-offs are similar. They probably do, actually, do more good than harm, only because there are more people in the categories who are benefited by them than in the categories harmed by them. But I think Moxie does overstate his case for them, unfairly dismisses the arguments against them, and underestimates the bad press that they generate.
(My personal messenger preference is Conversations, which uses XMPP+OMEMO, an adaptation of the Signal protocol to XMPP. But I recognize the difficulties of getting people to use it.)
If you think of Signal as a more secure replacement for text messaging, then the use of a phone number seems very sane. If you look at it as a replacement for XMPP/WhatsApp/etc., then not so much.
My guess is that Signal was aiming for the former as a primary use-case.
I dislike CloudFlare because they’re making the internet more centralized (the more small websites use them as a proxy, the less direct connections to small websites are made) and because of some infamous abuse handling incidents, but I would trust them 100000% more than my local ISP.
The local ISP knows where I live, the local ISP has to comply with local laws, the local ISP has monitoring installed by the local equivalent of the NSA. The local ISP didn’t even promise any privacy at all, which is worse than CloudFlare’s privacy policy for this resolver.
“the local ISP has monitoring installed by the local equivalent of the NSA”
You should assume Cloudflare does, too. They are a venture-funded, for-profit company operating in a surveillance state, in an ideal position to do surveillance. The NSA/FBI also pay for or coerce compliance, per the Core Secrets leaks. The real question for determining whether they'll refuse to cooperate with the NSA is: “Will they turn down $30-$100+ million, go bankrupt, and/or go to prison for me?” If not, they’ll likely cooperate. The cooperation also always mandates that they lie about cooperating: they can promise government-proof anything while relaying data to the government.
Key word being local. If you live in a country that’s not very friendly to the US, it’s better to have NSA surveillance than local surveillance :)
Excellent point! I argued something similar in an essay on using multiple, non-cooperating jurisdictions for security. :)
Couldn’t the opposite be just as true? If you live in a country that’s not friendly enough to the US, it may also be better to have local surveillance than NSA surveillance. If I know my government is out for my data, can’t easily access the stuff the US has, and isn’t sophisticated enough to upstream crypto algorithms into the Linux kernel or tap into underwater fibre cables, I’d pick local any day.
edit: plural
That’s true. However, your local ISP will still know where you connect. It will still see how much and if it’s unencrypted what you send/receive.
CloudFlare, being a big target, has to comply with another country’s laws: as a US company it has to comply with NSLs, which might or might not exist in your local country. CloudFlare, being a big company, might also comply with other countries’ laws – maybe not small countries’ ones, but look at the list of companies that comply with China, etc.
Also, this is actually not about your ISP vs CloudFlare. It’s about whatever you have configured vs CloudFlare. If Firefox starts making HTTPS requests to CF when you, as a system administrator, expect DNS requests, you might even miss them.
I think the problem is not that Firefox allows this, but that it’s skipping your system-wide configuration, without asking. After all I can already use CloudFlare’s DNS servers if I want to do so.
And then: CloudFlare makes its money by selling CDN features (including analytics, etc.) to companies, while my ISP makes money by selling internet access to me. If your ISP doesn’t promise any privacy (or has no privacy policy, as you make it sound), maybe consider switching your ISP.
The main point however is: I don’t think “overwriting” things like resolving hostnames is something an application should do, unless it’s asking or by design made to do so. In this case it’s not.
It will per default skip what you, your system administrator, etc. might have done to secure you.
It’s totally fine you trust CloudFlare more than your ISP/your local setup, but I don’t think it’s fine if a piece of software dictates and overwrites whom you trust silently, when you might already have consciously chosen someone else you trust.
If your ISP doesn’t promise any privacy (or has no privacy policy, as you make it sound), maybe consider switching your ISP.
In most of the US, that isn’t feasible. Most places have at most two residential broadband providers: the phone company (typically AT&T), and the cable company (either Comcast or Spectrum, depending on location). And not counting MVNOs, there are, what, four mobile broadband providers?
I do basically agree with you that this may skip what your local sysadmin has done to secure you. But it’s making the trade-off that most people do not have a local sysadmin doing anything to secure you, and will never opt-in to anything to secure themselves.
I agree, but do not see how the genie can be put back into the bottle. Even Apple, who since forming the WHATWG have (failed at building their iAd advertising business and subsequently) decided that privacy is an important marketing differentiator, are limiting the worst excesses of browser tracking but not fundamentally disarming it.
Well there are several ways actually.
First we need that more people really understand these issues. That’s why I wrote this.
These are both huge browser security issues and geopolitical ones.
Then we need browser vendors to fix them.
The DNS issue is something governments should work out, technically and politically.
The JavaScript issue is easier to fix, as it’s entirely a software issue.
As a first step, browsers could mark as UNSAFE all web pages that use JavaScript, as they do with unencrypted HTTP sites these days.
Then it’s just a matter of going back to semantic hypertexts, with better markup, better CSS and better typography in the browser. I’d like to see XHTML reconsidered, but with the lessons learned.
For example, I could see an <ADVERTISEMENT> tag working well.
But the main point is to avoid any Turing complete language in the browser.
The web has gotten to the point where it’s primarily used as a distribution mechanism for scripts. And, as hypertext, static html is not acceptable (having none of the normal guarantees about content stability that hypertext ought to make). So, if we’re going to drop the javascript sandbox, we might as well bite the bullet and also drop DNS and HTTP at the same time.
In other words, we replace one thing (“the browser”) with two things – a sandbox that downloads and runs scripts, and a proper hypertext browser & editor. Both of these should use a non-host-oriented addressing scheme for identifying chunks of content, and support serving out of their own caches to peers. (The way I’d do it is probably to run an ipfs daemon & then use it for both fetching & publishing, and then manage pinning and unpinning content based on some inter-peer communication protocol.)
I’ve made this suggestion before. (I’m pretty sure you’ve been privy to some of the discussions I’ve had about it on the fediverse.)
The point I’d like to underline here is: if the only thing salvageable about the web is the use of internal markup and host-oriented addressing schemes for static semantic hypertext, then nothing about the web is salvageable.
HTTP+HTML is an unacceptable hypertext system by 1989 standards, and thirty years on we should set our sights higher. Luckily, problems that were hard but not impossible in 1989 (like ensuring that data gets replicated between independent heterogeneous nodes) have been made trivial in practice because of the proliferation of both high-speed always-on connections & solid, well-engineered open source packages.
Honestly, among the few hypertext systems I used in the past (GNU Info being the only one whose name I remember), the web was the best one from several points of view.
I think XHTML and the related stack was pretty good, and I used XML Namespaces extensively to enrich web pages with semantic contents while preserving accessibility. But, don’t worry, I don’t dare to argue about hypertexts… with you! :-D
I welcome any proposal. And any experiment. And any hack.
I see these as huge security vulnerabilities in the very design of the Internet and the Web.
Now we need to fix them. I place my hope in Mozilla, as they claim they care about security and privacy, and this actually puts lives at risk. So, we need to go back to the drawing board and design a better Web on top of the lessons we learned.
In other words, we replace one thing (“the browser”) with two things – a sandbox that downloads and runs scripts, and a proper hypertext browser & editor.
Agreed for the hypertext browser and editor.
But the fact that you feel the need for a “sandbox that downloads and runs scripts” is just another symptom of the disease that made me create Jehanne. It’s basically a shortcoming of mainstream operating systems!
Unfortunately, hacking HTML and HTTP to patch this proved to be the wrong approach.
IMHO, we need a properly designed distributed operating system (and a better network protocol to serve such distributed computation).
the fact you feel the need for a “sandbox that downloads and runs scripts” is […] basically a shortcoming of mainstream operating systems!
Absolutely agreed. The web has basically become a package manager for unsafe code. Replacing that function with a dedicated sandbox is only an incremental improvement.
However – if we stop using URLs whose content can change at any time and start using addresses that have their own validation built in, then the problem of scripts being silently swapped out at runtime to target particular people and machines goes away. This is an incremental improvement, but a very important one.
we need a properly designed distributed operating system (and a better network protocol to serve such distributed computation)
Likewise, completely agreed. A proper distributed OS (as opposed to an ad-hoc layer for swapping sandboxed unsafe code) could be designed data-first instead of host-first (even though almost all distribution protocols, even erlang’s, are still host-oriented). The problems that web tech tries and fails to paper over are solved now by SSB, IPFS, bittorrent, CORD, and other systems – all open source and with documented designs.
SSB, IPFS (and DAT?) all have a pretty severe drawback that’s tied to their biggest strength – immutability/non-deletability. This gets you a lot of things that the original vision of hypertext wants (byte-range transclusion, external markup, links that never break), and I understand that’s why you feel strongly about it. But never being able to delete or edit anything in place (only publish updated versions as a new document) is not a humane basis for the web.
If these solutions really took off today, I’m pretty sure that in 10 years, we’d be begging for the horrors of 2018’s web. It would facilitate harassment and hate speech to an extent that would make twitter.com look like kitty.town, and moderation tools would be near impossible to implement.
I use and like SSB, but as it’s growing, it’s starting to show some of these problems. Due to founder effect, the community on there is pretty kind, but people are becoming aware of the weaknesses for moderation. No ability to delete or edit posts is a big one; the only fix is to define delete/patch messages that well-behaved clients will respect, but the original will always be available. And the way the protocol works, blocks (the foundation of a humane social media experience) are one-way only: if you block someone you can see them, but they can still see you. (Contrast to Mastodon and mainline Pleroma).
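The “well-behaved client” approach can be sketched roughly like this: the append-only log keeps everything, and deletion/editing only happen at render time. (The message shapes below are made up for illustration; they are not SSB’s actual format.)

```python
def render_feed(messages):
    """Apply delete/edit messages over an append-only log.

    Originals stay in the log forever; a well-behaved client
    hides or replaces them when rendering. A toy sketch only.
    """
    posts = {}
    for msg in messages:
        if msg["type"] == "post":
            posts[msg["id"]] = msg["text"]
        elif msg["type"] == "edit" and msg["target"] in posts:
            posts[msg["target"]] = msg["text"]
        elif msg["type"] == "delete":
            # Hidden from rendering, but still present in the log.
            posts.pop(msg["target"], None)
    return posts

log = [
    {"type": "post", "id": "a", "text": "hello"},
    {"type": "edit", "target": "a", "text": "hello, world"},
    {"type": "post", "id": "b", "text": "oops"},
    {"type": "delete", "target": "b"},
]
print(render_feed(log))  # {'a': 'hello, world'}
```

The catch, as noted above, is that nothing forces a client to be well-behaved: any peer replaying the raw log still sees the original posts.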
On the other hand, for the narrower case of delivering sandboxed code, IPFS or BitTorrent are basically exactly what you want, so my complaints here may not be strictly relevant.
Yeah, undeletability is a can of worms. It’s a huge legal problem, and a potentially large social problem.
It’s also fundamental to the basis of functioning hypertext.
SSB is mostly being used as a social network – a context where undeletability is a much bigger deal, since the expectation is that posts are being made off-the-cuff. Hypertext (excluding the web, of course) is usually thought of in terms of a publishing context – for distributing essays, criticism, deconstructions, syntheses, historical notes, anthologies, etc. (XanaduSpace even had a public/private distinction, where the full hypertext and versioning facilities were available for documents that were not made available to other users, with an eventual ‘publication’ step broadcasting a copy with previous revision history elided.) The expectations are much different, both around interest in archival by third parties, and around the expectation of privacy or deniability: hypertext is very much in the vein of, say, academic journal publishing[1].
In other words: what I’m looking for with regard to new hypertext systems will look less like a deeply-intertwingled Mastodon and more like a deeply-intertwingled Medium. (This probably even comes down to transcopyright. Nobody implements transcopyright, but the closest thing on the web is probably Medium’s open paywall, not token-word’s pseudo-transcopyright system.)
[1] The big projected commercial application for XanaduSpace was law offices: we would market it to paralegals, have a dedicated private permapub server for the law office pre-loaded with case law, and replace the current mechanisms they use for searching and discussing case law (which rely heavily on Microsoft Word’s “track changes” feature).
I don’t think WebAssembly makes things significantly worse than JavaScript alone. But I think the paradigm of running untrusted code in the browser without user confirmation is fundamentally bad. Opening a web page today (without noscript or umatrix, etc) is basically equivalent to installing an unsigned app on a mobile device. In the latter case, you at least get some indication of what permissions are needed, and a chance to refuse (even if none of that is an adequate protection in practice).
I think the last 10 years have really told us what JS features a dynamic web page actually needs, and those capabilities could be provided declaratively through additions to HTML and the browser, without requiring client-side scripting.
you at least get some indication of what permissions are needed, and a chance to refuse
You get the same on the Web.
No, you get stronger protection on the Web actually — a permission prompt is required even for push notifications, and arbitrary background activity is not allowed.
It’s not that nobody knows how this stuff works when they use a web framework; it’s just that nobody wants to invest the time to make the same mistakes as others when it comes to edge cases and the like.
I don’t know who this article is aimed at, but for someone who knows how this stuff works it’d be a PITA to rewrite everything from scratch for no good reason. I do have to admit, though, that knowing what’s going on underneath is useful.
And some of the things he calls out as “oh-no! a framework!” are more library than framework. What stood out to me was Flask, which is a pretty thin abstraction over the http request-response cycle.
Brutalism as an architectural style is disgusting and oppressive as shit (intentionally). I spent quite a bit of time in a brutalist building, and I felt like shit. How did intentional hostility ever become a trend?
While the term certainly originates from concrete, the author is not trying to advocate making websites out of concrete (figuratively). I think the main point can be seen in the paragraph mentioning Truth to Materials. That is, don’t try to hide what the structure is made out of - and in the case of a website it is a hypertext document.
This website could be seen in that light. It is very minimally styled and operates exactly how the elements of the interface should (be expected to). The points of interaction are very clear.
The styling doesn’t even have to be minimal, but there is certainly a minimalism implied.
I respect your opinion, but I personally really enjoy brutalist architecture. I like the minimalism and utilitarian simplicity of the concrete exteriors, and I like how the style emphasizes the structure of the buildings.
I think if you added a splash of color it would make the environment much more enjoyable while still embracing the pragmatism and the seriousness.
It isn’t intentionally oppressive or hostile. It represents pragmatism, modernity, and moral seriousness. However, it doesn’t take a large logical jump to realize that pragmatism, modernity, and moral seriousness could feel oppressive. In the same way, to the architects who designed brutalism, the indulgent designs of the 1930s-1940s might have felt like a spit in the face if you were struggling to make ends meet. Neither was trying to hurt anyone, yet here we are.
I consider the 1930s designs (as can be seen in shows such as Poirot) to be rather elegant styling. But I also see the pragmatism that was prompted by the war shortages.
I am not a great fan of giant concrete structures that have no accommodation for natural lighting, but I also dislike the “glass monstrosities” that have been built after brutalist designs.
I find myself respecting the exterior of some of the brick buildings of the 19th Century and possibly early 20th. Western University in London Canada has many buildings with that style.
Some of the updates done to the Renaissance Center in Detroit have mitigated some of the problems with Brutalism – ironically, with a lot of glass.
This might be true of Brutalism specifically, but (at least some) modern (“Modern”, “Post-modern”, etc.) architecture is deliberately hostile.
I found this article on that very topic pretty interesting.
In my home town, the public library and civic center (pool, gymnasium) are brutalist. It was really quite lovely. Especially the library was extremely cozy on the inside, with big open spaces with tables and little nooks with comfortable chairs.
My pet theory is that brutalism is a style that looks good in black-and-white photographs at the expense of looking good in real life. So it was successful in a time period when architects were judged mainly on black-and-white photographs of their buildings.
I fully agree with this mostly due to accessibility. I find that more and more websites are harder to read and navigate.
However, for some problems I don’t have solutions either, without bringing in some JavaScript or changing browser internals:
Also I suspect people stress over custom design so much because the default stylesheet for the browser actually looks like crap.
I agree with this. A default stylesheet with better typography would do a lot to reduce the appeal of css frameworks.
A lot of the reasonable and valid use-cases for javascript probably ought to be moved into html attributes and the browser – things like “on click make a POST request and replace this element with the response body if it succeeds”. Kind of a “pave the cowpaths” approach that would allow dynamic front-ends without running arbitrary untrusted code on the client.
“on click make a POST request and replace this element with the response body if it succeeds”.
You can actually do that with a target attribute on the form going toward an iframe. It isn’t exactly the same but you can make it work.
The term “brutalist” for web design has been recently coined and seems to have taken on two meanings. One definition is design that highlights the essential nature of the web (links, buttons, navigation), as in the website linked in this post. The other definition of “brutalist” web design seems to be any design that goes against “minimalism” and “simplicity” advocated by corporate websites - think Bootstrap-themed websites. In a way, they are going against “usability” as dogma. An archive of websites conforming to the second definition is here. http://brutalistwebsites.com/
There’s a good article called Brutalism and Antidesign that tackles these two definitions, and argues that brutalism only really applies to the first one.
I use hugo for my personal blog. One thing the article touches on but doesn’t go into (for maybe obvious reasons) is that bundling and minification are not very useful anymore, unless your server, and the browsers of your readers, are very outdated.
Arguably minification was never useful — minified CSS or JS will compress to very close to the same size as the unminified source, whether with gzip, which has been supported since the Elder Days, or brotli, which gives marginally better results. I’m not sure whether minification caught on because of cargo-culting, or whether the main goal was always obfuscation, not compression.
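You can get a feel for this with a toy comparison. (The JS snippet below and its hand-minified form are made up for the demonstration; on a snippet this tiny the absolute numbers mean little, but it shows how much of the redundancy gzip recovers on its own.)

```python
import gzip

# A hypothetical unminified JS snippet (illustrative only).
unminified = b"""
function addNumbers(firstNumber, secondNumber) {
    // Long names and comments are what minification strips,
    // but they are highly repetitive, so gzip handles them well.
    return firstNumber + secondNumber;
}
console.log(addNumbers(2, 3));
"""

# The same logic, hand-minified.
minified = b"function a(b,c){return b+c}console.log(a(2,3));"

gz_unminified = len(gzip.compress(unminified))
gz_minified = len(gzip.compress(minified))
print(len(unminified), gz_unminified, len(minified), gz_minified)
```

On real-world files (kilobytes of repeated identifiers and boilerplate) the gzipped sizes converge much more than this toy example suggests.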
With HTTP/2, bundling is not very useful anymore. An article linked from the above article goes into a bunch of edge cases where some kind of bundling is still useful (basically avoid splitting assets that belong together just because you can). But more generally, bundling makes your site and its build process more complex, and breaks caching, in order to optimize for HTTP 1.1.
Government jobs tend to be 40 hours or less. State government in my state has a 37.5 hour standard. There is very occasional off-hours work, but overtime is never required except during emergencies – and not “business emergencies”, but, like, natural disasters.
I’m surprised that tech workers turn up their nose at government jobs. Sure, they pay less, but the benefits are amazing! And they really don’t pay too much less in the scheme of things.
How many private sector tech jobs have pensions? I bet not many.
I work in a city where 90% of the folks showing up to the local developer meetup are employed by the city or the state.
It’s taken a lot of getting used to being the only person in the room who doesn’t run Windows.
I feel like this is pretty much the same for me (aside from the meetup bit).
Have you ever worked with windows or have you been able to stay away from it professionally?
I used it on and off for a class for about a year in 2003 at university but have been able to avoid it other than that.
Yeah. I hadn’t used Windows since Win 3.1, until I started working for the state (in the Win XP era). I still don’t use it at home, but all my dayjob work is on Windows, and C#.
they pay less
Not sure about this one. When you speak about pay, you also have to count all the advantages going with it. In addition, they usually push you out at 5pm so your hourly rate is very close to the contractual one.
Most people who are complaining that they pay less are the tech workers who hustle hard in Silicon Valley or at one of the big N companies. While government jobs can pay really well and have excellent value especially when considered pay/hours and benefits like pensions, a Google employee’s ceiling is going to be way higher.
There’s a subreddit where software engineers share their salaries and it seems like big N companies can pay anything from $300k–700k USD when you consider their total package. No government job is going to match that.
I do.
Pros: hours, and benefits. Less trend-driven development and red queen effect. Less age discrimination (probably more diversity in general, at least compared to Silicon Valley).
Cons: low pay, hard to hire and retain qualified people. Bureaucracy can be galling, but I imagine that’s true in large private sector organizations, too.
We’re not that behind the times here; we’ve avoided some dead-ends by being just far enough behind the curve to see stuff fail before we can adopt it.
Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.
I will say, I once did a contract with the Virginia DOT during Peak Teaparty. Never before in my life have I seen a more downtrodden group. Every single person I talked to was there because they really believed in their work, and every single one of them was burdened by the reality that their organization didn’t and was cutting funding, cutting staff, and cutting… everything.
They were some of the best individuals I ever worked with, but within the worst organization I’ve ever interacted with.
Contrast that to New York State- I did a shitton of work for a few departments there. These were just folks who showed up to get things done. They were paid well, respected, and accomplished what they could within the confines of their organization. They also were up for letting work knock off at 2PM.
Also, depending on how well your agency’s goals align with your values, Don’t Be Evil can actually be realistic.
Agreed. There’s no such thing as an ethical corporation.
Do you mind sharing the minimum qualifications of a candidate at your institution? How necessary is a degree?
I’m asking for a friend 😏
No, not even them.
When you think about what “profit” is (ie taking more than you give), I think it’s really hard to defend any for-profit organization. Somebody has to lose in the exchange. If it’s not the customers, it’s the employees.
That’s a pretty cynical view of how trade works & not one I generally share. Except under situations of effective duress where one side has lopsided bargaining leverage over the other (e.g. monopolies, workers exploited because they have no better options), customers, employees and shareholders can all benefit. Sometimes this has negative externalities but not always.
Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.
Reciprocity is a natural law: markets function because both parties benefit from the exchange. As a nod to adsouza’s point: fully-informed, warrantied, productive, voluntary exchange makes markets.
Profit exists because you can organize against risk. Due to comparative advantage, you don’t even have to be better at it than your competitors. Voluntary exchange benefits both weaker and stronger parties.
Profit is revenue minus expenses. Your definition, taking more than you give, makes your conclusion a tautology. i.e., meaningless repetition.
I mean, yes, I was repeating myself. I wasn’t concluding anything: I was merely rephrasing “profit.” I’m not sure what you’re trying to get at here aside from fishing for a logical fallacy.
a tautology. i.e., meaningless repetition.
Intentionally meta?
Reciprocity is a natural law
Yup. No arguments here. However, reciprocity is not profit. In fact, that’s the very distinction I’m trying to make. Reciprocity is based on fairness and balance, that what you get should be equal to what you give. Profit is expecting to get back more than what you put in.
Profit exists because you can organize against risk.
Sure, but not all parties can profit simultaneously. There are winners and losers in the world of capitalism.
So, if I watch you from afar and realize that you’ll be in trouble within seconds, come to your aid, and save your life (without much effort on my side) in exchange for $10, who’s the one losing in this interaction? Personally, I don’t think there’s anything morally wrong with playing positive-sum games and sharing the profits with the other parties.
For an entry-level developer position, we want either a bachelor’s degree in an appropriate program with no experience required, an associate’s degree and two years of experience, or no degree and four years of experience. The help-desk and technician positions probably require less for entry level, but I’m not personally acquainted with their hiring process.
I would fall into the last category. Kind of rough being in the industry for 5 years and having to take an entry level job because I don’t have a piece of paper, but that’s how it goes.
For us, adding an AS (community college) to that 5 years of experience would probably get you into a level 2 position if your existing work is good. Don’t know how well that generalizes.
Okay cool! I have about an AS in credits from a community college I’d just need to graduate officially. Though, at that point, I might as well get a BS.
Thanks for helping me in my research :)
I don’t, but I’m very envious of my family members who do.
One time my cousin (works for the state’s Department of Forestry) replied to an email on Sunday and they told him to take 4 hours off Monday to balance it off.
That said, from a technological perspective I’d imagine it would be quite behind the times and move very slowly. If you’re a diehard agile-manifesto person (I’m not), I probably wouldn’t recommend it.
EDIT: I guess it’s really what you value more. In the public sector, you get free time at the expense of money. In the private sector, vice versa. I can see someone who chases the latest technologies and loves to code all day long being miserable there, but for people who just code so they can live a fulfilling life outside of work it could be a good fit.
I wasn’t able to find an answer to which samples were deep learning generated, and which were Markov chain generated. It did seem that two of the samples were “better” in the sense of being more grammatical than the other two, and it would be nice to know if my impressions correspond to the different methods or not.
In general on other sites, though, I have noticed that deep learning results are not more realistic than what you’d come up with from M-x dissociated-press.
I played around with it and molten is awesome. I’m still not sure whether type hints are Pythonic or not, but this tilted me further towards finding them perfectly suitable, especially for the web-dev use case.
This specific usage of the type annotations is nice in terms of visual clutter, but I think that it will require extra cognitive work to understand and write the code.
I disagree, type hints are awesome especially for things like this.
Usually, when your route takes e.g. a user_id you either have to use the right regex (in Django’s urls.py) or remember to validate it yourself in every route. Screw up once -> boom.
Molten, similar to Rocket for Rust, makes sure that your handler is only called when the passed user_id is a valid integer (assuming user_id: int).
This becomes even more valuable with more complex types.
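The core of that validation can be sketched in a few lines with inspect. (This is an illustration of the idea, not molten’s actual implementation; validate_params and get_user are made-up names.)

```python
import inspect

def validate_params(handler, raw_params):
    # Coerce raw string parameters (e.g. from a URL path) to the
    # handler's annotated types, rejecting anything that doesn't fit.
    sig = inspect.signature(handler)
    coerced = {}
    for name, value in raw_params.items():
        annotation = sig.parameters[name].annotation
        if annotation is inspect.Parameter.empty:
            coerced[name] = value  # no annotation: pass through as-is
        else:
            try:
                coerced[name] = annotation(value)
            except (TypeError, ValueError):
                raise ValueError(f"{name!r} must be a valid {annotation.__name__}")
    return coerced

def get_user(user_id: int):
    return {"user_id": user_id}

print(validate_params(get_user, {"user_id": "42"}))  # {'user_id': 42}
```

A request with user_id="abc" raises before the handler ever runs, which is exactly the “screw up once -> boom” class of bug this removes.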
I agree that type hints can be useful to make sure that these kinds of bugs are taken care of easily. But the way it is used in the Todo example is that the TodoManager class is passed as an annotation in the methods “list_todos” and “get_todos” . Why not just use a keyword argument for this?
Why not just use a keyword argument for this?
That “just” makes the problem seem less complicated than it really is! :D
Let’s say you’re using a framework like falcon that doesn’t provide DI support out of the box. Assuming you wanted to be able to swap implementations out (say, a stub database in your tests), then your API might look something like this:
from falcon import API

# Define Database and TodoManager here...

class TodosResource:
    def __init__(self, todo_manager):
        self.todo_manager = todo_manager

    def on_get(self, req, resp):
        resp.media = self.todo_manager.get_all()

def setup_app(db_impl=Database, todo_manager_impl=TodoManager):
    db = db_impl()
    todo_manager = todo_manager_impl(db)
    app = API()
    app.add_route("/todos", TodosResource(todo_manager))
    return app

# In your entrypoint:
app = setup_app()

# In your tests:
app = setup_app(db_impl=StubDatabase)
This isn’t bad, and it’s explicit, but manually configuring the dependency tree can get quite hairy over time. When a particular component’s dependencies change, rather than only changing that component’s code to depend on different deps, you’ll also have to change all of the places where the dependency tree is constructed (not a huge deal if you have a setup_app function like I’ve got above, but not a particularly fun thing to do either way). Another downside here is that it’s not immediately clear from its signature which components the TodosResource.on_get method relies on.
If, instead, you use type annotations to do the wiring for you then you get the best of all worlds in exchange for a little bit of magic (that, in my experience, people get used to fairly quickly).
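To make the “wiring from annotations” concrete, here’s a minimal sketch of such an injector (an illustration of the general technique, not molten’s actual DI machinery; all class names are hypothetical). It recursively builds each dependency by inspecting constructor annotations, so handlers only declare what they need:

```python
import inspect

class Database:
    def get_all(self):
        return ["write docs"]

class TodoManager:
    def __init__(self, db: Database):
        self.db = db

    def get_all(self):
        return self.db.get_all()

class Injector:
    """Builds components on demand by inspecting __init__ annotations."""
    def __init__(self, overrides=None):
        self.overrides = overrides or {}   # e.g. {Database: StubDatabase}
        self.instances = {}

    def resolve(self, cls):
        cls = self.overrides.get(cls, cls)
        if cls not in self.instances:
            params = inspect.signature(cls.__init__).parameters
            deps = {
                name: self.resolve(p.annotation)
                for name, p in params.items()
                if p.annotation is not inspect.Parameter.empty
            }
            self.instances[cls] = cls(**deps)
        return self.instances[cls]

    def call(self, fn):
        params = inspect.signature(fn).parameters
        deps = {name: self.resolve(p.annotation) for name, p in params.items()}
        return fn(**deps)

# A handler declares what it needs in its signature:
def list_todos(manager: TodoManager):
    return manager.get_all()

print(Injector().call(list_todos))  # ['write docs']
```

When TodoManager later grows a new dependency, only its own `__init__` changes; nothing that constructs the tree needs touching, and tests just pass `overrides`.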
I hope that makes sense!
Yeah, I was reminded a lot of Rocket when I was reading the examples. Which is a good thing. There are a lot of things in Rocket I wish I was able to do more transparently in Django.
This is a good guide, practically speaking.
Theoretically speaking, though, you can’t touch ActivityPub JSON without a fully-featured JSON-LD engine, because the specification allows a lot of things to be either objects or collections or links, so you can’t trust what you get back without flattening. (IMO this is a reason it was a bad idea for ActivityPub and ActivityStreams2 to be based on JSON-LD. It should have been significantly more restrictive.)
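A hedged sketch of the defensiveness this implies: since an ActivityStreams 2.0 property may be a single embedded object, a bare IRI string, or an array of either, consumers without a JSON-LD engine end up normalizing before every access, something like:

```python
def as_list(value):
    """Normalize a JSON-LD-ish property value to a list of dicts."""
    if value is None:
        return []
    if not isinstance(value, list):
        value = [value]          # single value -> one-element collection
    out = []
    for item in value:
        if isinstance(item, str):    # a bare link/IRI
            out.append({"id": item})
        elif isinstance(item, dict): # an embedded object
            out.append(item)
    return out

activity = {
    "to": "https://example.com/alice",
    "cc": [{"id": "https://example.com/bob"}],
}
print(as_list(activity.get("to")))  # [{'id': 'https://example.com/alice'}]
print(as_list(activity.get("cc")))  # [{'id': 'https://example.com/bob'}]
```

And this still isn’t full JSON-LD flattening (no context handling, no IRI expansion), which is the point: doing it properly needs a real engine.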
And here I thought that it would be a boilerplate stating how your project is moving to GitLab/gogs/gitea and that the GitHub repository is just a mirror.
Unfortunately no. I started the project before the GitHub + Microsoft news. I also made a copy at https://gitlab.com/vladocar/boilerplate-readme-template. And yes, I’m angry that all my open source projects are now under Microsoft. Should I go to GitLab? Who will guarantee that GitLab won’t one day be sold to Apple or some other corporation? Should I make some private arrangement? I could, but the point of any open source project is interaction with the community of developers. So I’m not sure what to do.
I’m hoping we see federation (maybe with ActivityPub, maybe with something else) implemented in GitLab/gogs/gitea, so that it doesn’t matter so much for discoverability where your code is hosted. Currently a combination of repository/issues/wiki and emailing pull requests is fine for development, but not so good for discoverability.
Until Microsoft shuts down the interactions, keep business as usual. Git makes it supremely easy to pull out if you have to.
There are tools to transfer issues etc as well, I’m told.
GitHub alone was a de-facto monopoly, just like Microsoft, so I don’t get the fuss. Sure, MS has a bad history, but it’s run differently now.
It’s worthwhile to be proactive. I intend to gradually move to self-hosted as I have time, and make my old github repositories mirrors of the new self-hosted ones.
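The mirroring step is only a couple of git commands. In this sketch a local bare repository stands in for the self-hosted remote so the commands run as-is; in practice you’d use your own URL (e.g. git@git.example.com:me/demo.git):

```shell
# Stand-in for your self-hosted remote (would normally be a URL):
remote=$(mktemp -d)/selfhosted.git
git init -q --bare "$remote"

# A throwaway repo standing in for your existing GitHub clone:
git init -q demo && cd demo
git -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "initial commit"

# Add the second remote and push all refs (branches and tags) to it:
git remote add selfhosted "$remote"
git push -q --mirror selfhosted

# From then on, repeat the mirror push whenever the repo changes:
# git push --mirror selfhosted
```

`git push --mirror` makes the remote an exact copy of your local refs, which is what makes switching (or keeping GitHub as the mirror of a self-hosted original) cheap.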
The situation with Microsoft owning GitHub is not significantly worse than GitHub was before the acquisition, but it was already not great. The acquisition was just the change that tipped things for a lot of people.
I haven’t had ads on my blog in over a decade. I’ve been meaning to remove the Facebook Page/Twitter widgets too when I get around to my redesign, since I’m pretty much giving both companies free information with them.
There are a lot of implementations that load the widget only once the user wants to use it. They’re pretty common in Germany and work by having the button “primed” with one click, which loads and activates the JS and the widget.
I think that’s how most privacy extensions make them work: disabled until you click them.
I have such buttons that work without Javascript. Just normal links.
You might consider using these or similar social sharing buttons without javascript or tracking.
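Such JavaScript-free buttons are just plain links to the networks’ share endpoints, so they can be generated server-side. A sketch (the endpoints below are the publicly documented share URLs, but double-check them before relying on this):

```python
from urllib.parse import urlencode

def share_links(url, text=""):
    """Build plain share links; no tracking script runs until clicked."""
    return {
        "twitter": "https://twitter.com/intent/tweet?"
                   + urlencode({"url": url, "text": text}),
        "facebook": "https://www.facebook.com/sharer/sharer.php?"
                    + urlencode({"u": url}),
    }

links = share_links("https://example.com/post", "A good read")
print(links["twitter"])
print(links["facebook"])
```

Rendered as ordinary `<a href="...">` anchors, these send the networks nothing until a visitor actually clicks, unlike the embedded widgets.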