I wonder if the author tried a two-step parsing approach: a first one that parses into a struct that only contains the @context information, and another one that uses a struct chosen depending on the value of @context.
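Roughly what I have in mind - a minimal sketch of the idea in Python (the @context values, field names, and record types below are made up for illustration; the article's actual structs would differ):

import json
from dataclasses import dataclass

@dataclass
class Envelope:
    # Step 1: only extract @context, ignore everything else.
    context: str

@dataclass
class PersonV1:
    name: str

@dataclass
class PersonV2:
    given_name: str
    family_name: str

# Step 2: pick a concrete parser based on the @context value.
PARSERS = {
    "https://example.org/v1": lambda d: PersonV1(name=d["name"]),
    "https://example.org/v2": lambda d: PersonV2(given_name=d["givenName"],
                                                 family_name=d["familyName"]),
}

def parse(raw: str):
    doc = json.loads(raw)
    envelope = Envelope(context=doc["@context"])  # first pass
    return PARSERS[envelope.context](doc)         # second, context-specific pass

print(parse('{"@context": "https://example.org/v1", "name": "Ada"}'))
# PersonV1(name='Ada')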
I love their writing style:
So we were left with the pendulum equation: […] where T is the pendulum swing period, L length of the pendulum and g the gravitational acceleration. Changing the 2, or π is difficult even for CERN. So we are left with g, and L.
A bit later on:
It may require going from cryogenic temperatures to red hot. Not like we would not have 150 tons of liquid Helium here, but it is already booked for another experiment [the Large Hadron Collider].
Wait you’re telling me that we’ve been blasting the current time via radio waves across the continent since 1963 and I still have to reprogram my oven’s clock when I lose power???
You should be thankful that your microwave/oven clock keeps time as well as it does. I bought an alarm clock that manages to lose three minutes a month. What’s more, it was a brand name (Sony), Googling reveals everyone has this problem, and the clock is specifically designed for “extra automatic timekeeping”, with a battery backup in case of power outages, automatic DST calculation, and sliders to choose the time zone. Yet they couldn’t afford to use a crystal oscillator.
Any appliance connected to the grid has no excuse to fail at timekeeping. The power company takes great pains to average sixty cycles/second. Picking off and counting the signal is a simple circuit and easy enough for a four-bit processor.
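The counting logic itself is trivial. A rough sketch in Python, assuming a hypothetical zero_crossings source that fires once per mains cycle (in hardware, a comparator on the stepped-down AC input):

def mains_clock(zero_crossings, nominal_hz=60):
    # Because the utility steers the long-term average back to 60 Hz
    # (time error correction), the cycle count tracks wall-clock time
    # without any crystal at all.
    seconds = 0
    cycles = 0
    for _ in zero_crossings:
        cycles += 1
        if cycles == nominal_hz:
            cycles = 0
            seconds += 1
            yield seconds

# Example: 180 cycles of a clean 60 Hz feed advance the clock by 3 seconds.
print(list(mains_clock(range(180)))[-1])  # 3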
Some of these clocks are probably using tuning-fork style 32768 Hz crystal oscillators as found in wristwatches. They have a bit of a parabolic temperature dependence. In the original wrist-mounted application they do better, “ovenized” at about 98.6° F for living persons.
Yup, this is known as TEC (Time Error Correction). Unfortunately due to cost-cutting a great many clocks and other electronics that run solely or primarily off of AC power no longer plug directly into the power line but use an off-the-shelf AC-DC wall wart (internally or externally) - or, increasingly, a USB plug - and so can no longer avail themselves of the mains frequency for tracking time.
Power companies have been trying to weasel out of this responsibility for a couple of decades now as it simplifies things and reduces costs on their end. I wouldn’t be surprised to see TEC go sometime in the next ten years, tbh.
My Sony radio alarm clock did not have that excuse.
So what happens if the for-profit declines this request? Would the community be able to fork Gitea, come up with a new name, establish a governing body, and keep pushing forward on their own path?
Gitea started life as a fork of Gogs, so this seems entirely plausible, or even desirable, for two big reasons:
Large enterprise users have very different needs from indies and small communities, or even larger OSS projects. SSO, compliance, integration with e.g. “mature” devops tools like Jenkins and Artifactory, etc. all tend to drive enterprise usage. Smaller-scale users often care more about ease of installation and use, design and quality of the actual code, and openness to outside contributors.
DAO. ‘Nuf said.
SSO - Single Sign On
DAO - Decentralized Autonomous Organization; basically a bunch of people who get together through the power of CRYPTO to achieve common goals. Right now synonymous with “scam”, but that’s mostly because the default way of making money in crypto is via ponzi schemes[1].
[1] and yes, I count “yield farming” as a ponzi scheme.
no, they want to experiment with a DAO, but this is usually enough to get people to write off Gitea permanently
No it is not! “Decentralized autonomous organization” is basically a synonym for “the FOSS community”; it’s just that some people have subverted the literal meaning of the words with particular implementations that are beyond terrifying. That does not mean it cannot be done better, but apparently some people are unwilling to admit that the emperor wears no clothes, and resort to surface-level prejudice and childish dismissals.
If these (gitea) people have integrity and they want to find fair ways to organize this project then power to them! If not, then the reason for dismissing them is not because they used some words but rather because they did or tried to do something dishonest (which is not immediately apparent as soon as you claim you want to try to use cryptography to partially automate your organization).
I refer to my earlier comment on the initial announcement for context to interpret this one.
“decentralized autonomous organization” is basically a synonym of “the FOSS community”
I have never heard this definition, and I’ve been following FOSS since the late 90s and crypto since the white paper was published.
Note that crypto proponents love wrapping themselves in the open source mantle - almost all code is MIT-licensed, for example. But that’s just appropriating a cultural shibboleth. The ethos of crypto - artificial digital scarcity - is antithetical to what most people think of when they think of FLOSS.
Sorry but the ethos of crypto(graphy) is communicating without being misunderstood.
We figured out how to build artificial scarcity.. yay! (blegh). Now let’s build methods to manage the problem of “tragedy of the commons” - which is what we actually care about - this comes down to assessing what improves our collective security and by how much (relative to other such improvements). Such assessments are probabilistic and will be built on a social contract of sharing cryptographic commitments to assessments.
If we decide these assessments have meaning, for example by bridging into the legacy system by calling them “exchange rates” then what we’ll have are currencies that are scarce only in the sense that if you print too much you lose trust and your exchange rates suffer… a scale-free credit system; like the international stage is using to do p2p.
The system can be composed of sovereign individuals joining hands with all sorts of temporary contracts.. what happens if someone doesn’t honor their contract? Their exchange rate suffers. What happens if someone didn’t commit a legitimate improvement? Their exchange rate suffers…. This isn’t the only way to do it. I am just saying: we’ve been played for complete fools and it needs to end.
no dude… here’s the blog post: https://blog.gitea.io/2022/10/open-source-sustainment-and-the-future-of-gitea/
To preserve the community aspect of Gitea we are experimenting with creating a decentralized autonomous organization where contributors would receive benefits based on their participation such as from code, documentation, translations, and perhaps even assisting individual community members with support questions.
this doesn’t make sense if you replace “decentralized autonomous organization” with “the FOSS community.” I’m sorry but it’s definitely cryptocurrency related. any other form of organization that does those things would be less decentralized than the current community of contributors. and I can’t imagine why they would add “autonomous” unless they were referring to DAOs as people currently understand them.
Why can I call the FOSS community a DAO? I’m just saying this has precedent.
… they were referring to DAOs as people currently understand them.
Which I can agree is the concerning part.. which is why I mentioned earlier comment for context.
I am happy to be receiving engagement with this discussion and can admit that I am defending something slightly different than what they are doing but I feel that my defense gives adequate grounds as to /why/ they are doing this.
For me what is perplexing is what makes all these otherwise smart people feel like they have to limit the granularity of their discernment to “cryptobro” as soon as the notion of using cryptography to organize the causal part of societal communication is brought up. There is a lot of need for useful tools that can be trusted as well as a system of assessment that can give us confidence in funding those involved in building all this software. Incentives are hard to get right, but that is no reason not to try.. a large part of society is dedicated to that task (politics) and they are not using the best methods we know about.. why not?
I don’t see the connection between what you say is perplexing and what people have said in this thread. DAOs are a blockchain/cryptocurrency thing, which is not just “using cryptography.”
My premise is the literal intent in the phrase decentralized autonomous organization. Even though it currently refers to broken implementations it does not stop us from implementing something sane.
nobody will look through your post history to find a comment that you mention but don’t link to. and what was the point of arguing “decentralized autonomous organization” could refer to something totally different, if you agree that gitea is using it in the normal way?
I don’t see the connection between what you say is perplexing and what people have said in this thread.
Problem is, the term “DAO” is now burnt. Just like “web3” and “crypto”. No amount of explanation will be able to revert this.
Explanations maybe not, implementations definitely, the word “crypto” is only temporarily burnt.. you’ll see.
In which sense do you ask that question? Are you referring to the viability of doing so due to project size and complexity?
Other than that, if the code is MIT licensed, there’s no issue in forking. I don’t know about a governing body, a person can take initiative individually if they so wish.
Thankfully I’m tech-savvy enough to know what I’m doing. The two Macs I use most are still on High Sierra and Mojave, and haven’t received security updates in quite a while. I haven’t had any security-related problem whatsoever.
No security-related problems that he noticed, that is. What a naive statement.
That said, I tend to agree with the parts about unnecessary mixing of iOS and macOS.
*M1 or M2 Mac required.
Please someone with a Mac tell us how well this works.
I guess that’s no worse than the original project requiring an Nvidia GPU, not putting that in the requirements, and only telling me once I try to run it :( Currently trying some other stuff to get it to work on my AMD GPU.
Just tried it. Similarly to the reply above, it works exceptionally well on the M1 MBP. It does require an initial internet connection to download about 4.5 GB of weights, which makes sense.
Works flawlessly on my M1 MBP. It’s also one of the few applications that have a noticeable impact on battery life & generated heat.
some examples of stuff I managed to generate with it https://twitter.com/yogthos/status/1571976393168388098 https://twitter.com/yogthos/status/1571884333618561031
I lock her account down, and make sure everything is installed through brew/cask, like I do on my daily driver with dnf
Just keep in mind that you will have to manually install all updates; brew/cask don’t auto-update anything.
I certainly hope that the author’s original post is poorly phrased, because it sounds like a fucked up thing to do. And also an unnecessary layer of complexity and maintenance that forces them “to lock their partner’s account down”.
I think there is some missing context: judging from all the answers I got, which were all really helpful, people assume my partner is a programmer. But for clarification, my partner is not very technology literate. She used to install random .dmgs from the internet. I mostly use cask to install stuff which is not in the App Store. It’s mostly Firefox, Skype and others. Firefox takes care of its own updates, and I couldn’t find a way to install it from the App Store. (But I’m not a Mac person.)
Her account is not fully locked down, she has the admin password. And I also set up and taught her some basic security hygiene. (= I bought her a security key, and told her to only use bitwarden generated passwords and store everything in bitwarden)
But I’m always open to any suggestion to do things better. What am I doing wrong here?
I don’t think you’re doing anything wrong (besides the ambiguous phrasing) - to me it sounded just like the thing one does to non-technical family members’ machines if they agree and you’re available to be pestered.
I used some of these packages for a few (cycling-related) tools I’ve been working on. Super easy to use & self-explanatory.
i have been using their gpx crate to aggregate data from my cycling trips and visualize them. it is a delight to work with.
I am cursed with having to work with DICOM modalities and their view on how DICOM (the protocol) should work. It’s truly impressive how many different quirks and out-of-spec behaviors you encounter with different vendors…
Our DICOM-to-DicomWeb proxy must have the most special-case/lines-of-code ratio in the whole institute.
I remember when a developer at a previous job (dealing with DICOM images) discovered that little Bobby Tables could do his work in our product as well. It was an exciting and scary day. :-)
Oof, I’m sorry.
I remember the first time I got some DICOM files (my knees being shot it was an MRI or bone scan of them). The software that came on the DVD was terrible, and I was all “how complicated can this format be? It’s basically just a bitmap right?”. After a few hours trying to find correct format docs and discovering the myriad random extensions I took the “screw this” path :)
I guarantee you that software is still as bad as it was back then. :-) If you’re on macOS, OsiriX is one of the best viewers available.
Yes, please. nrepl is rock-solid when it’s working, but really finicky to set up in all but the most standard setups.
As for parallel-eval, there are two things to keep in mind: clients need to be able to (reliably) cancel computations, as well as specify timeouts after which a computation is cancelled by the server. Both are important to break out of an accidental (while true nil).
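To illustrate the requirement with a generic sketch (Python, not nREPL or Clojure): if each evaluation runs in its own process, the server can hard-cancel it on a client interrupt or after a timeout.

import multiprocessing as mp

def _eval_worker(source, out):
    # In a real eval server this would run client code inside a session/sandbox.
    out.put(eval(source))

def evaluate(source, timeout_s=5.0):
    out = mp.Queue()
    proc = mp.Process(target=_eval_worker, args=(source, out))
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():            # still running: cancel it, like an interrupt op
        proc.terminate()
        proc.join()
        return {"status": "cancelled"}
    return {"status": "ok", "value": out.get() if not out.empty() else None}

if __name__ == "__main__":
    print(evaluate("1 + 1"))                              # {'status': 'ok', 'value': 2}
    print(evaluate("__import__('time').sleep(60)", 0.5))  # {'status': 'cancelled'}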
One thing the author seemingly hasn’t tried is using a manufacturer-independent gateway for Zigbee like https://www.zigbee2mqtt.io. This approach uses a USB-to-Zigbee adapter and makes all Zigbee devices from all vendors talk to each other. Run on a Raspberry Pi together with HomeAssistant (general-purpose home automation) or Homebridge (to bridge to Apple HomeKit), this setup provides a great user experience (for me).
It also exposes two common and documented APIs for interfacing with the devices: HomeAssistant (high-level) and MQTT (lower-level, still easy).
I personally run Zigbee2mqtt with HomeBridge and control everything from the Apple ecosystem. It works flawlessly for me and my family. From the nerd-side, I stream all MQTT messages as JSON to a Postgres database and use Grafana to plot various metrics from sensors and devices.
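The streaming part is only a few lines. A sketch of what such a bridge can look like in Python with paho-mqtt 1.x and psycopg2 (the topic prefix, table, and connection details are assumptions, not my exact setup):

import json
import paho.mqtt.client as mqtt
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=home")  # placeholder connection string

def on_message(client, userdata, msg):
    try:
        payload = json.loads(msg.payload)
    except ValueError:
        return  # skip non-JSON messages (e.g. availability topics)
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO mqtt_messages (topic, payload, received_at) VALUES (%s, %s, now())",
            (msg.topic, Json(payload)),
        )

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)  # the broker zigbee2mqtt publishes to
client.subscribe("zigbee2mqtt/#")
client.loop_forever()

Grafana then just queries that table (or per-device views) for the plots.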
Personally, I don’t like zigbee2mqtt. However, it is the best solution for a vendor-independent gateway. I’m looking at zigbee-lua, which uses Lua instead of JavaScript, and at other alternatives.
I agree - the amount of code and complexity of the Javascript in zigbee2mqtt is staggering. Thanks for pointing me to zigbee-lua - looks great.
In an ideal world, there would be a standardised mqtt protocol definition with zigbee2mqtt and others implementing that protocol. From a short look it’s not entirely clear if zigbee-lua tries to be wire-compatible with zigbee2mqtt.
In an ideal world, there would be a standardised mqtt protocol definition with zigbee2mqtt and others implementing that protocol.
Absolutely agree.
As far as I understand it, the main complexity of zigbee2mqtt is the quirks for different Zigbee devices that don’t conform to the Zigbee specification. For interaction with Zigbee devices, z2m uses zigbee-herdsman-converters to parse messages to and from devices. Adding support for a new Zigbee device to z2m is essentially implementing a new converter that understands and processes messages from the new device.
There is an alternative to zigbee-herdsman-converters written in Python: zha-device-handlers. It uses zigpy for access to Zigbee messages, and it is used by the Zigbee plugin for Home Assistant. zha-device-handlers contains a huge number of quirks for Zigbee devices; see the subdirectories in the zhaquirks directory. zha-device-handlers has a great explanation of quirks for Zigbee devices and it is worth reading - https://github.com/zigpy/zha-device-handlers#what-the-heck-is-a-quirk
I’d never heard of zigbee-lua, but it looks pretty stale compared to zigbee2mqtt. I don’t find the implementation language anything more than an implementation detail, especially not when typically running in a container.
I’ve used z2m on a raspberry pi and homeassistant on another. I moved to ZHA (the native Home assistant implementation) only by accident and was too lazy to start over again. But I’ll go back to z2m again after moving houses soon.
I have tried using a USB-to-zigbee adapter, but with custom software and not zigbee2mqtt.
Maybe the experience would have been better with zigbee2mqtt, but I generally like building my own stuff. From that perspective, the zigbee stack is not great, and my USB-to-zigbee interfered with the IKEA tradfri gateway pairing process.
I hope more modern smart home standards result in better ecosystems, but I’ll stick to the vendor gateways for now :)
The software mine came with was absolutely horrendous - which is usually the case for such systems. I updated the stick’s software and then uninstalled it :-)
The great thing is that Zigbee2mqtt makes building your own stuff incredibly nice - you just start at a different level: MQTT instead of the stick’s serial interface. Zigbee2mqtt is the gateway and provides an API (via MQTT) to control devices in your Zigbee network.
That it will interfere with other gateways when both are open for devices to join is expected. Multiple separate Zigbee networks in the same house work fine; devices just get confused about which network to join when multiple gateways are in pairing mode.
In my use case the only gateway is the USB stick used by zigbee2mqtt, which removes the need for all the other gateways, thus freeing you from having to use a different gateway for each vendor and putting you much more in control of the stack.
Well, there’s always Matter, which should come out sometime this year. I’ll keep on using Home Assistant until Matter makes sense to switch to - and then maybe. Home Assistant is pretty nice: not having to care what ecosystem my stuff is in and just interacting with all of it instead.
I agree with most people here - just use with.
However, in some situations, I also found another pattern ergonomic:
def step1({:ok, data}), do: {:ok, data + 1}
def step1(fallthrough), do: fallthrough
def step2({:ok, data}), do: {:ok, data * 42}
def step2(fallthrough), do: fallthrough
def step3({:ok, data}), do: {:ok, to_string(data)}
def step3(fallthrough), do: fallthrough
def process(data) do
data
|> step1()
|> step2()
|> step3()
end
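For illustration (example values): process({:ok, 1}) returns {:ok, "84"}, while process({:error, :oops}) falls through every step unchanged.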
I’m not into keeping up, but I do like to make stuff occasionally, and stay away from web because it’s too painful finding out how to do anything without being immersed in it day to day.
I’ve used Phoenix LiveView (for Elixir) recently, however, to make a Wordle-alike. It was a great experience and I didn’t have to write any JavaScript! I still had to write some CSS, but it has variables now (you can see how little I keep up) and that made life a lot easier.
I just want to highlight how great the LiveView concept is. As a gist: HTML is generated on the server and synced with the browser via diffing. The server is the authority, and all events are sent to the server for processing.
That approach solves a great deal of issues: no need to write a REST API endpoint for every feature, no need to serialise data structures, no need to introduce a separate build for the frontend, easy testing (no need to spawn browsers), …
Those are indeed substantial advantages. What about offline SPAs? Is there any way to keep the app running if there’s no network connection?
No, because it’s all requests to the server. I’ve seen some attempts to get a BEAM implementation to run in the browser, however (I think via WebAssembly), and if this becomes possible and feasible (small download, fast startup, low memory overhead) then I look forward to server-side rendering from client-side code!
I have definitely found that it’s not working as snappily as I’d hoped when accessed by mobile phone. Not being a web developer I have no idea how to diagnose this. I thought I might see lag due to latency, but all is well on a laptop (on the same network) so I’m guessing it could be grunt required in the browser, which is a little disappointing.
Game here: https://fivelettrs.fly.dev/
It’s pretty snappy on my iPhone, but it’s a recent model.
Looking at the client-server communication it seems like your template sends all “dynamic” strings on every model change. Clicking on a character gets this response from the server:
["4","6","lv:phx-FtAFezJXYPUlsRqR","phx_reply",{"response":{"diff":{"1":{"d":[["guess-row ",{"d":[["guess pending","r"],["guess pending current"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "]],"s":0}],["guess-row",{"d":[["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "]],"s":0}],["guess-row",{"d":[["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "]],"s":0}],["guess-row",{"d":[["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "]],"s":0}],["guess-row",{"d":[["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "]],"s":0}],["guess-row",{"d":[["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "],["guess pending"," "]],"s":0}]],"p":{"0":["\n <span class=\"","\">","</span>\n"]}},"2":{"0":{"d":[[" id=\"q\"","key pending"," phx-value-key=\"q\"","q"],[" id=\"w\"","key pending"," phx-value-key=\"w\"","w"],[" id=\"e\"","key pending"," phx-value-key=\"e\"","e"],[" id=\"r\"","key pending"," phx-value-key=\"r\"","r"],[" id=\"t\"","key pending"," phx-value-key=\"t\"","t"],[" id=\"y\"","key pending"," phx-value-key=\"y\"","y"],[" id=\"u\"","key pending"," phx-value-key=\"u\"","u"],[" id=\"i\"","key pending"," phx-value-key=\"i\"","i"],[" id=\"o\"","key pending"," phx-value-key=\"o\"","o"],[" id=\"p\"","key pending"," phx-value-key=\"p\"","p"]]},"1":{"d":[[" id=\"a\"","key pending"," phx-value-key=\"a\"","a"],[" id=\"s\"","key pending"," phx-value-key=\"s\"","s"],[" id=\"d\"","key pending"," phx-value-key=\"d\"","d"],[" id=\"f\"","key pending"," phx-value-key=\"f\"","f"],[" id=\"g\"","key pending"," phx-value-key=\"g\"","g"],[" id=\"h\"","key pending"," phx-value-key=\"h\"","h"],[" id=\"j\"","key pending"," phx-value-key=\"j\"","j"],[" id=\"k\"","key pending"," phx-value-key=\"k\"","k"],[" id=\"l\"","key pending"," phx-value-key=\"l\"","l"]]},"2":"key backspace","3":{"d":[[" id=\"z\"","key pending"," phx-value-key=\"z\"","z"],[" id=\"x\"","key pending"," phx-value-key=\"x\"","x"],[" id=\"c\"","key pending"," phx-value-key=\"c\"","c"],[" id=\"v\"","key pending"," phx-value-key=\"v\"","v"],[" id=\"b\"","key pending"," phx-value-key=\"b\"","b"],[" id=\"n\"","key pending"," phx-value-key=\"n\"","n"],[" id=\"m\"","key pending"," phx-value-key=\"m\"","m"]]},"4":"key enter disabled"}}},"status":"ok"}] 1643819914.7385468
…which is way too much data for a single-cell change. I’m guessing that slower JS engines may feel sluggish when reconciling the virtual DOM in the browser with the changes from the server (LiveView doesn’t directly update the DOM but uses a virtual-DOM library internally as an optimization).
Thankfully, that shouldn’t be too hard to fix (as all these DOM patches are unnecessary). My guess is that your #guesses div is re-rendered fully on every change because the change tracking of the HEEx template isn’t working correctly. Without the code one can only guess why. A good first step is looking at Change Tracking Pitfalls.
I’ll have a look, thanks. It improved somewhat after I took some advice on how to render the (dynamic) grid and keyboard in a way that was HEEx-friendly, but looking at the size of that diff I don’t think it’s quite right yet!
In case you’re interested in seeing my rather fumbling first attempt at LiveView: here’s the repo and here’s a direct link to the heex, liveview and the game struct/code
After some pain, I’ve concluded that LiveView can’t do the change tracking needed if I am using a single ‘game’ struct, and therefore I’m doing what I feared would be necessary: an ‘assign’ for every visible grid ‘tile’ and keyboard ‘key’.
Small diffs but nasty HEEx and code!
If I remember correctly, HEEx can track changes inside structs. So the culprit may be the calls to functions that get passed the whole @game assign. I think if you rewrite your Game struct to contain the required data directly, so the template doesn’t have to go through those function calls, you should get good performance too. You could also introduce a GameStateView (or similar) struct which contains the data in a format ready for use in the template.
Today I was being polite and held the door for someone, but they looked like they wouldn’t be paying me for doing that, so I slammed the door at them and broke their fucking face. I can’t believe anybody is endorsing this behavior.
Exactly. There are two very different scenarios:
I hold the door open for you but my hand slips and it shuts on you and you get hurt. This is the kind of scenario that the disclaimer of warranty in most open source licenses covers: you made a best-effort attempt to do the nice thing, you failed, and someone got hurt.
I hold the door open for you and then slam it in your face as you get here. This is actively malicious behaviour that causes actual harm. In the metaphor, this would likely be covered by something like actual bodily harm. In the scenario in the article, this is likely covered by computer misuse laws. No disclaimer of warranty protects you.
Can you un-publish a library on npm, and what happens if you do? If so, a third scenario would appear:
I stop opening the door for you and you run into it because you expected me to open it.
Un-publishing on npm has rules. If the door is not used that much (or was only built yesterday), you can leave it closed if you want.
It is in fact what happened back in 2016 with the infamous left-pad incident.
Note that Rust does this automatically for you since Rust 1.18, released in 2017. By coincidence, the example case used, (u8, u16, u8), is exactly the same as in the Rust release notes.
As far as I know, there is nothing in the spec that guarantees struct layout order, and the only way to really observe it is to use the unsafe package, which is not covered by the Go 1 compatibility promise. So technically it could.
That said, changing it now would break too much code that uses structs with cgo/assembly/syscalls, so I doubt it will happen any time soon. If it ever does, I’d expect it will come with a similar annotation to Rust’s #[repr(C)], at the very least.
Here’s an issue where these things have been considered: https://github.com/golang/go/issues/36606
That seems rather brutal for binary compatibility or when interacting with C. I assume it can be turned off on a struct by struct basis?
And all the details (and other representations) are documented: https://doc.rust-lang.org/reference/type-layout.html#representations
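For anyone wondering why the reordering matters: with declaration-order (C-style) layout - what #[repr(C)] pins down - the (u8, u16, u8) example from above needs padding, while a size-sorted order doesn’t. A quick illustration using Python’s ctypes, which follows the platform’s C layout rules:

import ctypes

class AsWritten(ctypes.Structure):   # (u8, u16, u8), fields kept in declaration order
    _fields_ = [("a", ctypes.c_uint8),
                ("b", ctypes.c_uint16),
                ("c", ctypes.c_uint8)]

class Reordered(ctypes.Structure):   # (u16, u8, u8), the kind of order Rust picks automatically
    _fields_ = [("b", ctypes.c_uint16),
                ("a", ctypes.c_uint8),
                ("c", ctypes.c_uint8)]

print(ctypes.sizeof(AsWritten))  # 6 on typical targets: one pad byte after 'a', one after 'c'
print(ctypes.sizeof(Reordered))  # 4: no padding needed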
For hardware or artificial-scarcity fetishists I guess this is really exciting. For everyone else, I don’t know why anyone would waste donation money on something which we can perfectly emulate on just about every platform and architecture you can imagine. These emulators also have the benefit of allowing you to use digital copies of the games instead of increasingly rare and limited cartridges.
I don’t think the “rare and limited” argument holds. There are things like the Everdrive which emulate NES cartridges.
This debate is also held for pretty much anything that’s old and collectable: After restoring an old motorcycle, should you ride it or put it in a display case to look at?
It should be possible to make novel hardware cartridges too, although perhaps more difficult to legally sell them for copyright reasons (but you could imagine selling a generic NES cartridge that reads data from a SD card, leaving it to the end user to be the one to violate Nintendo’s copyright by downloading the NES roms and putting them on the SD card).
Krikzz (https://krikzz.com) has made an industry out of this very idea.
SQLite is my go-to for small to medium size webapps that could reasonably run on a single server. It is zero effort to set up. If you need a higher performance DB, you probably need to scale past a single server anyway, and then you have a whole bunch of other scaling issues, where you need a web cache and other stuff anyway.
Reasons not to do that are handling backups somewhere other than the application, good inspection tools while your app runs, perf-optimization things you can’t do in SQLite (also “shared” memory usage with one big DBMS instance), and the easier path for migrating to a multi-machine setup. Lastly you’ll also get separation of concerns, allowing you to split up some parts of your app into different permission levels.
Regarding backups: what’s wrong with the .backup command?
If I’m reading that right, you’ll have to build that into your application. Postgres/MariaDB can be backed up (and restored) without any application interaction. Thus it can also be performed by a specialized backup user (making it also a little bit more secure).
As far as I know, you can use the sqlite3 CLI tool to run .backup while your application is still running. I think it’s fine if you have multiple readers while one process is writing to the DB.
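The same online-backup API is also exposed by language bindings, e.g. Python’s sqlite3 module, so the backup can run as a completely separate process with no application involvement. A sketch (file names are placeholders):

import sqlite3

src = sqlite3.connect("app.db")          # the database the application is writing to
dst = sqlite3.connect("app-backup.db")   # destination snapshot
with dst:
    src.backup(dst)  # copies a consistent snapshot page by page, even while writers are active
dst.close()
src.close()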
You could use litestream to stream your SQLite changes to local and offsite backups. Works pretty well.
Ok, but instead of adding another dependency that solves the shortcomings of not using a DBMS (and that I’ll also have to care about), I could instead use a DBMS.
OK, but then you need to administer a DBMS server, with security, performance, testing, and other implications. The point is that there are tradeoffs and that SQLite offers a simple one for many applications.
Not just that, but what exactly are the problems that make someone need a DBMS server? Sqlite3 is thread safe and for remote replication you can just use something like https://www.symmetricds.org/, right? Even then, you can safely store data up to a couple of terabytes in a single Sqlite3 server, too, and it’s pretty fault tolerant by itself. Am I missing something here?
What does a “single sqlite3 server” mean in the context of an embedded database?
How do you run N copies of your application for HA/operational purposes when the database is “glued with only one instance of the application”?
It’s far from easy in my experience.
My experience has been that managing Postgres replication is also far from easy (though to be fair, Amazon will now do this for you if you’re willing to pay for it).
SymmetricDS supports many databases and can replicate across different databases, including Oracle, MySQL, MariaDB, PostgreSQL, MS SQL Server (including Azure), IBM DB2 (UDB, iSeries, and zSeries), H2, HSQLDB, Derby, Firebird, Interbase, Informix, Greenplum, SQLite, Sybase ASE, Sybase ASA (SQL Anywhere), Amazon Redshift, MongoDB, and VoltDB databases.
This seems quite remarkable - any experience with it?
Where do you see the difference between litestream and a tool to backup Postgres/MariaDB? Last time I checked my self-hosted Postgres instance didn’t backup itself.
You have a point but nearly every dbms hoster has automatic backups and I know many backup solutions that automate this. I am running stuff only by myself though (no SaaS)
No, it’s fine to open a SQLite database in another process, such as the CLI. And as long as you use WAL mode, a writer doesn’t interrupt a reader, and a reader can use a RO transaction to operate on a consistent snapshot of the database.
I wonder how the applicants had to submit/present their solution. I’m not sure if I’d manage to write a syntactically correct & compiling FizzBuzz in C or so on a whiteboard. Is a missing ; already a reason to fail the test?
I think the whole procedure is to filter out people who aren’t able to think like a programmer. I wouldn’t screen out someone who makes syntax errors but otherwise applies the right logic in their solution, because the logic is where you can’t get help. The compiler will tell you about a syntactic or semantic error, but never a logic error.
I did have one interview where I literally said something like “I don’t remember if it was .strip() or .trim() or whatever” and it was marked down as an error, so I guess the spectrum allows for “you missed a ; here”
That sounds like a great interview. You learned that the company valued memorising details that are trivial to look up over thinking skills.
Interviews are a bidirectional communication channel. I don’t know what information was transferred towards them but they efficiently communicated that they’re not the kind of place you’d want to work.
I had an interview where someone asked about inner vs. outer join and I said, “Oh, I just google that every time.” Didn’t get the job. :-)
That’s a good way of telling someone who writes a little bit of sql from someone who writes it regularly. Whether or not that’s relevant information is another question.
Yeah, or someone who uses an ORM or someone who avoids creating tables that require JOINS or …
I think part of what throws me off about the question is that I never use either by that name. I use LEFT JOIN (which is one kind of outer join) pretty much exclusively, although I did have cause for a CROSS JOIN the other day.
Looks like the employee is based in the UK. As you might expect, most of the responses to his announcement are Bad Legal Advice. This comment is also going to be Bad Legal Advice (IANAL!) but I have some experience and a little background knowledge so I hope I can comment more wisely…
The way FOSS (and indeed all private-time) software development works here for employees is that according to your contract your employer will own everything you create, even in your private time. Opinions I’ve heard from solicitors and employment law experts suggest that this practice might constitute an over-broad, “unfair”, contract term under UK law. That means you might be able to get it overturned if you really tried, but you’d have to litigate to resolve it. At any rate the de facto status is: they own it by default.
What employees typically do is seek an IP waiver from their employer, where the employer disclaims ownership of the side project. The employer can refuse. If you’ve already started, they could take ownership, as apparently is happening in this case. Probably in that scenario what you should not do is try to pre-emptively fork under some idea that your project is FOSS and that you have that right. The employer will likely take the view that because you aren’t the legal holder of the IP, you aren’t entitled to release either the original or the fork as FOSS - so you’d be improperly releasing corporate source code. Pushing that subject is a speedy route to dismissal for “gross misconduct” - which is a sufficient reason for summary dismissal, no process except appeal to a tribunal after the fact.
My personal experience seeking IP waivers, before I turned contractor (after which none of the above applies), was mixed. One startup refused it and even reprimanded me for asking - the management took the view that any side project was a “distraction from the main goal”. Conversely ThoughtWorks granted IP waivers pretty much blanket - you entered your project name and description in a shared spreadsheet and they sent you a notice when the solicitor saw the new entry. They took professional pride in never refusing unless it conflicted with the client you were currently working with.
My guess is that legal rules and practices on this are similar in most common law countries (UK, Australia, Canada, America, NZ).
The way FOSS (and indeed all private-time) software development works here for employees is that according to your contract your employer will own everything you create, even in your private time.
This seems absurd. If I’m a chef, do things I cook in my kitchen at home belong to my employer? If I’m a writer do my kids’ book reports that I help with become privileged? If I’m a mechanic can I no longer change my in-laws’ oil?
Why is software singled out like this and, moreover, why do people think it’s okay?
There have been cases of employees claiming to have written some essential piece of software their employer relied on in their spare time. Sometimes that was even plausible, but still, it’s essentially taking your employer hostage. There have been cases of people starting competitors to their employer in their spare time; what is or is not competition is often subject to differences of opinion and is often a matter of degree. These are grey areas that are threatening to business owners, which they want to blanket-prevent with such contractual stipulations.
Software isn’t singled out. It’s exactly the same in all kinds of research, design and other creative activities.
There have been cases of people starting competitors to their employer in their spare time;
Sounds fine to me, what’s the problem? Should it be illegal for an employer to look for a way to lay off employees or otherwise reduce its workforce?
what’s the problem?
I think it’s a pretty large problem if someone can become a colleague, quickly hoover up all the hard won knowledge we’ve together accumulated over the past decade, then start a direct competitor to my employer, possibly putting me out of work.
You’re thinking of large faceless companies that you have no allegiance to. I’m thinking of the two founders of the company that employs me and my two dozen colleagues, whom I feel loyal towards.
This kind of thing protects smaller companies more than larger ones.
…start a direct competitor to my employer, possibly putting me out of work.
Go work for the competitor! Also, people can already do pretty much what you describe in much of the US where non-competes are unenforceable. To be clear, I think this kind of hyper competitiveness is gross, and I would much rather collaborate with people to solve problems than stab them in the back (I’m a terrible capitalist). But I’m absolutely opposed to giving companies this kind of legal control over (and “protection” from) their employees.
Go work for the competitor!
Who says they want me? Also I care for my colleagues: who says they want them as well?
where non-competes are unenforceable
Overly broad non-competes are unenforceable when used to attempt to enforce against something not clearly competition. They are perfectly enforceable if you start working for, or start, a direct competitor, profiting from very specific relevant knowledge.
opposed to giving companies this kind of legal control
As I see it we don’t give “the company” legal control: we effectively give humans, me and my colleagues, legal control over what new colleagues are allowed to do, in the short run, with the knowledge and experience they gain from working with us. We’re not protecting some nameless company: we’re protecting our livelihood.
And please note that my employer does waive rights to unrelated side projects if you ask them, waives rights to contributions to OSS, etc. Also note that non-compete restrictions are only for a year anyway.
Who says they want me? Also I care for my colleagues: who says they want them as well?
Well then get a different job, get over it, someone produced a better product than your company, that’s the whole point of capitalism!
They are perfectly enforceable if you start working for, or start, a direct competitor, profiting from very specific relevant knowledge.
Not in California, at least, it’s trivially easy to Google this.
As I see it we don’t give “the company” legal control: we effectively give humans, me and my colleagues, legal control over what new colleagues are allowed to do, in the short run, with the knowledge and experience they gain from working with us.
Are you a legal party to the contract? If not, then no, it’s a contract with your employer and if it suits your employer to use it to screw you over, they probably will.
I truly hope that you work for amazing people, but you need to recognize that almost no one else does.
Even small startups routinely screw over their employees, so unless I’ve got a crazy amount of vested equity, I have literally zero loyalty, and that’s exactly how capitalism is supposed to work: the company doesn’t have to care about me, and I don’t have to care about the company, we help each other out only as long as it benefits us.
Go work for the competitor?
Why would the competitor want/need the person they formerly worked with/for?
Why did the original company need the person who started the competitor? Companies need workers and if the competitor puts the original company out of business (I was responding to the “putting me out of work” bit) then presumably it has taken on the original company’s customers and will need more workers, and who better than people already familiar with the industry!
Laying off and reducing the workforce can be regulated (and is in my non-US country). The issue with having employees start competitor products is that they benefit from an unfair advantage, and it creates a huge conflict of interest.
Modern Silicon Valley began with employees starting competitor products: https://en.wikipedia.org/wiki/Traitorous_eight
If California enforced non-compete agreements, Silicon Valley might well not have ended up existing. Non-enforcement of noncompetes is believed to be one of the major factors that resulted in Silicon Valley overtaking Boston’s Route 128 corridor, formerly a competitive center of technology development: https://hbr.org/2016/11/the-reason-silicon-valley-beat-out-boston-for-vc-dominance
I don’t think we are talking about the same thing. While I agree that any restriction on post-employment should be banned, I don’t think it is unfair for an organization to ask their employees to not work on competing products while being under their payroll. These are two very different situations.
If the employee uses company IP in their product then sure, sue them, that’s totally fair. But if the employee wants to use their deep knowledge of an industry to build a better product in their free time, then it sucks for their employer, but that’s capitalism. Maybe the employer should have made a better product so it would be harder for the employee to build something to compete with it. In fact, it seems like encouraging employees to compete with their employers would actually be good for consumers and the economy / society at large.
An employee working on competing products in their free time creates an unfair advantage because the employee has access to the organization’s IP to build their new product while the organization does not have access to the competing product’s IP. So what’s the difference between industrial espionage and employees working on competing products in their free time?
If the employee uses company IP in their product then sure, sue them, that’s totally fair.
That was literally in the comment you responded to.
Joel Spolsky wrote a piece that frames it well, I think. I don’t personally find it especially persuasive, but I think it does answer the question of why software falls into a different bucket than cooking at home or working on a car under your shade tree, and why many people think it’s OK.
Does this article suggest the employers view contracts as paying for an employee’s time, rather than just paying for their work?
Could a contract just be “in exchange for this salary, we’d like $some_metric of work”, with working hours just being something to help with management? It seems irrelevant when you came up with something, as long as you ultimately give your employer the amount of work they paid you for.
Why should an employer care about extra work being released as FOSS if they’ve already received the amount they paid an employee for?
EDIT: I realise now that $some_metric is probably very hard to define in terms of anything except number of hours worked, which ends up being the same problem
Does this article suggest the employers view contracts as paying for an employee’s time, rather than just paying for their work?
I didn’t read it that way. It’s short, though. I’d suggest reading it and forming your own impression.
Could a contract just be “in exchange for this salary, we’d like $some_metric of work”, with working hours just being something to help with management? It seems irrelevant when you came up with something, as long as you ultimately give your employer the amount of work they paid you for.
I’d certainly think that one of many possible reasonable work arrangements. I didn’t link the article intending to advocate for any particular one, and I don’t think its author intended to with this piece, either.
I only linked it as an answer to the question that I read in /u/lorddimwit’s comment as “why is this even a thing?” because I think it’s a plausible and cogent explanation of how these agreements might come to be as widespread as they are.
Why should an employer care about extra work being released as FOSS if they’ve already received the amount they paid an employee for?
As a general matter, I don’t believe they should. One reason I’ve heard given for why they might is that they’re afraid it will help their competition. I, once again, do not find that persuasive personally. But it is one perceived interest in the matter that might lead an employer to negotiate an agreement that precludes releasing side work without concurrence from management.
I only linked it as an answer to the question that I read in /u/lorddimwit’s comment as “why is this even a thing?” because I think it’s a plausible and cogent explanation of how these agreements might come to be as widespread as they are.
I think so too, and hope I didn’t come across as assuming you (or the article) were advocating anything that needs to be argued!
I didn’t read it that way. It’s short, though. I’d suggest reading it and forming your own impression.
I’d definitely gotten confused because I completely ignored that the author is saying that the thinking can become “I don’t just want to buy your 9:00-5:00 inventions. I want them all, and I’m going to pay you a nice salary to get them all”. Sorry!
There is a huge difference: we’re talking about creativity and invention. The company isn’t hiring you to change some oil or swap some server hardware. They’re hiring you to solve their problems, to be creative and think of solutions. (Which is also why I don’t think it’s relevant how many hours you actually coded; the result and the time you thought about it are what matter.) Your company doesn’t exist because it’s changing oil; the value is in the code (hopefully) and thus their IP.
So yes, that’s why this stuff is actually different. Obviously you want to have exemptions from this kind of stuff when you do FOSS things.
I think the chef and mechanic examples are a bit different since they’re not creating intellectual property, and a book report is probably not interesting to an employer.
Maybe a closer example would be a chef employed to write recipes for a book/site. Their employer might have a problem with them creating and publishing their own recipes for free in their own time. Similarly, maybe a writer could get in trouble for independently publishing things written in their own time while employed to write for a company. I can see it happening for other IP that isn’t software, although I don’t know if it happens in reality.
I think the “not interesting” bit is a key point here. I have no idea what Bumble is or the scope of the company, and I speak out of frustration with these overarching “legal” restrictions, but it sounds like they are an immature organization trying to hold on to anything interesting their employees do, core to the current business or not, in case they need to pivot or find a new revenue stream.
Frankly, if a company is so fearful that a couple of technologies will make or break their company, their business model sucks. Technology != product.
Similarly, maybe a writer could get in trouble for independently publishing things written in their own time while employed to write for a company
I know of at least one online magazine’s contracts which forbid exactly this. If you write for them, you publicly only write for them.
This is pretty much my (non-lawyer) understanding and a good summary, thanks.
If you find yourself in this situation, talk to a lawyer. However I suspect that unless you have deep pockets and a willingness to litigate “is this clause enforceable” through several courts, your best chance is likely to be reaching some agreement with the company that gives them what they want whilst letting you retain control of the project or at least a fork.
One startup refused it and even reprimanded me for asking - the management took the view that any side project was a “distraction from the main goal”
I think the legal term for this is “bunch of arsehats”. I’m curious to know whether you worked for them after they started out like this?
I think the legal term for this is “bunch of arsehats”.
https://www.youtube.com/watch?v=Oz8RjPAD2Jk
I’m curious to know whether you worked for them after they started out like this?
I left shortly after for other reasons
The way FOSS (and indeed all private-time) software development works here for employees is that according to your contract your employer will own everything you create, even in your private time
Is it really that widespread? It’s a question that we get asked by candidates but our contract is pretty clear that personal-time open source comes under the moonlighting clause (i.e. don’t directly compete with your employer). If it is, we should make a bigger deal about it in recruiting.
I would think the solution is to quit, then start a new project without re-using any line of code from the old project - but I guess the lawyers thought of this too and added clauses giving them ownership of the new project too…
1Password on macOS can act as an SSH agent