When programmers discuss the problems caused by leap seconds, they usually agree that the solution is to “just” keep time in TAI, and apply leap second adjustments at the outer layers of the system, in a similar manner to time zones.
Abolishing leap seconds will be helpful for users that have very tight accuracy requirements. UTC is the only timescale that is provided by national laboratories in a metrologically traceable manner, i.e. in a way that provides real-time access, and allows users to demonstrate exactly how accurate their timekeeping is.
TAI is not directly traceable in the same way: it is only a “paper” clock, published monthly in arrears in Circular T as a table of corrections to the various national timescales. (See the explanatory supplement for more details).
The effect of this is that users who require high-accuracy uniformity and traceability have to implement leap seconds to recover a uniform timescale from UTC - not the other way round as the programmers’ falsehood would have it.
“Disseminate TAI (along with the current leap offset) and implement the leap seconds at the point of use” might be a “programmers’ falsehood” but it’s also what 3 out of 4 GNSS systems actually do, so it has something going for it.
The fact that UTC (and not TAI-like timescales) is blessed for dissemination and traceability is downstream of ITU’s request that only UTC should be broadcast; not because of any technical difficulty with disseminating/processing traceable TAI-like timescales:
GNSS system times are not representations of UTC, and being broadcast they are not fulfilling requests of ITU, which is recommending only UTC to be broadcast
GNSS system times shall be considered as internal technical parameters used only for internal system synchronization, but this is not the case.
The time that comes out of an atomic clock looks like TAI; adding the leap seconds is something that has to come afterwards, and a system designer gets to choose when. Leap seconds don’t factor into calculations involving the precise timing of moving objects, whether the objects are airplanes, the angles/phases of generators in a power grid, particles in an accelerator, RF signals, etc. Unless you’re OK with grievously anomalous results whenever there’s a leap second, you want a timescale that looks like TAI, not one that looks like UTC. Why bake the leap seconds into your timescale early on if you’re going to need to unbake them every time you calculate with times?
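The anomaly is easy to demonstrate. A minimal sketch (the leap-second table entry is from the published TAI-UTC history: the offset went from 36 s to 37 s at the end of 2016): naive subtraction of UTC wall-clock labels across that boundary silently loses the inserted second, while subtraction in a TAI-like timescale does not.

```python
# Two events 2 real (SI) seconds apart, straddling the 2016-12-31 leap
# second (UTC 23:59:60 occurred between them). Python's datetime cannot
# even represent 23:59:60, which is part of the problem.
from datetime import datetime, timedelta

a_utc = datetime(2016, 12, 31, 23, 59, 59)
b_utc = datetime(2017, 1, 1, 0, 0, 0)

naive = (b_utc - a_utc).total_seconds()  # loses the leap second

# In a TAI-like timescale the same two events are 2 seconds apart:
# TAI-UTC was 36 s before the leap and 37 s after it.
a_tai = a_utc + timedelta(seconds=36)
b_tai = b_utc + timedelta(seconds=37)
true_elapsed = (b_tai - a_tai).total_seconds()

print(naive)         # 1.0 -- off by one second
print(true_elapsed)  # 2.0 -- the actual elapsed SI seconds
```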
The designers of GPS, BeiDou, and Galileo all wisely flout this ITU recommendation: their constellations broadcast a TAI-like timescale (alongside the current leap second offset). The designers of PTP also flout this recommendation – by default, the timescale for PTP packets is TAI (usually derived from a GNSS receiver), not UTC. Should you want to reconstitute a UTC time, there is a currentUtcOffset field in PTP timestamps, whose value you can add to a TAI time.
This “disseminate UTC only” ITU recommendation has been at fundamental odds with real-world PNT systems ever since GPS was known as “NAVSTAR GPS”.
Except GLONASS does disseminate UTC, including leap seconds.
If you want your time signal to be broadly useful for celestial navigation, UTC is the way to go, as it’s (for now) guaranteed to be within 1s of UT1. I believe that’s where the ITU’s recommendation comes from. That said, it’s probably time for this use case to take a back seat to the broader issues caused by leap seconds.
You aren’t really arguing against what I said, because my “falsehood” did not talk about disseminating TAI along with the UTC offset (that isn’t “just” TAI). Those paragraphs were really an introduction to the next section where I explain that systems can’t avoid working with UTC. And GPS and PTP do not avoid working with UTC: as you said, they tackle its awkwardness head-on.
The way GPS handles UTC is even more complicated, though. Take a look at the GPS interface specification, in particular IS-GPS-200 section 20.3.3.5.2.4 (sic!) where it specifies that UTC is not “just” GPS time plus the leap second offset: there are also parameters A_0 and A_1 that describe a more fine-grained rate and phase adjustment. Section 3.3.4 says more about how GPS time relates to UTC:
The OCS shall control the GPS time scale to be within one microsecond of UTC (modulo one second). The LNAV/CNAV data contains the requisite data for relating GPS time to UTC. The accuracy of this data during the transmission interval shall be such that it relates GPS time (maintained by the MCS of the CS) to UTC (USNO) within 20 nanoseconds (one sigma).
This is (basically) related to the fact that atomic clocks need to be adjusted to tick at the same rate as UTC owing to various effects, including special and general relativity, e.g. NIST and the GPS operations centre are a mile high in Colorado, so their clocks tick at a different rate to the USNO in Washington DC, and a different rate from the clocks in the satellites whizzing around in orbit.
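The spec’s correction can be sketched as a first-order polynomial. The parameter names below follow IS-GPS-200 (Δt_LS is the integer leap-second count, A_0 and A_1 the broadcast phase and rate terms, t_ot/WN_t the reference time and week number); the numeric values are made up for illustration, not real broadcast data.

```python
# Sketch of the IS-GPS-200 (20.3.3.5.2.4) first-order GPS-to-UTC
# correction. Illustrative values only.
SECONDS_PER_WEEK = 604_800

def gps_to_utc_offset(t_e, wn, delta_t_ls, a0, a1, t_ot, wn_t):
    """Total GPS-UTC offset in seconds: integer leap seconds plus the
    fine-grained phase (A_0) and rate (A_1) terms broadcast by the
    control segment."""
    dt = t_e - t_ot + SECONDS_PER_WEEK * (wn - wn_t)
    return delta_t_ls + a0 + a1 * dt

# Hypothetical broadcast values: 18 leap seconds, a few ns of phase
# offset, and a tiny frequency offset.
offset = gps_to_utc_offset(
    t_e=350_000, wn=2100,
    delta_t_ls=18, a0=3.2e-9, a1=1.0e-14,
    t_ot=319_488, wn_t=2100,
)
print(offset)  # 18 s plus a few nanoseconds of fine correction
```

Subtracting this offset from GPS time (modulo a day) yields the spec’s UTC estimate; the point is that “GPS time plus leap seconds” is only the zeroth-order term.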
And you are right that UTC is a spectacularly poor design for a reference timescale, hence the effort to get rid of leap seconds.
Along these lines, see David Reed’s memories of UDP “design”, where he notes that he and Steven Kent argued unsuccessfully for mandatory end-to-end encryption at the protocol layer circa 1977.
Ahhh nice, it actually addresses the UEFI “runtime services” (does it talk about all the ACPI horrors that Linux runs in the ACPI bytecode interpreter? Idk) as well as SMM/ME. Frankly, this level of complexity in platform firmware is terrifying, how can anyone have a hope of building a secure/trustworthy system with x86 if all this is in play?
The fundamental problem with USB-C is also seemingly its selling point: USB-C is a connector shape, not a bus. It’s impossible to communicate that intelligibly to the average consumer, so now people are expecting external GPUs (which run on Intel’s Thunderbolt bus) for their Nintendo Switch (which supports only USB 3 and DisplayPort external busses) because hey, the Switch has USB-C and the eGPU connects with USB-C, so it must work, right? And hey why can I charge with this port but not that port, they’re “exactly the same”?
This “one connector to rule them all, with opaque and hard to explain incompatibilities hidden behind them” movement seems like a very foolish consistency.
It’s not even a particularly good connector. This is anecdotal, of course, but I have been using USB Type-A connectors since around the year 2000. In that time not a single connector has physically failed for me. In the year that I’ve had a device with Type-C ports (current Macbook Pro), both ports have become loose enough that simply bumping the cable will cause the charging state to flap. The Type-A connector may only connect in one orientation but damn if it isn’t resilient.
It is much better, but it’s still quite delicate with the “tongue” in the device port and all. It’s also very easy to bend the metal sheeting around the USB-C plug by stepping on it etc.
The perfect connector has already been invented, and it’s the 3.5mm audio jack. It is:
Orientation-free
Positively-locking (not just friction-fit)
Sturdy
Durable
Every time someone announces a new connector and it’s not a cylindrical plug, I give up a little more on ever seeing a new connector introduced that’s not a fragile and/or obnoxious piece of crap.
Audio jacks are horrible from a durability perspective. I have had many plugs become bent and jacks damaged over the years, resulting in crossover or nothing playing at all. I have never had a USB cable fail on me because I stood up with it plugged in.
Not been my experience. I’ve never had either USB-A or 3.5mm audio fail. (Even if they are in practice fragile, it’s totally possible to reinforce the connection basically as much as you want, which is not true of micro USB or USB-C.) Micro USB, on the other hand, is quite fragile, and USB-C perpetuates its most fragile feature (the contact-loaded “tongue”—also, both of them unforgivably put the fragile feature on the device—i.e., expensive—side of the connection).
3.5mm connectors are not durable and are absolutely unfit for any sort of high-speed data.
They easily get bent and any sort of imperfection translates to small interruptions in the connection when the connector turns. If I – after my hearing’s been demolished by recurring ear infections, loud eurobeat, and gunshots – can notice those tiny interruptions while listening to music, a multigigabit SerDes PHY absolutely will too.
This. USB-A is the only type of USB connector that has never failed for me. All the B types (normal, mini, micro) and now C have failed for me in some situation (breaking off, getting wobbly, loose connections, etc.)
That said, Apple displays their iPhones in Apple Stores solely resting on their plug. That alone speaks for some sort of good reliability design on their ports. Plus the holes in devices don’t need some sort of “tongue” that might break off at some point - the Lightning plug itself doesn’t have any intricate holes or similar and is made (mostly) of a solid piece of metal.
As much as I despise Apple, I really love the feeling and robustness of the Lightning plug.
I’m having the same problem, the slightest bump will just get it off of charging mode. I’ve been listening to music a lot recently and it gets really annoying.
It’s impossible to communicate that intelligibly to the average consumer,
That’s an optimistic view of things. It’s not just “average consumer[s]” who’ll be affected by this; there will almost certainly be security issues originating from the Alternate Mode thing – because different protocols (like thunderbolt / displayport / PCIe / USB 3) have extremely different semantics and attack surfaces.
I don’t want a USB device of unknown provenance to be able to talk with my GPU and I certainly don’t want it to even think of speaking PCIe to me! It speaking USB is frankly, scary enough. What if it lies about its PCIe Requester ID and my PCIe switch is fooled? How scary and uncouth!
Another complication: making every port do everything is expensive, so you end up with fewer ports total – Thunderbolt in particular. Laptops with 4 USB-A, HDMI, DisplayPort, Ethernet, and power are easy to find. I doubt you’ll ever see a laptop with 8 full-featured USB-C ports.
NIHing everything but nevertheless making mistakes that had been understood and fixed decades ago – this is why many people have negative attitudes towards systemd. I’ve been using it for many years but still the insistence on reimplementing things in shitty and incomplete ways (journald is a shoddy half-baked reimplementation of sqlite, say) frustrates me to no end.
15K RPM drives (and probably laptop hard drives) are getting killed off ruthlessly by flash. Nobody will mourn 15K drives but I confess that the end of laptop hard drives will make me sad and nostalgic.
Google presented a paper at FAST16 about the possibility of fundamentally redesigning hard drives for a single target application – drives operated exclusively as part of a very large collection of disks (where individual errors are not such a big deal as in other applications) – in order to further reduce $/GB and increase IOPS: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44830.pdf .
Possible changes mentioned in the paper include:
New (non-compatible) physical form factor[s] in order to freely change the dimensions of the heads and platters
adding another actuator arm / voice coil with its own set of heads
accepting higher error rates and “flexible” capacity (a euphemism for “degrades over time”) in exchange for higher areal density, lower cost, and better latencies
exposing more lower-level details of the spinning rust to the host, such as host-managed retries and exposing APIs that let the host control when the drive schedules its internal management tasks
better profiling data (time spent seeking, time spent waiting for the disk to spin, time spent reading and processing data) for reads/writes
Caching improvements, such as ability to mark data as not to be cached (for streaming reads) or using PCIe to use the host’s memory for more cache
Read-ahead or read-behind once the head is settled costs nothing (there’s no seek involved!). If the host could annotate its read commands with its optional desires for nearby blocks, the hard drive could do some free read-ahead (if it was possible without delaying other queued commands).
better management of queuing – there’s a lot more detail on page 15 of that PDF about queuing/prioritisation/reordering, including the need for the drive’s command scheduler to be hard real-time and be aware of the current positioning of the heads and of the media. Fun stuff! I sorta wish I could be involved in making this sort of thing happen.
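To make the read-ahead point above concrete, here is a hypothetical sketch of an “annotated read” interface. None of these names come from the paper; they just illustrate the idea of a host attaching free-if-convenient hints that the drive honors only when nothing else is queued.

```python
# Hypothetical "annotated read" command: the mandatory extent is always
# serviced; the hinted nearby blocks are read only when the queue is
# idle, since read-ahead at a settled head costs no seek.
from dataclasses import dataclass, field

@dataclass
class ReadCommand:
    lba: int                  # first mandatory block
    length: int               # number of mandatory blocks
    optional_nearby: list[int] = field(default_factory=list)  # hints

def service(cmd: ReadCommand, queue_empty: bool) -> list[int]:
    """Return the blocks actually read: mandatory extent always,
    hinted blocks only when no other command is waiting."""
    blocks = list(range(cmd.lba, cmd.lba + cmd.length))
    if queue_empty:
        blocks += cmd.optional_nearby  # free read-ahead
    return blocks

cmd = ReadCommand(lba=1000, length=4, optional_nearby=[1004, 1005])
print(service(cmd, queue_empty=True))   # mandatory + hinted blocks
print(service(cmd, queue_empty=False))  # mandatory blocks only
```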
tl;dr there is a lot of room for improvement if you’re willing to throw tradition to the wind and focus on the single application (very large scale bulk storage) where spinning rust won’t get killed off by flash in a decade.
Is it literally rust in some sense, or is that a joke? The platters I’ve seen don’t look like rust, although they don’t look like anything else from the everyday world, either.
The magnetic layer (the part of the sandwich of platter materials/coatings that actually stores the data) used to indeed be iron oxide up to the 1980s but anything after that doesn’t have iron oxides – just a fancy cobalt alloy.
Nowadays it’s just a facetious term (unless you own vintage hard drives, in which case you actually are spinning literal rust).
AFAIK there is still a cost/density argument for spinning disks. That will go away as transistor sizes get smaller, but afaik we’re approaching the limit for how physically small transistors can be, so it may never truly go away.
With flash, AFAIK, shrinking things is hard because the number of electrons you can store decreases with size which, combined with the fact that electrons leak out over time, makes the electron-counting business even hairier.
Besides the “this is the browser’s job to display the top-level domain differently” or whatever comments, how difficult would it be for Let’s Encrypt to be given a list of commonly-phished websites and delay issuance (and notify the real paypal so they can take legal action or quash the domain registration)?
It’s a one line perl script to see if a domain name matches a ban list, but who decides what’s in the ban list? (Who decides who decides what’s in the ban list?)
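The easy half really is a few lines (sketched here in Python rather than perl; the ban list contents are illustrative). Note it can’t distinguish an imitation from the real thing, and it answers nothing about who curates the list:

```python
# Check whether any DNS label of a requested domain contains a banned
# brand name. BAN_LIST here is made up for illustration.
BAN_LIST = {"paypal", "appleid", "bankofamerica"}

def matches_ban_list(domain: str) -> bool:
    labels = domain.lower().rstrip(".").split(".")
    return any(banned in label for label in labels for banned in BAN_LIST)

print(matches_ban_list("paypal.com.secure-login.example"))  # True
print(matches_ban_list("example.org"))                      # False
```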
LE already uses the google safe browsing list, so using the inverse of it (the legitimate websites that are imitated by the entries on the safe browsing list) isn’t really superbly controversial.
Creating an audit log of all certificates that have been delayed/quashed due to this procedure (along with the legal entity responsible) seems also completely doable.
Thanks to certificate transparency, that is unnecessary. Paypal can watch the CT log and “take legal action or quash the domain registration” whenever they feel like it :)
While I actually agree that being a cop in the current social system is unethical (especially in the USA) and should be shunned, deflating that to just two words on a technically-minded community seems pointless.
Historically, the institution of police exists to protect property. Whether it was putting down riots by exploited workers in the UK or running slave patrols in the US, modern police forces around the world share these roots. Institutionally, a police force is the brutalizing arm of property’s enforcement, and only acts to resolve general social conflict as a secondary function. This is why police forces tend to be filled with politically right-wing individuals with an atrocious tendency towards domestic violence, who often exact violence on minorities and mentally ill individuals while failing to fulfill community needs in terms of domestic violence, gun violence, drug abuse, and sexual assault.
There are alternatives, but they require radically different community structuring than exist in western society. Sure, to you a “world without cops is unimaginable” makes sense, because the world (/society) that you live in couldn’t possibly govern itself, not in the state it’s in, socially and materially. However, that doesn’t make police a fact of nature, and the exploration of evolving community for self governance a waste of time.
I disagree with your argument in several respects.
The police don’t put any particular emphasis on enforcing property rights. The vast majority of police activity is just revenue-seeking via traffic law enforcement. After that are simple crimes like public intoxication and fighting. Most arrests don’t even lead to charges. “Civil asset forfeiture”, as it is euphemistically called, is literally just theft.
Institutionally, a police force is the brutalizing arm of property’s enforcement
This is wrong. The police force works for a government, not the abstract notion of property. The law usually requires the government to protect property rights to some degree, but this is orthogonal to the fundamental role of the police.
while failing to fulfill community needs in terms of domestic violence, gun violence, drug abuse, and sexual assault.
What, exactly, do you think police should do more of in these instances? Drug abuse we can help via e.g. clean needle programs, but that’s not up to the police.
To be clear, I think the police system in its current form is pretty shit and could be improved in a lot of ways, but your post just seems like directionless communist idealism rather than a coherent critique or idea for improvement.
What, exactly, do you think police should do more of in these instances? Drug abuse we can help via e.g. clean needle programs, but that’s not up to the police.
For drug use, police can help by not arresting (or searching) anyone for any drug-related crime (possession, purchase, sale, manufacture, transport, use, regardless of the drug type or quantity in question), with narrow exceptions like crimes of “dosing someone with drugs without their informed consent” or “driving while being impaired by drugs”.
It’s prohibition that’s the root cause of drug use being harmful – to both the people involved and also to the communities and society in which those people live.
I do think considerable police resources are put into defending the property of the wealthier part of society from the poorer part, and that this explains a good portion of the reason police forces exist and are well-funded. But, yeah, police are also not especially ideologically committed to enforcement of a philosophically grounded libertarian ideal of private property rights or anything. The rampant asset-forfeiture abuse you bring up is a good illustration of that, among others.
I think you could come up with an explanation for this situation that is more rather than less Marxist, though, relating to society being divided into classes, and the police being the hired muscle of one of its classes… i.e. they work for that class specifically, not for the abstract, theoretically equal-handed idea of private property. Although I’m pretty lefty, it’s also worth noting that there are libertarians spending quite a bit of time critiquing the current situation, as well. Folks like Radley Balko have been good in recent years on digging into how the police and the criminal-justice system fail to uphold the stated rights that people in lower socio-economic and minority racial positions are supposed to have.
Better late than never: the more hard-core libertarians, especially of the agorist variety, actually describe the society in class-divide terms. However, they draw the divide in a way that puts agents and workers of the government in the oppressors category, and the rest of the society in the oppressed.
Gave you back the karma that anon took bc nothing you said is itself wrong, but while I’m aware of market anarchism and have my own opinions on it (eg, the market cannot undo the contradictions inherent to the market), what sets SEK3’s agorism apart from Rothbardian market anarchism (which he appears to relate agorism to)?
SEK3 has described a full-fledged class system, taken further than Rothbard’s rulers vs ruled dichotomy. Moreover, SEK3 says salary job and corporations would not exist in Agora, but this I disagree with.
The police force works for a government, not the abstract notion of property. The law usually requires the government to protect property rights to some degree, but this is orthogonal to the fundamental role of the police.
This analysis I’ve conveyed (it’s not my own) doesn’t rely on individual actions of the police; instead, it’s predicated on the material obligations and systemic relationship of a police force to the state and its people. It also relies on an underlying analysis of the state under the capitalist mode of production: that the state exists to defend the material interests of the ruling class, not out of conspiracy or individual actions, but out of necessity and self-preservation. Under this analysis, the police are the domestic force of that state, protecting the material interests of that ruling class (primarily property, but also the commodities and capital that are predicated on the ownership of property… PS: my reading on this tidbit could be wrong! I leave those better read on political economy to correct me here). This enforcement makes itself visible in a multitude of ways, including general criminalization that predominantly targets the non-ruling classes, as well as policies that directly protect material interests.
What, exactly, do you think police should do more of in these instances? Drug abuse we can help via e.g. clean needle programs, but that’s not up to the police.
This specific critique asserts that structurally, police forces are unequipped to address those community problems. What tools do they have to improve the communities they occupy, besides criminalization?
…your post just seems like directionless communist idealism…
Cool hypothetical analysis, except that the actual evidence I pointed out strongly suggests that the police actually aren’t all that hot on property rights, as you claimed they were.
I don’t think any idealist ever claimed to be an idealist.
I’m sorry, I think I was unclear on this. I’m not discussing “property rights”, but “property as the material interests of the ruling class”. I don’t think I’d disagree with you that police aren’t so concerned about property rights (or human rights, in some absurd and obscene cases), especially given the bullshit that is civil forfeiture.
I think you missed the key word in the original post - “Historically”.
Think about the period where societies transitioned from not having a police force to having a police force. Who made that call? Why did they make it?
By my reading (feel free to dispute it), most police forces were initially formed by a ruling class because each maintaining private security for their assets was getting expensive.
An example (without backstory of course, just a quick timeline): a cop shoots a child and lies in his statement. Some days after the incident, a video surfaces showing that the gun did not fire accidentally. Co-workers also chipped in money to get him a TV-persona lawyer.
Now, would I be “unfair” if I said fuck cops based on that fact?
Based on the fact alone that his co-workers KNEW what happened and still decided to protect him, and that after ~8 years he has not yet served jail time, would I still be “unfair” to call him and his co-workers (where NO ONE EVER STOOD UP because, well, fuck everyone outside the “force”) a bunch of uncivilized pigs (because usual pigs are way more civilized to their community)?
Based on the fact that recently he said that he does not regret a thing, what stance would you have?
Now, would I be “unfair” if I said fuck cops based on that fact?
Yes.
Even if it’s 100% true as you said, maybe fuck those cops is justified. Fuck all cops absolutely isn’t. Stereotyping isn’t right no matter who you do it to. Going that way makes you no longer a principled objector to injustice, but a promotor of more tribal conflict. No thanks, we have quite enough of that already.
Fuck all cops absolutely isn’t. Stereotyping isn’t right no matter who you do it to.
If you followed what I said, though, you’d have seen that even then, no one took a position against them. So in this context, yes, fuck all of them is very appropriate.
I don’t know in what part of the world you live, but in many cases, police officers act like they own everything with higher officials backing them up.
Different experiences yield different points of view. If you had seen the equivalent of a police squad beating the shit out of 70-90 year old people while they protest their pension cuts, and NO ONE getting punished for this, you’d have the same view.
The above also applies to “tribal conflict” you mention. When you (not personally you :) ) fuck someone up completely, you have to consider Newton’s third law, which brilliantly applies to human nature in many cases: for every action, there is an equal and opposite reaction.
Alright, more plainly: go fly your ineffectual little banner where it won’t clutter up the place and set further precedent for flaccid, intellectually-light, worthless me-too-ism.
You don’t even differentiate between the different branches of law enforcement, the different units in a given department, the different counties and states and jurisdictions. Nah, gee whiz, it’s just “fuck cops o'clock”.
Isn’t it a bit of a double standard to protest for “reasonable levels of discourse”, and follow it up with a vague what-about-ism? I made my perspective very clear in the comment above here, why send this angry and intellectually bankrupt response?
Check the timestamps–your more articulate post didn’t exist when I wrote that reply.
The fact remains: you’d be better off posting materials on how to take direct action against the oppressors than to waste space here by posting “mmmm yeah fuck cops” or “some source I haven’t linked articulates this rather abstract political argument about police”. The problem with both of those is that they are divorced from reality, either because they aren’t actionable (unless you are specifically suggesting intercourse with law enforcement) or because they are too abstract (a critique on how cops further the interest of the ruling class, which is both obvious and useless if you aren’t in the ruling class).
I’d frankly prefer seeing people linking to relevant material and owning that, instead of hiding behind lame outbursts or navel-gazy philosophy–or, perhaps, if it isn’t so important that you want to oppose it with violence, quit bitching.
The middle ground–of both failing to oppose the supposed oppressors and failing to quietly endure them–just leads to noise in otherwise quiet and polite communities.
Check the timestamps–your more articulate post didn’t exist when I wrote that reply.
I’m sorry if I came across as trying to be misleading but I honestly meant this reply; not the comment that followed it.
you’d be better off posting materials on how to take direct action against the oppressors than to waste space here by posting “mmmm yeah fuck cops” or “some source I haven’t linked articulates this rather abstract political argument about police”
The issue is that praxis (regardless of the political camp you’re in), must be informed by your beliefs and understandings. I can’t just say “we should do such and such things,” without informing those actions with some sorts of understandings of the systems and situations I’m proposing to act upon.
Not to mention, I’m spending the time I can to write out honest and straightforward responses, but it won’t always be sufficient, and I’d also prefer not to just deflect discussion by stacking the decks with lengthy reads! However, a good introductory read on the relations I touched upon in the linked comments from above would be Wage Labor and Capital; and although I personally embrace a variety of strategies for making the future brighter, I personally agree most with Gilles Dauvé’s writings.
making wireless networks not suck kinda depends on managing airtime properly – this is why wifi sucks and LTE has thousands of pages of specs about a whole menagerie of control channels with heinous acronyms (that even the spec writers sometimes typo) that allocate who transmits what and when (and at what frequency).
Given what you’ve said, it’s surprising LTE works in practice, because I’d expect implementations to be buggy and screw everything up if the standard is hard to follow. Or are they abnormally well-tested in practice or something? :)
The standard is hard to follow only in that there’s plenty of moving parts and many different control channels because shared resources – such as airtime and radio bandwidth (and backhaul/network bandwidth) need to be allocated and shared precisely among many different devices.
If you want to avoid getting buggy implementations you can make formal models for all the layers – the RF layers, the bits that get modulated onto RF, and the protocols that get run over those bits. Formal models let you write software that can output valid protocol transcripts (that you can send to a radio) or validate a transcript (that you received from a radio) – all to make sure that every device is sending/receiving the right bits modulated the right way at the right times/frequencies.
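A toy version of the transcript-validation idea, to make it concrete: model the protocol as an explicit state machine, then check a sequence of observed messages against it. The three-message protocol below is made up; a real LTE model would cover the actual RRC/MAC/PHY procedures.

```python
# Toy formal model: allowed (state, message) -> next_state transitions.
MODEL = {
    ("idle", "conn_request"): "connecting",
    ("connecting", "conn_setup"): "connected",
    ("connected", "release"): "idle",
}

def transcript_is_valid(messages: list[str]) -> bool:
    """Replay a transcript against the model; reject any message that
    is not permitted in the current state."""
    state = "idle"
    for msg in messages:
        nxt = MODEL.get((state, msg))
        if nxt is None:
            return False  # message not allowed in this state
        state = nxt
    return True

print(transcript_is_valid(["conn_request", "conn_setup", "release"]))  # True
print(transcript_is_valid(["conn_setup"]))                             # False
```

The same model, run the other direction, enumerates valid transcripts to transmit – which is what makes it usable both for generation and for validation/fuzzing.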
Once you somehow obtain software that implements a formal protocol model, you (or someone who makes or certifies LTE equipment) can verify/fuzz code that runs on UEs and eNBs – both when it’s just harmless source code and also (if you have some SDRs) when it’s transmitting real RF in real devices in some test lab. So yes, implementations are indeed well-tested (and indeed, are required to be tested before they can be sold and be allowed on real LTE networks)
Neat project. I posted the same idea years ago on Schneier’s blog for subversion concerns. Idea was that physical possession of one’s hardware usually leads to compromise. Most stuff on a computer doesn’t have to be trusted either. High-assurance security also teaches to make trusted part as tiny & reusable as possible. I think I was screwing around with ARTIGO’s, smartcards or old-school game cartridges when I got the idea of a PC Card or ARTIGO-like module featuring secure processor, RAM, and I/O mediation. This would plug into desktops, servers, monitors, laptops if low-power, and so on. Ecosystem could show up like with iPod accessories probably starting in China where it’s cheaper. Later, I noted a private company came up with something similar and probably had patents on it so I backed off temporarily. Can’t recall its name but brochure had one or two in there that looked just like my marketing.
Projects like yours are a nice testing ground for this concept that I keep shelved but not forgotten. Interesting to see which decisions will work in market and which won’t. Important before an expensive, security-oriented product is attempted. The project is nice except for one, little problem it shares with others: the ARM processor. ARM Inc is about as opposite of protecting freedom as they can be. MIPS isn’t much better. Both have sued open-source and startup competition for patent infringement. “Open” POWER is an unknown. RISC-V isn’t on market yet. The only production-grade FOSS ISA implementation right now is Cobham Gaisler’s Leon3 SPARC CPUs. They’re GPL’d, have fabbed parts at who knows what price, SPARC ISA is open, Open Firmware exists, & products only need sub-$100 fee for trademark.
Note: OpenSparc T1 and T2 processors were GPL’d, too. FOSS workstations, servers and embedded should be all over these in terms of getting them fabbed and in real systems. They stay ignored for x86 and ARM mainly even if not performance-critical.
totally cool man. i’m familiar with gaisler research stuff, i looked at it years ago. ooooOoooo nice, nice, nice: LEON4 goes up to 1.7ghz in 32nm, is 64-bit and is SMP ready right now. niiiiice. oo, that’s really exciting. and there’s a simplified developer board that runs at 150mhz (good enough for testing, i bet it’s like 180nm or something)
having found the GPLGPU and the MIAOW project i think we have enough to put something together that would kick ass.
awww darnit, LEON4 is still only 32-bit. aw poop :)
the crucial bit is the SMP support, to be able to combine those…. opensparc… oracle… we’re not a huuge fan of oracle.. hmm interesting: just underneath the popup that i refuse to click which prevents and prohibits access to their web site, i can just about make out that the opensparc engine is GPLv2…. mostly. haha i bet they were expecting that to be a roadblock to prevent commercial SoCs being made around it…. :)
Probably haha. Yeah, I avoid Oracle wherever possible too. Just that these are supposedly GPL v2. Either a last resort for an OSS CPU or a top contender if you need one with performance. A T2 on 28nm would probably be no joke.
Are there any plans to make EOMA68 cards with a lot more than 2GB of RAM? I like the EOMA68 idea but 2GB of RAM is painfully, painfully small for the sorts of things I do (like “have lots of tabs open in chromium” or “compile stuff with ghc”) – it’s mostly tolerable on a recentish i5 laptop with 8GB of memory but I cannot find the masochism within me to buy a computer with 2GB of memory and use it like I use my laptop.
I would utterly love a non-horribly-expensive AArch64 machine with a comfy amount of memory (like, say, 8 or 16GB) and some SATA/SAS ports – if you can make that happen or if I can help make that happen I am willing to contribute my time and money.
I really do want some decent aarch64 hardware that isn’t violently expensive and that i wouldn’t mind using as my primary machine, but the situation is…frankly bleak.
hiya zkms, yes there are… but the SoC fabless semi companies have to actually come up with the goods… or we simply have to raise between $5m and $10m and get one made. i replied on that post for you, to explain a bit about what’s involved. 2GB RAM is the max you’ll ever likely see on memory-mapped 32-bit processors: they can only address 4GB total, and peripherals, registers and I/O eat into that address map as it is!
we’ll get there. the project’s got another 10 years ahead of it at least.
nods – it’s weird that there aren’t any available SoCs that use aarch64 (or 32 bit SoCs that support LPAE) and expose enough address lines to connect a reasonable amount of RAM, tbh
I hope these questions aren’t too basic, but I’m not familiar with small ARM computers like this and I couldn’t find the info on the Crowdsupply page or updates:
1) When you say Linux 3.4 is supported, does that mean just 3.4 or 3.4 and all later versions? I saw in one update you mentioned 4.7 (I think) working but crashing frequently… What does future support likely look like: i.e. is everything getting into the mainline kernels and do you expect future versions to work even better, or should we expect to stay on 3.4 forever?
2) How close is the environment to “stock” distributions? I.e. when you say it has “Debian” on it, does that really mean it’s using totally standard Debian packages, tracking the official repositories, getting all the security updates from the Debian Security team, etc? Or is it more of a custom Debian-based environment tweaked for this hardware specifically? If the latter, how much does it differ from base Debian and is there anyone actively maintaining/updating it for the foreseeable future?
3) What does the installation/update procedure look like; is it as simple as on an x86 desktop where I’d just grab a bootable USB installer?
(1) no, it’s precisely and specifically the 3.4.104+ kernel maintained by the sunxi community. this kernel has support for dual-screens, stable NAND flash (albeit odd and quirky), accelerated 2D GPU provision, hardware-accelerated 1080p60 video playback/encode provision and much more. it’s a stable continuation of what allwinner released. i’m currently bisecting git tags on linux mainline; so far i have: v3.4 works, v3.15 works, v4.0 works, v4.2 lots of segfaults, v4.4 failed, v4.7 failed. so it’s a work-in-progress to find at least one mainline stable kernel.
(2) yes completely “normal” - exception being the kernel - there’s a huge active community behind the A20 but i will not be “holding anybody’s hand” - you’ll have to take responsibility amongst yourselves as i am working on delivering hardware to people and, as i’m only one person, i simply don’t have time. i’m anticipating that people will help each other out on the mailing list.
(3) sigh the standard process should be to have an initrd installer (debian-installer netboot) but that’s actually too complex for developers to cope with, so instead what they do is create “pre-built” images. i REALLY don’t like this practice but for convenience i’m “going with the flow” for now.
Talking about “overdiagnosis” in a vacuum, without talking about the devastating costs of underdiagnosis, is horrible practice (especially given how much ADHD is underdiagnosed in women). Please read the “The Impact of ADHD During Adulthood” section in that article. I’ll wait for you.
Self-medication with effective medication isn’t even really possible because amphetamines aren’t over-the-counter, so diagnosis is kinda a necessary condition to get access to effective meds.
The whole “overdiagnosis” meme leads to parents not being OK with the idea of their children being diagnosed with ADHD or *shudder* being on effective medication like adderall or ritalin, or to people thinking/internalizing the idea that they can’t ~*~really~*~ have ADHD and that they’re just “lazy” or “flaky” or “apathetic” or whatever other shitty terms are used for people with executive dysfunction.
It also makes doctors more suspicious of people seeking treatment (after all, if it’s overdiagnosed it must be rare, so some of the people I see as a doctor have to be faking it!), which actually causes harm to people who need access to medical care (finding a doctor who’s ok prescribing can fucking suck when you’re ADHD and have run out of meds).
I’m super critical of psychiatry, and there definitely are people who are diagnosed with it who don’t have it / don’t benefit from medication, but the “overdiagnosis” moral panic causes real harm to people who actually end up needing medical access (often to controlled substances which are impossible to legally get without the appropriate diagnosis).
the most spiteful thing about this isn’t just that it destroys usability (especially if you need to use a keyboard and can’t use a mouse, or use a screen reader) and makes websites unusable and slow – it’s that there are many applications where javascript in web pages is actually useful.
for example, checking the checksum on a credit card number (so you don’t submit a typo’d credit card number and have to re-enter everything on the page). (also, afaict you literally never need to click on the “this is a visa” radio button; the card type can be identified from the number)
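That checksum is the Luhn algorithm, and both checks fit in a few lines. A sketch in Python for brevity (the prefix table is a simplified illustration, not a complete IIN/BIN list):

```python
# Luhn checksum: double every second digit from the right, subtract 9
# from any doubled digit above 9, and require the total to be ≡ 0 mod 10.
def luhn_ok(number: str) -> bool:
    total = 0
    for i, c in enumerate(reversed(number)):
        d = int(c)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Very rough network guess from the leading digits (illustrative only).
def card_network(number: str) -> str:
    if number.startswith("4"):
        return "visa"
    if number.startswith(("51", "52", "53", "54", "55")):
        return "mastercard"
    if number.startswith(("34", "37")):
        return "amex"
    return "unknown"
```

For instance, the standard test number 4111111111111111 passes the checksum and is identified as visa from its first digit, with no radio button required.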
also it was cool when twitter let you look at threaded tweets on a timeline/userpage without having to change URL or open up a horrid cringey lightbox, it was fast and useful and wasn’t too onerous, unlike the new lightboxes.
i hope whoever invented those lightboxes for tweets has to use twitter on a computer that doesn’t have the latest CPU and doesn’t have 24 GB of memory and isn’t connected with a gigabit link to twitter.com. it is sufficient punishment.
The purpose elucidated by their letter justifying the action (linked from the above, here for convenience) was that it allowed the phone’s owners (not the shooters - it was a work phone) to access the existing backups. The letter does not raise the possibility of a subpoena to Apple.
That letter makes it sound like they reset the iCloud password to get at the backups without judicial oversight. I suppose if the employer / owner of the AppleId email was cooperating, that’s not too bad? Still, an interesting precedent.
Yeah, they did nothing wrong in asking to have it reset, but it leaves them in the position of having to argue that they’re incompetent, because the alternative is that they’re completely disingenuous (which is not actually proven as a matter of law, after all).
I know they should be using actual PGP signatures or whatever instead of just hashes; but:
Choosing to use MD5 at all in 2016 is a sign of negligence and incompetence when it comes to crypto.
There’s no excuse for choosing to use MD5 or SHA1 for anything today.
There’s no excuse for coming up with probably-incorrect handwaving arguments about how “MD5 is broken, but not broken in how we use it” or “we also provide sha1 hashes” instead of replacing it with an actually secure hash (there are faster and more secure hashes out there like blake2b).
Choosing to use MD5 or SHA1 (or other such hallmarks of bad 90s-era civilian cryptography) is as cringeworthy and negligent as the safety engineering in 1960s American cars that ended with “padded dashboards”, 2-point seatbelts, and “recessed hub steering wheels”.
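For what it’s worth, switching really is a one-liner in most environments. Python’s standard library, for instance, has shipped BLAKE2 since 3.6 (the payload here is a made-up placeholder):

```python
# Computing a BLAKE2b digest with Python's standard library (3.6+).
# Drop-in replacement for hashlib.md5/hashlib.sha1, no extra dependencies.
import hashlib

data = b"some-release-tarball-bytes"  # placeholder payload
digest = hashlib.blake2b(data).hexdigest()

# BLAKE2b's default digest is 64 bytes (512 bits), and on 64-bit
# hardware it is typically faster than MD5, not just stronger.
assert len(digest) == 128  # 64 bytes rendered as hex
```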
“Disseminate TAI (along with the current leap offset) and implement the leap seconds at the point of use” might be a “programmers’ falsehood” but it’s also what 3 out of 4 GNSS systems actually do, so it has something going for it.
The fact that UTC (and not TAI-like timescales) is blessed for dissemination and traceability is downstream of ITU’s request that only UTC should be broadcast; not because of any technical difficulty with disseminating/processing traceable TAI-like timescales:
The time that comes out of an atomic clock looks like TAI; adding the leap seconds is something that has to come afterwards, and a system designer gets to choose when. Leap seconds don’t factor into calculations involving the precise timing of moving objects, whether the objects are airplanes, the angles/phases of generators in a power grid, particles in an accelerator, RF signals, etc. Unless you’re OK with grievously anomalous results whenever there’s a leap second, you want a timescale that looks like TAI, not one that looks like UTC. Why bake the leap seconds into your timescale early on if you’re going to need to unbake them every time you calculate with times?
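A small sketch of that “unbaking” cost in Python (the leap table here is reduced to the single leap second inserted at the end of 2016, for illustration):

```python
# Naive UTC arithmetic (POSIX-style, which pretends leap seconds don't
# exist) silently loses the leap second at 2016-12-31 23:59:60 UTC.
from datetime import datetime, timezone

t0 = datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
t1 = datetime(2017, 1, 1, 0, 0, 0, tzinfo=timezone.utc)

naive = (t1 - t0).total_seconds()  # 1.0 - but 2 SI seconds really elapsed

# On a TAI-like timescale, elapsed time is a plain subtraction.
# To recover it from UTC you must consult a leap-second table:
leaps_between = 1  # inserted leap seconds between t0 and t1, per IERS Bulletin C
true_elapsed = naive + leaps_between  # 2.0
```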
The designers of GPS, BeiDou, and Galileo all wisely flout this ITU recommendation: their constellations broadcast a TAI-like timescale (alongside the current leap second offset). The designers of PTP also flout this recommendation – by default, the timescale for PTP packets is TAI (usually derived from a GNSS receiver), not UTC. Should you want to reconstitute a UTC time, there is a currentUtcOffset field in PTP Announce messages, whose value you can subtract from a TAI time. This “disseminate UTC only” ITU recommendation has been at fundamental odds with real-world PNT systems ever since GPS was known as “NAVSTAR GPS”.
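A minimal sketch of that reconstitution (IEEE 1588 defines currentUtcOffset as TAI − UTC; 37 s as of 2017):

```python
# Recovering UTC from a PTP (TAI-based) time.
# currentUtcOffset = TAI - UTC, so UTC = TAI - currentUtcOffset.

CURRENT_UTC_OFFSET = 37  # seconds; carried in PTP Announce messages

def ptp_to_utc(tai_seconds: float, utc_offset: int = CURRENT_UTC_OFFSET) -> float:
    """Convert seconds on the PTP (TAI) timescale to UTC-aligned seconds."""
    return tai_seconds - utc_offset
```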
Except GLONASS does disseminate UTC, including leap seconds.
If you want your time signal to be broadly useful for celestial navigation, UTC is the way to go as it’s (for now) guaranteed to be within 1s of UT1. I believe that’s where the ITU’s recommendation comes from. That said, it’s probably time for this usage application to take a step back compared to the broader issues caused by leap seconds.
You aren’t really arguing against what I said, because my “falsehood” did not talk about disseminating TAI along with the UTC offset (that isn’t “just” TAI). Those paragraphs were really an introduction to the next section where I explain that systems can’t avoid working with UTC. And GPS and PTP do not avoid working with UTC: as you said, they tackle its awkwardness head-on.
The way GPS handles UTC is even more complicated, though. Take a look at the GPS interface specification, in particular IS-GPS-200 section 20.3.3.5.2.4 (sic!) where it specifies that UTC is not “just” GPS time plus the leap second offset: there are also parameters A_0 and A_1 that describe a more fine-grained rate and phase adjustment. Section 3.3.4 says more about how GPS time relates to UTC:
This is (basically) related to the fact that atomic clocks need to be adjusted to tick at the same rate as UTC owing to various effects, including special and general relativity, e.g. NIST and the GPS operations centre are a mile high in Colorado, so their clocks tick at a different rate to the USNO in Washington DC, and a different rate from the clocks in the satellites whizzing around in orbit.
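As a rough sketch of that fine-grained correction (parameter names follow IS-GPS-200, but treat this as illustrative, not an implementation of the spec):

```python
# First-order GPS-to-UTC correction from the broadcast UTC parameters
# (IS-GPS-200 section 20.3.3.5.2.4): the leap count delta_t_ls plus a
# fine phase/rate polynomial A0 + A1 * (time elapsed since the
# reference epoch of the broadcast parameters).
SECONDS_PER_WEEK = 604_800

def gps_to_utc_offset(t_e, wn, delta_t_ls, a0, a1, t_ot, wn_t):
    """t_e: GPS time of week (s); wn: GPS week number;
    t_ot, wn_t: reference epoch of the broadcast UTC parameters.
    Returns the seconds to subtract from GPS time to obtain UTC."""
    dt = t_e - t_ot + SECONDS_PER_WEEK * (wn - wn_t)
    return delta_t_ls + a0 + a1 * dt
```

With A0 and A1 both zero this degenerates to the “GPS time minus leap count” picture; the polynomial terms are exactly the sub-microsecond rate/phase adjustment the comment above describes.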
And you are right that UTC is a spectacularly poor design for a reference timescale, hence the effort to get rid of leap seconds.
my website’s https://superbaud.org/ and i’m curious what people have to say and/or offer as suggestions since i’m new to doing this sort of thing
Nice, simple, and clean. Maybe “Home” is a bit empty; I would make either “About” or “Blog” the landing page instead.
Along these lines, see David Reed’s memories of UDP “design”, where he notes that he and Stephen Kent argued unsuccessfully for mandatory end-to-end encryption at the protocol layer circa 1977.
also the “bubba” and “skeeter” TCP options; a 1991-era proposal for opportunistic encryption of TCP connections (https://simson.net/thesis/pki2.pdf, https://www.ietf.org/mail-archive/web/tcpm/current/msg05424.html, http://mailman.postel.org/pipermail/internet-history/2001-November/000073.html)
Ahhh nice, it actually addresses the UEFI “runtime services” (does it talk about all the ACPI horrors that Linux runs in the ACPI bytecode interpreter? Idk) as well as SMM/ME. Frankly, this level of complexity in platform firmware is terrifying, how can anyone have a hope of building a secure/trustworthy system with x86 if all this is in play?
The fundamental problem with USB-C is also seemingly its selling point: USB-C is a connector shape, not a bus. It’s impossible to communicate that intelligibly to the average consumer, so now people are expecting external GPUs (which run on Intel’s Thunderbolt bus) for their Nintendo Switch (which supports only USB 3 and DisplayPort external busses) because hey, the Switch has USB-C and the eGPU connects with USB-C, so it must work, right? And hey why can I charge with this port but not that port, they’re “exactly the same”?
This “one connector to rule them all, with opaque and hard to explain incompatibilities hidden behind them” movement seems like a very foolish consistency.
It’s not even a particularly good connector. This is anecdotal, of course, but I have been using USB Type-A connectors since around the year 2000. In that time not a single connector has physically failed for me. In the year that I’ve had a device with Type-C ports (current Macbook Pro), both ports have become loose enough that simply bumping the cable will cause the charging state to flap. The Type-A connector may only connect in one orientation but damn if it isn’t resilient.
Might be crappy hardware. My phone and Thinkpad have been holding up just fine. The USB C seems a lot more robust than the micro b.
It is much better, but it’s still quite delicate with the “tongue” in the device port and all. It’s also very easy to bend the metal sheeting around the USB-C plug by stepping on it etc.
The perfect connector has already been invented, and it’s the 3.5mm audio jack. It is:
Every time someone announces a new connector and it’s not a cylindrical plug, I give up a little more on ever seeing a new connector introduced that’s not a fragile and/or obnoxious piece of crap.
Audio jacks are horrible from a durability perspective. I have had many plugs become bent and jacks damaged over the years, resulting in crossover or nothing playing at all. I have never had a USB cable fail on me because I stood up with it plugged in.
Not been my experience. I’ve never had either USB-A or 3.5mm audio fail. (Even if they are in practice fragile, it’s totally possible to reinforce the connection basically as much as you want, which is not true of micro USB or USB-C.) Micro USB, on the other hand, is quite fragile, and USB-C perpetuates its most fragile feature (the contact-loaded “tongue”—also, both of them unforgivably put the fragile feature on the device—i.e., expensive—side of the connection).
You can’t feasibly fit enough pins for high-bandwidth data into a TR(RRRR…)S plug.
You could potentially go optical with a cylindrical plug, I suppose.
Until the cable breaks because it gets squished in your bag.
3.5mm connectors are not durable and are absolutely unfit for any sort of high-speed data.
They easily get bent and any sort of imperfection translates to small interruptions in the connection when the connector turns. If I – after my hearing’s been demolished by recurring ear infections, loud eurobeat, and gunshots – can notice those tiny interruptions while listening to music, a multigigabit SerDes PHY absolutely will too.
This. USB-A is the only type of usb connector that never failed for me. All B types (Normal, Mini, Micro) and now C failed for me in some situation (breaking off, getting wobbly, loose connections, etc.)
That said, Apple displays their iPhones in Apple Stores solely resting on their plug. That alone speaks for some sort of good reliability design on their ports. Plus the holes in devices don’t need some sort of “tongue” that might break off at some point - the Lightning plug itself doesn’t have any intricate holes or similar and is made (mostly) of a solid piece of metal.
As much as I despise Apple, I really love the feeling and robustness of the Lightning plug.
I’m having the same problem, the slightest bump will just get it off of charging mode. I’ve been listening to music a lot recently and it gets really annoying.
Have you tried to clean the port you are using for charging?
I have noticed that Type C seems to suffer a lot more from lint in the ports than type A
That’s an optimistic view of things. It’s not just “average consumer[s]” who’ll be affected by this; there will almost certainly be security issues originating from the Alternate Mode thing – because different protocols (like thunderbolt / displayport / PCIe / USB 3) have extremely different semantics and attack surfaces.
It’s an understandable thing to do, given how “every data link standard converges to serial point-to-point links connected in a tiered-star topology and transporting packets”, and there’s indeed lots in common between all these standards and their PHYs and cable preferences; but melding them all into one connector is a bit dangerous.
I don’t want a USB device of unknown provenance to be able to talk with my GPU and I certainly don’t want it to even think of speaking PCIe to me! It speaking USB is frankly, scary enough. What if it lies about its PCIe Requester ID and my PCIe switch is fooled? How scary and uncouth!
Another complication: making every port do everything is expensive, so you end up with fewer ports total. Thunderbolt in particular. Laptops with 4 USB-A ports, HDMI, DisplayPort, Ethernet, and power are easy to find. I doubt you’ll ever see a laptop with 8 full-featured USB-C ports.
I wonder if Lenovo will also go retro on the spyware. Maybe Reader Rabbit or WeatherBug.
I’d chip in a few satoshis to see a modern-day BonziBuddy.
Well, they probably ship with Windows, and that includes Cortana
With HDR and lens flare.
I simply insist on it being “Back Orifice”.
There’s a better way of solving the certificate revocation problem, CRLite: http://www.ccs.neu.edu/home/cbw/static/pdf/larisch-oakland17.pdf
I wonder what’s inside – is it spinning rust hard drives or is it flash?
God, it’s depressing seeing something I wrote in there and realising how little I’ve been able to get done since then :\
NIHing everything but nevertheless making mistakes that had been understood and fixed decades ago – this is why many people have negative attitudes towards systemd. I’ve been using it for many years but still the insistence on reimplementing things in shitty and incomplete ways (journald is a shoddy half-baked reimplementation of sqlite, say) frustrates me to no end.
15K RPM drives (and probably laptop hard drives) are getting killed off ruthlessly by flash. Nobody will mourn 15K drives but I confess that the end of laptop hard drives will make me sad and nostalgic.
Spinning rust has a dim future outside the data centre, but when $/GB matters, it reigns supreme, and that’s why in 2017 the state of the art still involves literal tubs of hard drives: https://code.facebook.com/posts/1869788206569924/introducing-bryce-canyon-our-next-generation-storage-platform/
Google presented a paper at FAST16 exploring fundamental redesigns of hard drives, specifically targeting drives that are exclusively operated as part of a very large collection of disks (where individual errors are not such a big deal as in other applications), in order to further reduce $/GB and increase IOPS: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/44830.pdf .
Possible changes mentioned in the paper include:
New (non-compatible) physical form factor[s] in order to freely change the dimensions of the heads and platters
adding another actuator arm / voice coil with its own set of heads
accepting higher error rates and “flexible” (this is a euphemism for “degrades over time”) capacity in exchange for higher areal density, lower cost, and better latencies
exposing more lower-level details of the spinning rust to the host, such as host-managed retries and exposing APIs that let the host control when the drive schedules its internal management tasks
better profiling data (time spent seeking, time spent waiting for the disk to spin, time spent reading and processing data) for reads/writes
Caching improvements, such as ability to mark data as not to be cached (for streaming reads) or using PCIe to use the host’s memory for more cache
Read-ahead or read-behind once the head is settled costs nothing (there’s no seek involved!). If the host could annotate its read commands with its optional desires for nearby blocks, the hard drive could do some free read-ahead (if it was possible without delaying other queued commands).
better management of queuing – there’s a lot more detail on page 15 of that PDF about queuing/prioritisation/reordering, including the need for the drive’s command scheduler to be hard real-time and be aware of the current positioning of the heads and of the media. Fun stuff! I sorta wish I could be involved in making this sort of thing happen.
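The host-annotated read-ahead idea could be sketched like this (purely hypothetical data structures; no such interface exists in any real drive protocol, and all names here are invented):

```python
# Hypothetical sketch of the paper's annotated-read idea: the host marks
# nearby blocks as "nice to have", and the drive services them only when
# the head is already settled there and no queued command would be delayed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AnnotatedRead:
    lba: int                 # block the host actually needs
    length: int              # in blocks
    # free-if-convenient LBAs: served opportunistically, never guaranteed
    opportunistic: List[int] = field(default_factory=list)

cmd = AnnotatedRead(lba=1_000_000, length=128,
                    opportunistic=[999_872, 1_000_128])
```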
tl;dr there is a lot of room for improvement if you’re willing to throw tradition to the wind and focus on the single application (very large scale bulk storage) where spinning rust won’t get killed off by flash in a decade.
http://www.ewh.ieee.org/r6/scv/mag/MtgSum/Meeting2010_10_Presentation.pdf is a fun set of technically-focused slides about future reading/writing methodologies and is very much worth a read. Also TDMR is literally just MIMO for hard drives.
Is it literally rust in some sense, or is that a joke? The platters I’ve seen don’t look like rust, although they don’t look like anything else from the everyday world, either.
The magnetic layer (the part of the sandwich of platter materials/coatings that actually stores the data) used to indeed be iron oxide up to the 1980s but anything after that doesn’t have iron oxides – just a fancy cobalt alloy.
Nowadays it’s just a facetious term (unless you own vintage hard drives, in which case you actually are spinning literal rust).
AFAIK there is still a cost/density argument for spinning disks. That will go away as transistor sizes get smaller, but we’re approaching the limit for how physically small transistors can be, so it may never truly go away.
There’s hardcore lithography involved in making the read/write heads of hard drives (http://life.lithoguru.com/?p=249) but making the platters doesn’t involve doing actual lithography (which helps hard drives stay cost-effective). Modern hard drives have bit cells a little larger than those of flash (it took until 2015 for flash bit cells to get smaller than those on spinning rust! http://www.digitalpreservation.gov/meetings/documents/storage14/Fontana_Volumetric%20Density%20Trends%20for%20Storage%20Components%20--%20LOC%2009222014.pdf).
With flash, AFAIK, shrinking things is hard because the number of electrons you can store decreases with size which, combined with the fact that electrons leak out over time, makes the electron-counting business even hairier.
Besides the “this is the browser’s job to display the top-level domain differently” or whatever comments, how difficult would it be for Let’s Encrypt to be given a list of commonly-phished websites and delay issuance (and notify the real paypal so they can take legal action or quash the domain registration)?
It’s a one line perl script to see if a domain name matches a ban list, but who decides what’s in the ban list? (Who decides who decides what’s in the ban list?)
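The mechanical check really is trivial; something like this (Python rather than perl, with hypothetical list entries, since the list contents and their governance are the actual problem):

```python
# A toy domain ban-list check. The matching is easy; deciding what goes
# in BAN_LIST (and who decides who decides) is the hard part.
BAN_LIST = {"paypal", "apple", "google"}  # hypothetical entries

def matches_ban_list(domain: str) -> bool:
    return any(brand in label
               for label in domain.lower().split(".")
               for brand in BAN_LIST)
```

So matches_ban_list("paypal-secure-login.com") flags the domain, while an ordinary domain passes; everything contentious lives in the set literal, not the code.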
LE already uses the google safe browsing list, so using the inverse of it (the legitimate websites that are imitated by the entries on the safe browsing list) isn’t really superbly controversial.
Creating an audit log of all certificates that have been delayed/quashed due to this procedure (along with the legal entity responsible) seems also completely doable.
Thanks to certificate transparency, that is unnecessary. Paypal can watch the CT log and “take legal action or quash the domain registration” whenever they feel like it :)
While I actually agree that being a cop in the current social system is unethical (especially in the USA) and should be shunned, deflating that to just two words on a technically-minded community seems pointless.
Historically, the institution of police exists to protect property. Whether suppressing riots and exploited workers in the UK or running slave patrols in the US, modern police forces around the world share these roots. Institutionally, a police force is the brutalizing arm of property’s enforcement, and it acts to resolve general social conflict only as a secondary function. This is why police forces tend to be filled with politically right-wing individuals with an atrocious tendency towards domestic violence, and why they often exact violence on minorities and mentally ill individuals while failing to meet community needs around domestic violence, gun violence, drug abuse, and sexual assault.
There are alternatives, but they require radically different community structures than exist in western society. Sure, to you “a world without cops is unimaginable” makes sense, because the world (/society) that you live in couldn’t possibly govern itself, not in the state it’s in socially and materially. However, that doesn’t make police a fact of nature, nor does it make exploring how communities might evolve toward self-governance a waste of time.
I disagree with your argument in several respects.
The police don’t put any particular emphasis on enforcing property rights. The vast majority of police activity is just revenue-seeking via traffic law enforcement. After that come simple crimes like public intoxication and fighting. Most arrests don’t even lead to charges. “Civil asset forfeiture”, as it is euphemistically called, is literally just theft.
This is wrong. The police force works for a government, not the abstract notion of property. The law usually requires the government to protect property rights to some degree, but this is orthogonal to the fundamental role of the police.
What, exactly, do you think police should do more of in these instances? Drug abuse we can help via e.g. clean needle programs, but that’s not up to the police.
To be clear, I think the police system in its current form is pretty shit and could be improved in a lot of ways, but your post just seems like directionless communist idealism rather than a coherent critique or idea for improvement.
For drug use, police can help by not arresting (or searching) anyone for any drug-related crime (possession, purchase, sale, manufacture, transport, use, regardless of the drug type or quantity in question), with narrow exceptions like crimes of “dosing someone with drugs without their informed consent” or “driving while being impaired by drugs”.
It’s prohibition that’s the root cause of drug use being harmful – to both the people involved and also to the communities and society in which those people live.
I do think considerable police resources are put into defending the property of the wealthier part of society from the poorer part, and that this explains a good portion of the reason police forces exist and are well-funded. But, yeah, police are also not especially ideologically committed to enforcement of a philosophically grounded libertarian ideal of private property rights or anything. The rampant asset-forfeiture abuse you bring up is a good illustration of that, among others.
I think you could come up with an explanation for this situation that is more rather than less Marxist, though, relating to society being divided into classes, and the police being the hired muscle of one of its classes… i.e. they work for that class specifically, not for the abstract, theoretically equal-handed idea of private property. Although I’m pretty lefty, it’s also worth noting that there are libertarians spending quite a bit of time critiquing the current situation, as well. Folks like Radley Balko have been good in recent years on digging into how the police and the criminal-justice system fail to uphold the stated rights that people in lower socio-economic and minority racial positions are supposed to have.
Better late than never: the more hard-core libertarians, especially of the agorist variety, actually describe the society in class-divide terms. However, they draw the divide in a way that puts agents and workers of the government in the oppressors category, and the rest of the society in the oppressed.
Dear downvoter, have you read SEK3 yet?
Gave you back the karma that anon took bc nothing you said is itself wrong, but while I’m aware of market anarchism and have my own opinions on it (eg, the market cannot undo the contradictions inherent to the market), what sets SEK3’s agorism apart from Rothbardian market anarchism (which he appears to relate agorism to)?
SEK3 has described a full-fledged class system, taken further than Rothbard’s rulers vs ruled dichotomy. Moreover, SEK3 says salary job and corporations would not exist in Agora, but this I disagree with.
Such as?
The accumulation of capital causes over-accumulation (resulting in economic crises) and naturally tends to centralize capital
EDIT: More clear wording and also David Harvey explains the contradictions of the market really well here
This analysis I’ve conveyed (it’s not my own) doesn’t rely on individual actions of the police; instead, it’s predicated on the material obligations and systemic relationship of a police force to the state and its people. It also relies on an underlying analysis of the state under the capitalist mode of production, which proposes that this state exists to defend the material interests of the ruling class, not out of conspiracy or individual actions, but out of necessity and self-preservation. Under this analysis, the police are the domestic force of that state, protecting the material interests of that ruling class (primarily property, but also, given that commodities and capital are predicated on the ownership of property… PS: my reading on this tidbit could be wrong! I leave those better read on political economy to correct me here). This enforcement can make itself visible in a multitude of ways, including general criminalization that predominantly targets the non-ruling classes, as well as policies that directly protect material interests.
This specific critique asserts that structurally, police forces are unequipped to address those community problems. What tools do they have to improve the communities they occupy, besides criminalization?
There are idealists, but I’m not among them.
Cool hypothetical analysis, except that the actual evidence I pointed out strongly suggests that the police actually aren’t all that hot on property rights, as you claimed they were.
I don’t think any idealist ever claimed to be an idealist.
I’m sorry, I think I was unclear on this. I’m not discussing “property rights”, but “property as the material interests of the ruling class”. I don’t think I’d disagree with you that police aren’t so concerned about property rights (or human rights, in some absurd and obscene cases), especially given the bullshit that is civil forfeiture.
I think you missed the key word in the original post - “Historically”.
Think about the period where societies transitioned from not having a police force to having a police force. Who made that call? Why did they make it?
By my reading (feel free to dispute it), most police forces were initially formed by a ruling class because each maintaining private security for their assets was getting expensive.
Let’s try and keep a reasonable level of discourse here, shall we?
The only thing that makes these sorts of submissions even slightly bearable is if we manage to keep our comments useful.
“Fuck <x>” doesn’t really do that, now does it?
Depends on the perspective.
An example (without backstory of course, just a quick timeline): a cop shoots a child and lies in his statement. Some days after the incident, a video surfaces showing that the gun had not fired accidentally. His co-workers also chipped in money to get him a TV-personality lawyer.
Now, would I be “unfair” if I said fuck cops based on that fact?
Based on the fact alone that his co-workers KNEW what happened and still decided to protect him, and that after ~8 years he has not yet served jail time, would I still be “unfair” to call him and his co-workers (where NO ONE EVER STOOD UP because, well, fuck everyone outside the “force”) a bunch of uncivilized pigs (because usual pigs are way more civilized to their community)?
Based on the fact that recently he said that he does not regret a thing, what stance would you have?
Yes.
Even if it’s 100% true as you said, maybe fuck those cops is justified. Fuck all cops absolutely isn’t. Stereotyping isn’t right no matter who you do it to. Going that way makes you no longer a principled objector to injustice, but a promotor of more tribal conflict. No thanks, we have quite enough of that already.
If you followed what I described, though, you’d have seen that even then, no one took a position against them. So in this context, yes, fuck all of them is very appropriate.
I don’t know in what part of the world you live, but in many cases, police officers act like they own everything with higher officials backing them up.
Different experiences yield different points of view. If you had seen the equivalent of a police squad beating the shit out of 70-90 year old people while they protest their pension cuts, with NO ONE getting punished for it, you’d have the same view.
The above also applies to “tribal conflict” you mention. When you (not personally you :) ) fuck someone up completely, you have to consider Newton’s third law, which brilliantly applies to human nature in many cases: for every action, there is an equal and opposite reaction.
I’d say that that’s a disturbing story, but without a link to the source it’s just hearsay and only slightly less tiresome than “fuck cops”.
I’d also say that for every handful of cops like that, there are hundreds who are quietly doing their jobs and making their communities better.
I’d also also say that none of that has a damned thing to do with the practice of technology and thus should be somewhere else.
You are correct. So, for reference: https://en.wikipedia.org/wiki/2008_Greek_riots
The conclusion to draw here is that these sorts of submissions are not even slightly bearable.
“Fuck cops.” is useful, as the banner of one’s unapologetic stance in the face of massive oppressive forces.
Fuck cops.
Alright, more plainly: go fly your ineffectual little banner where it won’t clutter up the place and set further precedent for flaccid, intellectually-light, worthless me-too-ism.
You don’t even differentiate between the different branches of law enforcement, the different units in a given department, the different counties and states and jurisdictions. Nah, gee whiz, it’s just “fuck cops o’clock”.
Isn’t it a bit of a double standard to protest for “reasonable levels of discourse”, and follow it up with a vague what-about-ism? I made my perspective very clear in the comment above here, why send this angry and intellectually bankrupt response?
Check the timestamps–your more articulate post didn’t exist when I wrote that reply.
The fact remains: you’d be better off posting materials on how to take direct action against the oppressors than to waste space here by posting “mmmm yeah fuck cops” or “some source I haven’t linked articulates this rather abstract political argument about police”. The problem with both of those is that they are divorced from reality, either because they aren’t actionable (unless you are specifically suggesting intercourse with law enforcement) or because they are too abstract (a critique on how cops further the interest of the ruling class, which is both obvious and useless if you aren’t in the ruling class).
I’d frankly prefer seeing people linking to relevant material and owning that, instead of hiding behind lame outbursts or navel-gazy philosophy–or, perhaps, if it isn’t so important that you want to oppose it with violence, quit bitching.
The middle ground–of both failing to oppose the supposed oppressors and failing to quietly endure them–just leads to noise in otherwise quiet and polite communities.
I’m sorry if I came across as trying to be misleading but I honestly meant this reply; not the comment that followed it.
The issue is that praxis (regardless of the political camp you’re in) must be informed by your beliefs and understandings. I can’t just say “we should do such and such things,” without informing those actions with some sort of understanding of the systems and situations I’m proposing to act upon.
Not to mention, I’m spending the time I can to write out honest and straightforward responses, but it won’t always be sufficient, and I’d also prefer not to just deflect discussion by stacking the decks with lengthy reads! However, a good introductory read on the relations I touched upon in the linked comments from above would be Wage Labor and Capital; and although I personally embrace a variety of strategies for making the future brighter, I personally agree most with Gilles Dauvé’s writings.
(EDIT: grammar)
Thanks for the links!
making wireless networks not suck kinda depends on managing airtime properly – this is why wifi sucks and LTE has thousands of pages of specs about a whole menagerie of control channels with heinous acronyms (that even the spec writers sometimes typo) that allocate who transmits what and when (and at what frequency).
Given what you’ve said, it’s surprising LTE works in practice, because I’d expect implementations to be buggy and screw everything up if the standard is hard to follow. Or are they abnormally well-tested in practice or something? :)
The standard is hard to follow only in that there are plenty of moving parts and many different control channels, because shared resources – such as airtime and radio bandwidth (and backhaul/network bandwidth) – need to be allocated and shared precisely among many different devices.
If you want to avoid getting buggy implementations you can make formal models for all the layers – the RF layers, the bits that get modulated onto RF, and the protocols that get run over those bits. Formal models let you write software that can output valid protocol transcripts (that you can send to a radio) or validate a transcript (that you received from a radio) – all to make sure that every device is sending/receiving the right bits modulated the right way at the right times/frequencies.
Once you somehow obtain software that implements a formal protocol model, you (or someone who makes or certifies LTE equipment) can verify/fuzz code that runs on UEs and eNBs – both when it’s just harmless source code and also (if you have some SDRs) when it’s transmitting real RF in real devices in some test lab. So yes, implementations are indeed well-tested (and indeed, are required to be tested before they can be sold and allowed on real LTE networks).
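The transcript-validation idea can be sketched as a toy state machine (a sketch in Python; the state and message names here are loose stand-ins for real LTE procedures, and a real formal model also covers timing, frequency allocation, and bit-level encoding):

```python
# Toy protocol model (not real LTE): in its simplest form, a formal model
# is an explicit transition table, and a transcript is valid iff every
# message is an allowed transition from the current state.
TRANSITIONS = {
    ("idle", "rach_preamble"): "connecting",          # device asks for access
    ("connecting", "rrc_connection_setup"): "connected",
    ("connected", "data"): "connected",               # payload traffic
    ("connected", "rrc_connection_release"): "idle",  # tear-down
}

def validate_transcript(messages):
    """Return True iff the message sequence is a walk through the model."""
    state = "idle"
    for msg in messages:
        state = TRANSITIONS.get((state, msg))
        if state is None:
            return False
    return True

print(validate_transcript(
    ["rach_preamble", "rrc_connection_setup", "data", "rrc_connection_release"]
))  # True
```

The same table can drive a generator (emit valid transcripts for a radio to transmit) or a fuzzer (mutate transcripts and check that the implementation rejects the invalid ones).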
hello, thx C-Keen, i’m the creator of this project, thx for inviting me here. any questions from anyone feel free to ask.
Neat project. I posted the same idea years ago on Schneier’s blog for subversion concerns. Idea was that physical possession of one’s hardware usually leads to compromise. Most stuff on a computer doesn’t have to be trusted either. High-assurance security also teaches to make trusted part as tiny & reusable as possible. I think I was screwing around with ARTIGO’s, smartcards or old-school game cartridges when I got the idea of a PC Card or ARTIGO-like module featuring secure processor, RAM, and I/O mediation. This would plug into desktops, servers, monitors, laptops if low-power, and so on. Ecosystem could show up like with iPod accessories probably starting in China where it’s cheaper. Later, I noted a private company came up with something similar and probably had patents on it so I backed off temporarily. Can’t recall its name but brochure had one or two in there that looked just like my marketing.
Projects like yours are a nice testing ground for this concept that I keep shelved but not forgotten. Interesting to see which decisions will work in the market and which won’t. Important before an expensive, security-oriented product is attempted. The project is nice except for one little problem it shares with others: the ARM processor. ARM Inc is about as opposite of protecting freedom as they can be. MIPS isn’t much better. Both have sued open-source and startup competition for patent infringement. “Open” POWER is an unknown. RISC-V isn’t on the market yet. The only FOSS ISA that’s production grade right now is Cobham Gaisler’s Leon3 SPARC CPUs. They’re GPL’d, have fabbed parts at who knows what price, the SPARC ISA is open, Open Firmware exists, & products only need a sub-$100 fee for the trademark.
http://www.gaisler.com/index.php/products/processors/leon3
Note: the OpenSPARC T1 and T2 processors were GPL’d, too. FOSS workstations, servers and embedded systems should be all over these in terms of getting them fabbed and into real systems. Instead they stay ignored in favor of x86 and ARM, even where performance isn’t critical.
totally cool man. i’m familiar with gaisler research stuff, i looked at it years ago. ooooOoooo nice, nice, nice: LEON4 goes up to 1.7ghz in 32nm, is 64-bit and is SMP ready right now. niiiiice. oo, that’s really exciting. and there’s a simplified developer board that runs at 150mhz (good enough for testing, i bet it’s like 180nm or something)
having found the GPLGPU and the MIAOU project i think we have enough to put something together that would kick ass.
awww darnit, LEON4 is still only 32-bit. aw poop :)
OpenSPARC T2 is 64-bit. Some smaller projects just knock some cores and stuff out of it to simplify it.
http://www.oracle.com/technetwork/systems/opensparc/opensparc-t2-page-1446157.html
Gaisler is still best for embedded and customizable. Wonder how hard it would be to make it 64-bit.
the crucial bit is the SMP support, to be able to combine those…. opensparc… oracle… we’re not a huuge fan of oracle.. hmm interesting: just underneath the popup that i refuse to click which prevents and prohibits access to their web site, i can just about make out that the opensparc engine is GPLv2…. mostly. haha i bet they were expecting that to be a roadblock to prevent commercial SoCs being made around it…. :)
Probably, haha. Yeah, I avoid Oracle wherever possible too. It’s just that these are supposedly GPLv2. Either a last resort for an OSS CPU or a top contender if you need one with performance. A T2 on 28nm would probably be no joke.
Are there any plans to make EOMA68 cards with a lot more than 2GB of RAM? I like the EOMA68 idea but 2GB of RAM is painfully, painfully small for the sorts of things I do (like “have lots of tabs open in chromium” or “compile stuff with ghc”) – it’s mostly tolerable on a recentish i5 laptop with 8GB of memory but I cannot find the masochism within me to buy a computer with 2GB of memory and use it like I use my laptop.
I would utterly love a non-horribly-expensive AArch64 machine with a comfy amount of memory (like, say, 8 or 16GB) and some SATA/SAS ports – if you can make that happen or if I can help make that happen I am willing to contribute my time and money.
I really do want some decent aarch64 hardware that isn’t violently expensive and that i wouldn’t mind using as my primary machine, but the situation is…frankly bleak.
hiya zkms, yes there are… but the SoC fabless semi companies have to actually come up with the goods… or we simply have to raise between $5m and $10m and get one made. i replied on that post for you, to explain a bit about what’s involved. 2GB RAM is the max you’ll ever likely see on memory-mapped 32-bit processors because they can only address up to 4GB RAM as it is!
we’ll get there. the project’s got another 10 years ahead of it at least.
nods – it’s weird that there aren’t any available SoCs that use aarch64 (or 32 bit SoCs that support LPAE) and expose enough address lines to connect a reasonable amount of RAM, tbh
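The addressing arithmetic behind the 2GB/4GB ceiling and LPAE is just powers of two (a sketch; how much of the 32-bit space is actually usable for DRAM depends on each SoC’s physical memory map):

```python
GiB = 2**30

flat_32bit = 2**32   # 4 GiB: everything a plain 32-bit physical address can name
lpae_40bit = 2**40   # 1 TiB: LPAE widens physical addresses to 40 bits
print(flat_32bit // GiB)  # 4

# DRAM has to share that 4 GiB window with MMIO/peripheral and boot ROM
# regions, which is why boards built on 32-bit SoCs commonly top out
# around 2 GiB of usable RAM.
```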
Very cool project!
I hope these questions aren’t too basic, but I’m not familiar with small ARM computers like this and I couldn’t find the info on the Crowdsupply page or updates:
1) When you say Linux 3.4 is supported, does that mean just 3.4 or 3.4 and all later versions? I saw in one update you mentioned 4.7 (I think) working but crashing frequently… What does future support likely look like: i.e. is everything getting into the mainline kernels and do you expect future versions to work even better, or should we expect to stay on 3.4 forever?
2) How close is the environment to “stock” distributions? I.e. when you say it has “Debian” on it, does that really mean it’s using totally standard Debian packages, tracking the official repositories, getting all the security updates from the Debian Security team, etc? Or is it more of a custom Debian-based environment tweaked for this hardware specifically? If the latter, how much does it differ from base Debian and is there anyone actively maintaining/updating it for the foreseeable future?
3) What does the installation/update procedure look like; is it as simple as on an x86 desktop where I’d just grab a bootable USB installer?
Thank you!
thx felix.
(1) no it’s precisely and specifically the 3.4.104+ version, which you can find is maintained by the sunxi community. this kernel has support for dual-screens, stable NAND flash (albeit odd and quirky), accelerated 2D GPU provision, hardware-accelerated 1080p60 video playback/encode provision and much more. it’s a stable continuation of what allwinner released. i’m currently bisecting git tags on linux mainline; so far i have: v3.4 works, v3.15 works, v4.0 works, v4.2 lots of segfaults, v4.4 failed, v4.7 failed. so it’s a work-in-progress to find at least one mainline stable kernel.
(2) yes completely “normal” - exception being the kernel - there’s a huge active community behind the A20 but i will not be “holding anybody’s hand” - you’ll have to take responsibility amongst yourselves as i am working on delivering hardware to people and, as i’m only one person, i simply don’t have time. i’m anticipating that people will help each other out on the mailing list.
(3) sigh the standard process should be to have an initrd installer (debian-installer netboot) but that’s actually too complex for developers to cope with, so instead what they do is create “pre-built” images. i REALLY don’t like this practice but for convenience i’m “going with the flow” for now.
feel free to ask more :)
Talking about “overdiagnosis” in a vacuum without talking about the devastating costs of underdiagnosis is horrible practice (especially given how much ADHD is underdiagnosed in women). Please read the “The Impact of ADHD During Adulthood” section in that article. I’ll wait for you.
Self-medication with effective medication isn’t even really possible because amphetamines aren’t over-the-counter, so diagnosis is kinda a necessary condition to get access to effective meds.
The whole “overdiagnosis” meme leads to parents not being OK with the idea of their children being diagnosed with ADHD or *shudder* being on effective medication like adderall or ritalin, or to people thinking/internalizing the idea that they can’t ~*~really~*~ have ADHD and that they’re just “lazy” or “flaky” or “apathetic” or whatever other shitty terms are used for people with executive dysfunction.
It also makes doctors more suspicious of people seeking treatment (after all, if it’s overdiagnosed it must be rare, so some of the people I see as a doctor have to be faking it!), which actually causes harm to people who need access to medical care (finding a doctor who’s ok prescribing can fucking suck when you’re ADHD and have run out of meds).
I’m super critical of psychiatry, and there definitely are people who are diagnosed with it who don’t have it / don’t benefit from medication, but the “overdiagnosis” moral panic causes real harm to people who actually end up needing medical access (often to controlled substances which are impossible to legally get without the appropriate diagnosis).
the most spiteful thing about this isn’t just that it destroyed usability (especially if you need to use a keyboard and can’t use a mouse, or use a screen reader) and makes websites unusable and slow – it’s that there are many applications where javascript in web pages is actually useful.
for example, checking the checksum on a credit card number (so you don’t submit a typo’d credit card number and have to reenter everything on the page) (also, afaict you literally never need to click on the “this is a visa” radio button, the card type can be identified from the number)
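Both checks mentioned here fit in a few lines (shown in Python for brevity; the prefix table is a simplified illustration, not a complete BIN database):

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right,
    subtract 9 when the result exceeds 9, and check the sum mod 10."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def card_network(number: str) -> str:
    """Identify the network from the leading digits (tiny illustrative
    prefix table; real ones are longer and include ranges)."""
    if number.startswith("4"):
        return "visa"
    if number[:2] in {"51", "52", "53", "54", "55"}:
        return "mastercard"
    if number[:2] in {"34", "37"}:
        return "amex"
    return "unknown"

# "4111111111111111" is a well-known Visa test number.
print(luhn_valid("4111111111111111"))   # True
print(card_network("4111111111111111"))  # visa
```

This is exactly the kind of client-side validation that saves a round trip without the page needing to phone home at all.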
also it was cool when twitter let you look at threaded tweets on a timeline/userpage without having to change URL or open up a horrid cringey lightbox, it was fast and useful and wasn’t too onerous, unlike the new lightboxes.
i hope whoever invented those lightboxes for tweets has to use twitter on a computer that doesn’t have the latest CPU and doesn’t have 24 GB of memory and isn’t connected with a gigabit link to twitter.com. it is sufficient punishment.
Oh that’s why the Feeb did a “a reckless and forensically unsound password change” on the phone’s iCloud account? That’s what “professional” means?
What’s the point of the password change? Why can’t they just subpoena all the info on the servers?
The purpose elucidated by their letter justifying the action (linked from the above, here for convenience) was that it allowed the phone’s owners (not the shooters - it was a work phone) to access the existing backups. The letter does not raise the possibility of a subpoena to Apple.
That letter makes it sound like they reset the iCloud password to get at the backups without judicial oversight. I suppose if the employer / owner of the AppleId email was cooperating, that’s not too bad? Still, an interesting precedent.
Yeah, they did nothing wrong in asking to have it reset, but it leaves them in the position of having to argue that they’re incompetent, because the alternative is that they’re completely disingenuous (which is not actually proven as a matter of law, after all).
I know they should be using actual PGP signatures or whatever instead of just hashes; but:
Choosing to use MD5 at all in 2016 is a sign of negligence and incompetence when it comes to crypto.
There’s no excuse for choosing to use MD5 or SHA1 for anything today.
There’s no excuse for coming up with probably-incorrect handwaving arguments about how “MD5 is broken, but not broken in how we use it” or “we also provide sha1 hashes” instead of replacing it with an actually secure hash (there are faster and more secure hashes out there like blake2b).
Choosing to use MD5 or SHA1 (or other such hallmarks of bad 90s-era civilian cryptography) is as cringeworthy and negligent engineering as the safety engineering in 1960s American cars that ended with “padded dashboards” and 2-point seatbelts and “recessed hub steering wheels”.
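For comparison, the drop-in alternative really is this small: BLAKE2b has shipped in Python’s stdlib `hashlib` since 3.6 (a minimal sketch; the byte string is a stand-in for real release contents):

```python
import hashlib

def blake2b_hexdigest(data: bytes) -> str:
    """BLAKE2b from the stdlib: no known collision attacks, unlike
    MD5/SHA-1, and advertised as faster than MD5 on 64-bit CPUs."""
    return hashlib.blake2b(data).hexdigest()

# A bare hash hosted on the same mirror as the ISO proves nothing if the
# mirror is compromised; the digest has to be published over an
# authenticated channel (or, better, use real signatures).
digest = blake2b_hexdigest(b"pretend-iso-contents")
```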
If I’ve understood this correctly, in this case it wouldn’t really help? I mean, would a regular Linux Mint user verify it after download?
My take is that the installer should verify itself, something akin to signify on OpenBSD but for the whole image.