Davdroid: Calendar and address book sync (self-hosted with DAViCal)
K-9 Mail: Email (self-hosted with Dovecot)
Kore: Kodi remote control
MuPDF: PDF reader
LabCoat: GitLab client
A major reason I use Debian is that, as a user, I consider 90% of software lifecycles to be utterly insane and actively hostile to me, and Debian forces them into some semblance of a reasonable, manageable, release pattern (namely, Debian’s). If I get the option to choose between upstream and a Debian package, I will take the latter every single time, because it immediately has a bunch of policy guarantees that make it friendlier to me as a user. And if I don’t get the option, I will avoid the software if I possibly can.
(Firefox is the only major exception, and its excessively fast release cadence and short support windows are by far my biggest issue with it as a piece of software.)
I never really understood why short release cycles are a problem for people; then again, I don’t use Debian because its cycles are too long for me. For example, the majority of Firefox’s releases don’t contain user-visible changes.
Could you elaborate what your problems with Firefox on Debian are? Or why software lifecycles can even be hostile to you?
I’m with you. I update my personal devices ~weekly via a rolling release model (going on 10 years now), and I virtually never run into problems. The policies employed by Debian stable provide literally no advantage to me because of that. Maybe the calculus changes in a production environment with more machines to manage, but as far as personal devices go, Debian stable’s policies would lead to a net drain on my time because I’d be constantly pushing against the grain to figure out how to update my software to the latest version provided by upstream.
I’ve had quite a few problems myself, mostly around language-specific package managers that break something under me. This is probably partly my fault because I have a lot of one-off scripts with unversioned dependencies, but at least in the languages I use most (Python, Perl, R, shell, etc.), those kinds of unversioned dependencies seem to be the norm. Most recent example: an update to R on my Mac somehow broke some of my data-visualization scripts while I was working on a paper (seemingly due to a change in ggplot, which was managed through R’s own package manager). Not very convenient timing.
For a desktop I mostly put up with that anyway, but for a server I prefer Debian stable because I can leave it unattended with auto-updates on, not having to worry that something is going to break. For example I have some old Perl CGI stuff lying around, and have been happy that if I manage dependencies via Debian stable’s libdevel-xxx-perl packages instead of CPAN, I can auto-update and pull in security updates without my scripts breaking. I also like major Postfix upgrades (which sometimes require manual intervention) to be scheduled rather than rolling.
Yeah I don’t deal with R myself, but based on what my wife tells me (she works with R a lot), I’m not at all surprised that it would be a headache to deal with!
Every time a major update happens to a piece of software, I need to spend a bunch of time figuring out and adapting to the changes. As a user, my goal is to use software, rather than learn how to use it, so that time is almost invariably wasted. If I can minimize the frequency, and ideally do all my major updates at the same time, that at least constrains the pain.
I’ve ranted about this in a more restricted context before.
My problem with Firefox on Debian is that due to sheer code volume and complexity, third-party security support is impossible; its upstream release and support windows are incompatible with Debian’s; and it’s too important to be dropped from the distro. Due to all that, it has an exception to the release lifecycle, and every now and then with little warning it will go through a major update, breaking everything and wasting a whole bunch of my time.
I had this happen with Chromium; they replaced the renderer in upstream, and a security flaw was found which couldn’t be backported due to how insanely complicated the codebase is and the fact that Chromium doesn’t have a proper stable branch, so one day I woke up and suddenly I couldn’t run Chromium over X forwarding any more, which was literally the only thing I was using it for.
Because you need to invest too much of your time into upgrading. I maintain 4 personal devices with Fedora and I barely manage to upgrade yearly. I am very happy we have RHEL at work; upgrading 150 servers that often would be insane, even with automation. Just the investment in decent ops runs to years.
For me there is an equivalence between Debian stable releases and Ubuntu LTS ones; they both run at around 2 years.
But the advantage (in my eyes) that Debian has is the rolling update process for the “testing” distribution, which gets a good balance between stability and movement.
We are currently switching our servers from Ubuntu LTS to Debian stable, driven mostly by lack of confidence in the future trajectory of Ubuntu.
Eventually we will stop investing in chemical rocketry and do something really interesting in space travel. We need a paradigm shift in space travel and chemical rockets are a dead end.
I can’t see any non-scifi future in which we give up on chemical rocketry. Chemical rocketry is really the only means we have of putting anything from the Earth’s surface into Low Earth Orbit, because the absolute thrust to do that must be very high compared to what you’re presumably alluding to (electric propulsion, lasers, sails), which only works once in space, where you can do useful propulsion orthogonally to the local gravity gradient (or just in weak gravity). But getting to LEO is still among the hardest bits of any space mission, and getting to LEO gets you halfway to anywhere in the universe, as Heinlein said.
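To put a rough number on that requirement, here is a quick sketch using the Tsiolkovsky rocket equation; the specific impulse and mass ratio below are illustrative assumptions, not figures for any particular vehicle:

```python
import math

def delta_v(isp_s: float, mass_ratio: float) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)."""
    g0 = 9.80665  # standard gravity, m/s^2
    return isp_s * g0 * math.log(mass_ratio)

# A kerolox-class engine (Isp ~ 300 s) needs a wet/dry mass ratio near 20
# to approach the ~9.4 km/s commonly budgeted for an ascent to LEO.
print(delta_v(300, 20))  # roughly 8800 m/s
```

Numbers like these are why staging and very high thrust-to-weight ratios are unavoidable for launch, while low-thrust options only make sense once already in orbit.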
Beyond trying to reuse the first stage of a conventional rocket, as SpaceX is doing, there are some other very interesting chemical technologies that could greatly ease space access, such as the SABRE engine being developed for the Skylon spaceplane. The only other approach I know of that’s not scifi (e.g. space elevators) is nuclear rockets, in which a working fluid (like hydrogen) is heated by a fissioning core and accelerated out of a nozzle. The performance is much higher than chemical propulsion, but the appetite to build and fly such machines is understandably very low, because an explosion on ascent or a breakup on reentry would spread a great deal of radioactive material through the high atmosphere over a very large area.
But in summary, I don’t really agree with your point, or, more charitably, I’m not sure I’ve understood it; I would be interested to hear what you actually meant.
I remember being wowed by Project Orion as a kid.
Maybe Sagan had a thing for it? The idea in that case was to re-use fissile material (after making it as “clean” as possible to detonate) for peaceful purposes instead of for military aggression.
Atomic pulse propulsion (i.e. Orion) can theoretically reach 0.1c, so that’s the nearest star in about 40 years. If we can find a source of fissile material in the solar system (one that doesn’t have to be launched from Earth) and refine it there, interstellar travel could really happen.
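As a quick sanity check on the 40-year figure (taking Proxima Centauri at roughly 4.24 light-years and ignoring acceleration and deceleration time):

```python
distance_ly = 4.24   # Proxima Centauri, in light-years
speed_c = 0.1        # cruise speed as a fraction of c

# At a constant fraction of c, travel time in years is just light-years
# divided by that fraction.
years = distance_ly / speed_c
print(years)  # 42.4 years, close to the "40 years" above
```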
The moon is a candidate for fissile material: https://www.space.com/6904-uranium-moon.html
The problem with relying on a private company funded by public money, like SpaceX, is that they won’t be risk takers; they will squeeze every last drop out of existing technology. We won’t know what reasonable alternatives could exist because we are not investing in researching them.
I don’t think it’s fair to say SpaceX won’t be risk takers, considering this is a company who has almost failed financially pursuing their visions, and has very ambitious goals for the next few years (which I should mention, require tech development/innovation and are risky).
Throwing money at research doesn’t magically create new tech; intelligent minds do. Most of our revolutionary advances in tech have been brainstormed without public or private funding: one or more people have had a bright idea and pursued it. This isn’t something people can just do on command. It’s also important to consider that many people fail to bring their ideas to fruition but have paved the path for future development by others.
I would say that they will squeeze everything out of existing approaches; «existing technology» sounds a bit too narrow. And unfortunately, improving the technology by combining well-established approaches is a stage that cannot be too cheap, because they do need to build and break full-scale vehicles.
I think that the alternative approaches for getting from inside atmosphere into orbit will include new things developed without any plans to use them in space.
What physical effects would be used?
I think that relying on some new physics, or on contiguous objects a few thousand kilometers in size hanging more than 1 km above the ground, is not just a paradigm shift; anything like that would be nice, but its absence doesn’t make what currently exists a disappointment.
The problem is that we want to go from «immobile inside the atmosphere» to «very fast above the atmosphere». By continuity, this needs to pass either through «quite fast in the rarefied upper atmosphere» or through «quite slow above the atmosphere».
I am not sure there is a currently known effect that would allow hovering above the atmosphere without orbital speed.
As for accelerating through the atmosphere — and I guess chemical air-breathing jet engines don’t count as a move away from chemical rockets — you either need to accelerate the gas around you, or you need to carry reaction mass.
In the first case, as you need to overcome the drag, you need some of the air you push back to fly backwards relative to Earth. So you need to accelerate some amount of gas to multiple kilometers per second; I am not sure there are any promising ideas for hypersonic propellers, especially for a rarefied atmosphere. I guess once you reach the ionosphere, something large and electromagnetic could work, but there is a gap between the height where anything aerodynamic has flown (actually, a JAXA aerostat, so maybe «aerodynamic» is the wrong term) and the height where ionisation starts rising. So it could be feasible or infeasible, and maybe a new idea would have to be developed first for some kind of in-atmosphere transportation.
And if you carry your reaction mass with you, you then need to eject it fast. Presumably, you would want to make it gaseous and heat it up. And you want high throughput. I think that even if you assume you have a lot of electrical energy, splitting water into hydrogen and oxygen, liquefying these, then burning them in flight is actually pretty efficient. But then the vehicle itself will be a chemical rocket anyway, and will use chemical rocket engineering as practiced today. Modern methods of isolating nuclear fission from the atmosphere via double heat exchange reduce throughput. Maybe some kind of nuclear fusion with electromagnetic redirection of the heated plasma could work; it could even be more efficient than running a reactor on the ground to split water. But nobody knows yet what scale is required to run energy-positive nuclear fusion.
All in all, I agree there are directions that could maybe become a better idea for starting from Earth than chemical rockets, but I think there are many scenarios where the current development path of chemical rockets will be more efficient to reuse and continue.
What do you mean by “chemical rockets are a dead end”? In order to escape planetary orbits, there really aren’t many options. For interstellar travel, however, ion drives and solar sails have already been tested and deployed, and they have strengths and weaknesses. So there are multiple options here depending on the use case.
Thanks for this submission. My hunch is that an architecture which makes e.g. caching and speculative execution an observable part of the API is the better approach. AFAIU, MIPS does something similar and compilers learned to deal with it.
My own hunch is that we should be avoiding impure operations like getting the current time.
This post seems to be talking about trusting high-assurance languages for critical/sensitive tasks, and how those guarantees can be undermined if we run arbitrary machine code. That problem seems too difficult to me: surely a better langsec approach would be for the arbitrary code to be in a high-assurance language, with the only machine code we execute coming from trusted critical/sensitive programs?
I would think a langsec approach to e.g. preventing timing attacks in Javascript is to make Javascript (the language) incapable of timing. Or, at least, providing a logical clock (also used for the interleaving of concurrent event handlers) rather than allowing access to the actual time.
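A minimal sketch of the logical-clock idea, in Python for brevity; the class and method names here are made up for illustration. The clock advances only when an event is handled, so repeated runs observe identical timestamps and wall-clock timing differences become invisible to the program:

```python
class LogicalClock:
    """Deterministic clock: ticks once per handled event, never per wall-second."""
    def __init__(self) -> None:
        self.ticks = 0

    def now(self) -> int:
        return self.ticks

    def advance(self) -> None:
        self.ticks += 1

def run_event_loop(clock: LogicalClock, handlers) -> list:
    # Handlers see only the logical time; the interleaving itself defines it.
    observed = []
    for handler in handlers:
        observed.append(clock.now())
        handler()
        clock.advance()
    return observed

clock = LogicalClock()
times = run_event_loop(clock, [lambda: None, lambda: None, lambda: None])
print(times)  # [0, 1, 2] on every run, regardless of real elapsed time
```

A handler that tries to time another handler only learns how many events ran in between, which leaks far less than a high-resolution wall clock.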
For the vast majority of uses of a computer at some point the application will need to know what time it is. Avoiding impure operations is throwing up your hands on general computing as a useful tool. I don’t think this is quite what you meant to say though. Can you clarify?
The clock is a sensor and needs to be treated as such with permissions and similar. Many applications don’t have a need for the clock.
I would also add: be ready to be questioned back with “What is it that you’re trying to do” when presenting an expert with a possible solution. Chances are you’re trying to solve a problem in a very roundabout way, and there is a more idiomatic, better way to solve your problem. So don’t be shy in describing your end goal.
Don’t forget to describe your constraints along with your goal. Otherwise you might get very unhelpful answers.
Exceptional according to your classification, but this is quite standard in these parts of the world, so it is not really exceptional from my point of view. We are expected to work 37.5 hours a week and we have core hours from 9 am to 3 pm. Working remotely is frequently possible as well.
>While Kodi is undoubtedly the most popular media player software in the world right now
That’s news to me, I’ve never even heard of Kodi. Are we sure it isn’t VLC or iTunes or Windows Media Player? Where are usage stats to back this up?
I imagine that Kodi’s user population is difficult to measure. I have seen Android HDMI sticks that ship with a custom Kodi build enabling users to browse various streaming media repositories. It’s a fascinating ecosystem that seems to be largely invisible to people in the American Netflix/Hulu world.
It’s largely due to the American world of streaming services with reasonably large back catalogues and favourable geographic restrictions.
The less salubrious parts of the British media have been on a bit of a crusade against what they have dubbed “Fully Loaded Kodi Boxes”, FireTV clones or equivalent android boxes, with Kodi preloaded and a bunch of unofficial addon repositories enabled.
The unofficial addons of course allow the (generally less tech-savvy) user to view pirated streams of TV shows, and live channels, usually with somebody else’s advertisements superimposed or injected in the regular commercial breaks.
The usual suspects are obviously unhappy about this, as the only thing they hate more than internet pirates watching content they haven’t had their pound of flesh for, is the general public watching content they haven’t had their pound of flesh for!
Thus, you have fantastic, sponsored (allegedly) content from the likes of The Daily Mail, asserting that “Kodi Boxes” will literally kill your family [1].
The author doesn’t touch enough on just how much social damage can come from this feature. Permanent ostracism, shunning, blackmail, lower pay, and so on. People love judging and hating on other people when they can point fingers while hiding their own malfeasance. Certain things will show up while others won’t in a model where we try to blockchain everything. The idea that they’ll evolve some form of forgiveness instead runs contrary to how human nature has played out so far.
Now, one might argue the benefits outweigh the harm. That would be a different discussion even though the article does list some of them. It gives a start on it.
I haven’t yet finished reading the post, but what strikes me is that the points of the book “Debt: The first 5000 years” are quite relevant. There has always been some form of forgiveness between closely knit social circles. But this breaks down at scale and we need systems that are resilient against free riders, while still allowing forgiveness.
One thing that has made me skeptical about block chains as a big source of change is how often hard forking comes up and/or happens. ETH has been hard forked to deal with a bug. Bugs aren’t going away any time soon. So is this decentralized blockchain actually valuable if you’re still going to have people doing hard forks when something happens they don’t like? I just don’t really see humans wanting to be in an artificial prison when it harms them.
Sounds a lot like democracy. When everyone votes the way I do, it’s obviously correct. When I’m in the minority, it’s time to change the rules.
Nice comparison, but maybe not quite it. A democracy’s design usually does what it’s supposed to so long as its voters keep the politicians following the established rules. They’ll then do what they voted for. All the details of activities stay within the parameters of the system. The system itself can only be modified (Constitutional Amendment) with a movement or vote so huge that a malicious subset will have a hard time achieving it [1]. The thing with these blockchain forks is that they’re more like a majority casually amending the Constitution, or overthrowing it on a whim when their reps screwed up, usually after a big investment into their reps’ scheme. It just seems more risky than existing democracies, whose overall scheme stays largely intact as they fix problems. Blockchain forks in finance are more like eminent domain or civil forfeiture, where they can just straight up take people’s shit.
[1] Unless it’s a crooked, capitalist system where a tiny few can control all the money and media. That could amplify the few.
It’s worth pointing out that ETH wasn’t even forked because of a bug in the normal sense. AFAIK there was nothing wrong with the system itself, it was just a poorly-written contract. So it wasn’t forked because something was fundamentally wrong, it was forked because a bunch of people lost money on a bad speculation and got pissed.
>it was just a poorly-written contract. So it wasn’t forked because something was fundamentally wrong
Sort of. Most people participating didn’t realize something like that could happen. So the system might be bad, at least in terms of not meeting their intended requirements. Your summary still applies, as the whole thing was bad speculation with that much money thrown at an untested investment concept. It should’ve been much smaller: more like bug-bounty size, with significant pen testing before being activated.
>Most people participating didn’t realize something like that could happen.
I certainly see your point. But at the same time, isn’t “no regulation, every person for themselves” an intended feature of the system? Maybe I’m way off base about who was participating in the DAO, but it seems to me that the crypto-currency set are pretty keen on the whole self-reliance bit.
True, true. They acted on their beliefs about not needing regulation. They paid for it or almost did.
It also defeats the entire purpose of a Smart Contract, or at least the selling point. Smart Contracts are truth right up until they aren’t, in which case you aren’t any better off than with the authority-driven system we have now. We’ve just shifted the authority to a company/community. Kind of makes me think “authority is bad unless I’m the one (mostly) in charge”.
Exactly. Those in control right now are at least a decent number of people who mostly keep things stable. Replacing them with one of these cryptocurrencies or DAOs usually means a few still have power over the many. They’re just less reliable and liable than what came before.
It’s why I prefer a non-profit, multinational scheme with reputable people over all this crypto or DAO bullshit. People working within systems that worked, using methods that worked.
There is an enjoyable PHK talk about electronic voting where he points out that having a system where opposing parties are guaranteed to be looking over each other’s shoulders is a lot more tractable than trying to solve the problem electronically.
It sounds like a funny thing to picture. It also sounds like it lacks voter confidentiality. That’s a requirement in the US system, to reduce coercion and retaliation. I learned this in Schneier’s Applied Cryptography; the requirements were tougher than I thought, with some contradictory.
Personally, I’m against electronic voting, since verifying it is either way too complicated or impossible depending on the system. Optical-scan paper ballots, like those used for tests in schools, are the best route. Easy to use, quick to scan, and easy to check.
One thing I wish Unix people would stop doing is ignoring the existence of PowerShell on Windows. I get that this guy is talking about find in particular, but I challenge shell aficionados to at least look at it, rather than state that Cygwin (which is rather painful to use especially at first) is the only way to get a real shell on Windows.
The first time PowerShell definitely saved me time as a programmer was for a very find-style use case: specifically, finding a .docx that my brother had misplaced. Like awk on *nixen, PowerShell is a language you have access to on every Windows computer since Windows 7. If you’re going to bring up shells on Windows, it’s very much worth a look.
My problem with PowerShell is that it’s always locked down, so I cannot use it productively. That, the double clicking, and no middle-button pasting mean Windows is just a frustration to use, for me :~(
Yeah, Windows still needs to work on its out-of-the-box console experience. If you own or have admin on the machine that you’re using PowerShell on, you should be able to unlock it via Set-ExecutionPolicy. I usually see advice for RemoteSigned.
What do you mean about the double clicking?
For what it’s worth, it’s worth installing ConEmu if you haven’t already and if you’re going to be on a Windows machine for more than a few hours. It allows for a lot of the terminal conveniences that you get on OSX/Linux, regardless of what shell you run in it.
I have used PowerShell on a few occasions but still very much prefer the Bash that comes with the Portable Git install on Windows. Although Bash might not be the best shell (I don’t have any opinion about that), it is quite usable, having sensible tab completion and history support. In addition, PowerShell often requires you to write multi-line scripts for tasks that are easily achieved by a one-liner in Bash.
I also default to Bash that comes with the Portable git install on Windows. I suppose I have two major complaints:
I suspect that the one-liner/multiline dichotomy is mostly around things where Bash has utils aimed at a given pain point, where PowerShell may not at this point. That being said, PowerShell has advanced a lot over time; PowerShell 1.0 on Windows 7 is far less mature than PowerShell 3.0 on Windows 8.1 or 10. Also, with the proper path setup, one can use those utils from PowerShell if you want to.
I tend to interact with Bash, and use PowerShell for things that interact with .NET or Windows Scheduled tasks.
Dismissing PowerShell out of hand, without at least acknowledging it.
I think that for a lot of people, the advantage of Bourne shells + basic UNIX utilities is that it is available on any Unix-like system. Although Powershell has been ported to other platforms, it requires a fairly large .NET runtime that people do not want to install on servers (AFAIK Mono AOT is still pretty limited in platform support).
As a result, Powershell is pretty much confined to Windows.
PowerShell’s cross-platform story is weak and mostly Windows-centric. But so was the author’s Cygwin comment.
PowerShell doesn’t really feel like a shell in the sense of being designed for interactive use. It may well be a fine programming language, but it’s just too verbose to use as an actual shell, IME.
He mischaracterized Beekums’ argument. What it really said was: given the choice between the market leader and a smaller competitor, you should choose the latter, but make sure you can easily switch to the former if need be.
Same technique I’ve always pushed. I liken it to an economic version of redundancy in hardware: we know it can go wrong in any number of ways, so we make sure we can switch over without losing our data or computations. We should do choice of software similarly where possible. The prevalence of open APIs, formats, and code helps a lot with that these days.
This easily leads to not using the features of the smaller competitor. The smaller competitor needs to give me a risk-mitigation strategy: being able to switch away is just one; another may be having the source available in some shape or form. Maybe there are even more that I cannot think of right now.
I have real trouble with syntax highlighting. It makes everything unreadable to me. I can’t read most websites that display code because of the highlighting. Of course, acme, my text editor, lacks any syntax highlighting.
It’s interesting that I didn’t start like this. Before I discovered Plan 9, I used highlighting (usually) just like everyone else. I didn’t turn it on, but it was usually on by default in the vim configuration shipped by Linux distributions, and off by default in BSD’s nvi (or simply lacking, in the original vi shipped with Solaris). But if it wasn’t on already, I didn’t bother turning it on.
Now after I stopped using syntax highlighting for many years (close to a decade now), I simply can’t read code with highlighting on. If some colleague wants some help with some code, my first request is to turn nightclub mode off.
Oh yeah, I program using a proportional font too (Lucida Grande, 14 point): http://i.imgur.com/XovEU4g.jpg
Project Fortress (a programming language research project) proposed mathematical syntax for programming:
Mathematical syntax: The idea of “typesetting” the code using LaTeX has received mixed reviews, and many potential users find the issue of keyboarding the extended Unicode character set daunting. Nevertheless, even in its more verbose ASCII form, the syntax of Fortress, especially the notion of treating juxtaposition as a user-definable operation, has proved to be very convenient.
I have used Unicode Greek characters in my code, which I find very helpful, especially when doing scientific programming. My colleagues didn’t like that, however, and the toolchain was complaining too.
I think this might be unpopular and it’s certainly a bit bold, but I often think that my becoming a programmer rather than a mathematician has a lot to do with mathematical notation.
Also I want to point out that there is APL and that, despite my previous statement, I’ve seen impressive things done with it.
Ken Perlin wrote a small post arguing that one approach is more procedural, while another is declarative (setting up a state of the world and shifting the world while preserving correctness).
As for APL, I wrote k professionally for about a year, and am happy to talk about that experience.
It’s been a while, but here are a few observations about k:
[x[i] for i in indices] would do in python. The only language I’ve seen that comes close to this power is MATLAB. LINQ might be able to do this as well, but almost certainly in a heavier manner.

That was really insightful. Thank you!
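For readers who haven’t used an array language, the indexing power being compared in this subthread can be sketched with NumPy, whose “fancy indexing” borrows the same idea:

```python
import numpy as np

x = np.array([10, 20, 30, 40, 50])
indices = [4, 0, 2]

# Plain Python needs an explicit comprehension:
print([int(x[i]) for i in indices])  # [50, 10, 30]

# In an array language (k, APL, MATLAB) indexing itself is vectorized;
# NumPy's fancy indexing expresses the same operation directly:
print(x[indices])                    # [50 10 30]
```

In k this is simply `x indices` (juxtaposition is indexing), which is part of why one-liners there replace loops elsewhere.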
I am not an acme user, but I considered trying it. What I always wondered about with those self-made commands. Are there like collections of them? I know they aren’t hard to make, but I think a collection could give you ideas, starting points, etc., kind of like a vim plugin or config. Even if I don’t go with huge packages usually (be it vim or zsh), I kind of like ideas that you find while looking at existing stuff.
Or is that generally discouraged for some reason and never felt important?
Acme users (all 10 of us) write their own commands when they need them. I don’t know of any collection of commands, sorry, what would be the point? Every environment is different. The philosophy behind acme (plus plumber) is that it’s an integrating environment of arbitrary Unix software (I’m counting Plan 9 as a Unix here), not an integrated environment. It makes it really easy to integrate any reasonably written Unix tool into your workflow, but it doesn’t really provide any such built-in integration by default.
You don’t need any collection of commands to start using acme, though. Usually on every new project I start with a blank environment and add stuff to it only as needs become apparent.
Okay, this is what I expected. I just wondered whether my assumption was correct that people do it that way. Also wondered if there are some scripts, not so much for reuse, but as examples, i.e. “smart tricks” on how to do something in a simple way, because I like to read code. :)
Thank you!
It’s pretty common to have a+ and a- for increasing/decreasing indentation, and c+ and c- for commenting/uncommenting. Beyond that I’m not aware of any conventions.
I have obtained some of mine from other people. Someone on a different lobste.rs thread shared Clear with me (to wipe a terminal).
I just finished making the first version of https://github.com/kori/flare , which is just a silly small thing, but it’s good for learning, I guess!
Next, I want to work on a notification daemon that receives notifications from libnotify and just prints them to the terminal.
I’m a new programmer and that’s pretty hard.
That’s more interesting than my early programs in QBASIC as I was learning. Don’t worry about how amateur your work looks to others: just keep learning, building, and improving. :)
Work: Working towards a web frontend for a simulator. The simulator is a system that has evolved over 30 years, with bits of Fortran, C++, C#, and now ClojureScript.

Work: Trying to write a duplicate file/directory finder and cleaner. It doesn’t go well, because I get an hour at night and my brain is fried, so I am not really effective.
Now do the same in javascript: http://gamedev.stackexchange.com/questions/30727/implement-fast-inverse-square-root-in-javascript
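For what it’s worth, the same bit-reinterpretation trick can be sketched in Python, using struct to reinterpret the float’s bits (the role typed arrays play in the JavaScript answers linked above):

```python
import struct

def fast_inverse_sqrt(x: float) -> float:
    """Quake-style approximate 1/sqrt(x) via integer bit manipulation."""
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5F3759DF - (i >> 1)          # the famous magic-constant step
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    y = y * (1.5 - 0.5 * x * y * y)    # one Newton-Raphson refinement
    return y

print(fast_inverse_sqrt(4.0))  # close to 0.5
```

After the single Newton step the relative error stays under about 0.2%, which was good enough for the lighting calculations it was invented for.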
I agree with the author, git concepts are non-orthogonal and subtle. When I think of quality UI software, I don’t think of git.
That being said, I would not get rid of the index. The index is the primary reason I use git. I think git add -p is an invaluable tool. If you don’t use it, you’re missing out.
I agree that git is awful to learn, but removing staging seems unnecessary. Even worse, they use funky flags where git uses commands, and they did away with push/pull in favor of publish/???.
I’m not impressed. I’ll stick with git and will continue to teach people how to use git instead of some seemingly better shell.
Yeah, it didn’t really feel to me like they changed much fundamentally. They just added some superficial aliases so you don’t have to run as many commands. I’ll write my own aliases when I need them.
I’m perfectly happy using mercurial, which doesn’t have a staging area. Whenever I use git, I get annoyed because it has this extra useless concept. Mercurial has “hg commit -i”, which is analogous to “git add -p”, to commit only a subset of the changes to the working directory.
I’ve always found a lot of utility in being able to stage certain things at once. Sometimes I make large changes without thinking about actual commits until I get to the end of a workstream, and having that all as one commit just seems generally not great, especially when there are atomic components that were tangentially related to the main work at best (like cleaning up a configuration file, or refactoring a particularly hairy section of code).
Mainly breaking things up like that seems like the courteous thing to do for your fellow developers, because that way if you’re ripping out a large component and want to do it quickly with reverts, you don’t pull out the secondary changes by accident too. And you can also generally write better, more relevant commit messages too.
You can do that workflow in mercurial as well using “hg histedit” and “hg commit --amend”.
Having a staging area isn’t related to the workflow and just adds another concept to learn when learning or teaching the tool.
You can also do git commit --amend if you want to amend a commit; however, you don’t have a tool for implementing a commit queue in hg.
If I have four things I’ve touched and I want to commit them separately, I want to verify that each of these things are in fact separate commits. I do not want to commit four things, and then try to uncommit them, and re-commit them once I’ve figured out what they should look like.
In git, I can add -p the first one, then stash the rest, run my program and verify it works as I expect, then commit just that staged bit (or change it some more). Then I can pop the stash and move on to the next commit.
This concept is useful to me. I can emulate it by creating four separate branches, or by copying my files around, but I find it actually useful to think in terms of the git concept of a stage so that this kind of committing is obvious to me. Tools that generate new intuition to the programmer are like candy to me.
Given that I very much use hg with a commit queue, I can’t really agree, though I haven’t tried it much with the base command set (I use evolve).
Still, the way I do something like that is to make my commits, then update to the first commit and test, then update to the next commit (by hash or using the children(.) revset) and test, etc. Then I can land any or all of them with one push. None of that requires evolve, btw, but in practice I will usually find that I’ve mixed up the changes a bit and have to move pieces between them, and that’s much easier to do with evolve and histedit.
You’re using the stage+stash as your queue / working area. I’m using the actual dag. When doing so, it’s easy to get mixed up with what is “current work” vs stuff I’m basing the current work on, but hg has phases, which make it easy to distinguish work in progress vs base. (I use aliases for it, but hg log -r 'not public() and only(.)' gives you a basic patch queue listing.)
In git, I can add -p the first one, then stash the rest, run my program and verify it works as I expect, then commit just that staged bit (or change it some more). Then I can pop the stash and move on to the next commit.
I find this the most confusing part of git, fwiw. I can never remember whether stash is going to stash changes that have been added or just changes that have not. Worse, stashing and unstashing seems to change which things have been added, which breaks my mental model of what stash is supposed to do.
which breaks my mental model of what stash is supposed to do.
This is a variation on the “X isn’t intuitive”, and it doesn’t really offer any option to me in having a discussion about it.
Can I agree that the user-interface is bad without also agreeing that we need to hide/get-rid-of the index?
I can never remember whether stash is going to stash changes that have been added or just changes that have not. Worse, stashing and unstashing seems to change which things have been added
Read the manual page carefully. The first sentence begins: Use git stash when you want to record the current state of the working directory and the index, but want to go back to a clean working directory and that’s exactly what it does, but you need to understand what the index is.
The thing that I’m referring to is mentioned: If the --keep-index option is used, all changes already added to the index are left intact and that -p implies this.
This is a variation on the “X isn’t intuitive”, and it doesn’t really offer any option to me in having a discussion about it.
Ok, I’ll be specific. The man page says: “Remove a single stashed state from the stash list and apply it on top of the current working tree state, i.e., do the inverse operation of git stash save. The working directory must match the index.”
$ git status
# On branch master
# Changes to be committed:
# (use "git reset HEAD <file>..." to unstage)
#
# modified: foo
#
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: bar
#
$ git stash
Saved working directory and index state WIP on master: b12ed82 test
HEAD is now at b12ed82 test
$ git stash apply
# On branch master
# Changes not staged for commit:
# (use "git add <file>..." to update what will be committed)
# (use "git checkout -- <file>..." to discard changes in working directory)
#
# modified: bar
# modified: foo
#
no changes added to commit (use "git add" and/or "git commit -a")
This is not what the word “inverse” conventionally means; it is extremely confusing to have an operation described as an “inverse” that doesn’t actually invert the thing it is supposedly the inverse of.
Can I agree that the user-interface is bad without also agreeing that we need to hide/get-rid-of the index?
I don’t think so, unless you have a better proposal for how to improve it.
it is extremely confusing to have an operation described as an “inverse” that doesn’t actually invert the thing it is supposedly the inverse of.
I don’t see why that means I have to agree that the index is bad.
Can I agree that the user-interface is bad without also agreeing that we need to hide/get-rid-of the index?
I don’t think so, unless you have a better proposal for how to improve it.
“People understand things I don’t, so we need to get rid of them”.
I find that attitude really off-putting.
I don’t see why that means I have to agree that the index is bad.
Well, the part that it doesn’t invert is the index. As far as I can see the index causes this problem and removing it would solve the problem.
“People understand things I don’t, so we need to get rid of them”.
That’s not what I’m saying (and I’ll thank you not to put words in my mouth). To agree the user-interface is bad, you must believe a better user-interface is possible (otherwise the idea of “bad” is meaningless). To believe that a better user-interface is possible you must believe that a specific possible alternative user-interface is better. If you don’t believe gitless is better, which possible alternative user-interface do you believe is better?
This is where git stash --keep-index comes in handy. Stash would then only apply to changes to bar.
Sure, and apparently there’s also a flag to the load to restore changes to the index. But neither of these is the default; fundamentally, stash is clearly confused about whether it’s operating on the index or the working copy or both. If even git’s own commands can’t get staged vs unstaged straight, what hope do us poor users have?
I won’t argue that git is the most intuitive tool by any means, but it does have the ability to address the needs at the least.
That said, the git log man page to this day, after 9 years of use still contains things I don’t understand after having read through it numerous times.
Both the mq and the shelve extension implement commit queues for hg, and they come with hg’s default installation. Commit queues are a power-user feature, so it’s fine if this isn’t front and center for a new user yet, but they are definitely there.
Tools give us power and allow us to be individually more efficient, but they require we learn new skills which can take time and make us struggle which is not something many intermediate programmers are used to.
To paraphrase:
I’m perfectly happy using basic, which doesn’t have lambdas or macros. Whenever I use lisp, I get annoyed because it has this extra useless concept. Basic has goto and gosub, which are analogous to the lambdas and functions that I want to use.
The real issues are whether, having this tool of thought you are more efficient (I am; others say they are) enough to make it worth learning (Others are more convinced than me on this point) or worth fighting inertia (i.e. Git is popular, so it is useful to think in git so that I’m better able to work in git).
As a specific example, I never want to hg commit -i in git – I’ve checked my history and I’ve never typed git commit -p (which works the same way), and I noticed whenever I use git add -p it’s followed by a git stash so I’m pretty certain my goal is to make sure that my single staged commit works in isolation (i.e. if someone just cherry picks this commit, will it work?) This is important because we don’t really have patches in git (which actually solve this problem).
I wonder how we can better talk about this conflict. Is the investment of learning worth the trouble in later efficiency? We talk about usability often without putting it into context. Git is worth learning for a developer, because you use it everyday. The banking website that you use once a month should be much more aligned with the concepts that you can be expected to know, even if that makes you ineffective.
This is something I often wonder about.
Too many people respond to things they do not understand with an opinion about them: How much they don’t like them. When I see someone do something I cannot do, I want to learn how they do it; If someone can program faster, and their programs are shorter or more correct than mine, I want to know how.
If they say “I use the git index”, then I know I need to learn it. Whatever opinion I might have had goes right out the window at that point.
The need is not to have a stage or stash or whatever. As the paper says, there are highlevel goals that developers want to accomplish. The stage and stash are tools that can be used to accomplish those goals, but there are a number of other ways. And you can design systems with more or fewer separate concepts. I prefer systems with fewer concepts that can be used in flexible ways. For that reason, I prefer using the straight dag as much as possible, and figuring out the minimum necessary extensions needed to make the dag handle my goals. The stage feels like a big separate concept to me – not wrong, really, but big and different and I’d rather eliminate it if possible. Similarly, the stash feels like it explodes out the set of possible states unnecessarily. You need some extra states, but I’d rather express them in terms of dag nodes than some completely different thing.
Honestly, I’d like to think of my working directory state as just another node in the dag, one whose hash and exact contents are only determined when necessary. I should be able to use it the same way as any other node in the graph. True, if you go whole hog on immutability, then you need to record a hash for every single keystroke that modifies a file, but I like to think of not having those nodes as an optimization on top of the conceptual dag containing them. And it turns out that you kind of want the same optimization for other nodes – for example, if you rebase a patch stack on top of upstream changes a dozen different times, then all those intermediate nodes (almost length of patch stack x number of rebases of them) are similarly uninteresting, and I would happily discard them even if it means edges in the dag sometimes represent more than one intermediate (garbage collected) node.
The need is not to have a stage or stash or whatever.
I certainly don’t have this need, to “not have a stage or stash or whatever” and so I reject the theory that this need exists.
there are highlevel goals that developers want to accomplish
I think this kind of thinking leads to mistakes.
I don’t want to accomplish any of them: I want to show my code to someone, or I want my feature to be on the production system. Everything else is a means to that end, and so I look at how I can do the things that I want. Tools enable that.
I prefer systems with fewer concepts that can be used in flexible ways.
If we say we should prefer hg because it is simpler than git, we are making a mistake. neither “git” nor “hg” are a calculus for version control, but both a set of very messy methods. If you want something mathematically satisfying – the scheme of version control, you should look at patch theory because it may fill that need.
However I have things to do now, and that means I’m looking at the common lisps of the world, not the schemes: I want a robust vocabulary, so I can say what I mean, and get what I want; succinct enough to avoid errors, but I do not want to do version control in brainfuck either.
Both git and hg are robust, but git is more robust. That’s important to me. Can you tell a computer what to do without an index? Sure, but there was a time when programmers used punch cards and paper tape. I certainly don’t want to return to that.
I’d like to think of my working directory state as just another node in the dag,
I’m not interested in this at all.
I am not making most of the arguments you are disagreeing with. In fact, we are saying the same thing in your first two quotes. But never mind that.
While to me, hg feels conceptually simpler yet equally powerful to git, that was not my point. (Nor did I say it – where did I mention hg?)
I am saying that a small number of concepts, flexibly applied, results in a better tool than one built from a larger set of nonorthogonal concepts that cover the same space (ie, the space of tasks you actually need to accomplish.)
It is also the case that once you use a tool successfully for long enough, it gets harder to distinguish what you’re doing from how you’re doing it, and alternative ways of accomplishing the same thing appear to be pointless and overcomplicated, because you are evaluating them according to how well they mimic your current tools rather than how well they accomplish the task at hand. Perhaps you are doing that here. Perhaps I am too unfamiliar with the git toolkit to see some large advantages it has.
hg feels conceptually simpler yet equally powerful to git, that was not my point. (Nor did I say it – where did I mention hg?)
Given that I very much use hg with a commit queue, I can’t really agree, though I haven’t tried it much with the base command set (I use evolve [a disabled-by-default extension that emulates something that has been in git since the beginning]).
…
I am saying that a small number of concepts, flexibly applied, results in a better tool than one built from a larger set of nonorthogonal concepts that cover the same space (ie, the space of tasks you actually need to accomplish.)
I’m not going to disagree with that theory, but it’s clearly not sufficient:
Git has a smaller number of concepts than hg: you can tell because hg needs extensions to implement alien concepts like the index.
Patch theory represents what might be the smallest possible number of concepts. It is worth some study, and it makes hg feel quite pointless: where patch theory highlights something fundamentally wrong with git, hg has the exact same thing wrong.
It is also the case that once you use a tool successfully for long enough, it gets harder to distinguish what you’re doing from how you’re doing it, and alternative ways of accomplishing the same thing appear to be pointless and overcomplicated, because you are evaluating them according to how well they mimic your current tools rather than how well they accomplish the task at hand.
I think this is a silly way to think about things, but history is littered with people who cannot tell the difference between their opinions and reality.
I often stage changes at a time, make some more changes, stage those, etc., before finally having something I want to commit. (i.e. I don’t immediately follow git add -p with git commit) Sometimes this process itself is enough to make me change the direction of what I’m doing or decide I don’t want to make a commit at all.
I use “hg commit --amend” for that. If I end up deciding not to commit anything at all, “hg strip” will delete the WIP commit.
The evolve extension also provides “hg uncommit”, to remove files from a commit.
git gui is my preferred way to use git. Gitg/gitk/cola are also invaluable. The GitHub client was good originally, but they simplified it while also somehow making it harder for me to use.
I think git add -p is an invaluable tool.
And it gets even better with Magit, if you’re an Emacs user
I very much disagree with the representation of git. First, realize that git is just an immutable graph. Commits are nodes and the git commands simply manipulate these nodes. The staging area and the rest that the author describes is a lot more deducible if you consider the requirements for creating a new node.
I’ve not run into a problem helping people with a git problem that hasn’t been solved by drawing out the commits as a graph on the whiteboard.
drawing out the commits as a graph on the whiteboard.
Whether one thinks this is a good thing or a bad thing is a pretty accurate litmus test for git appreciation.
The TortoiseSVN client that I happened to use before using git did draw a graph of commits. But in SVN, you couldn’t easily manipulate it. This made the whole experience with SVN quite unintuitive. With git the internal model and the graphical representation had quite a few more things in common. For me, that made git much more intuitive than any other system I have used so far.
I saw this comment late last night, and it’s stuck with me. I think it’s very interesting.
I’ve always felt similar to @goalieca in that git has mostly made sense to me, even the commands people complain about. But then again I also tend to think of git as being very visual, even though it’s a command-line tool.
I envy you because I want to understand git’s ASCII graphs but even after years I cannot wrap my brain around what they mean and how they would “look” in a 3D space. I find that last bit is necessary for me to understand conceptual graphs.
Draw each commit as a node. A commit will have one arrow pointing into it (the previous commit). A merge commit will have 2. Each commit node has a diff based on the previous commit. For a merge it will be the resolution of the differences.
I tend to draw vertically with time as a sort of pseudo y-axis. Branches are horizontal.
For rebasing I like to draw the original line then cut that line and draw a new one for where I’m rebasing on. Same thing for cherry pick.
Git log and the ASCII graphs are confusing because it is a 1.5D projection.
Tesla’s pushed a bunch of really irksome “you don’t own this” mentalities with their cars. TOS'ing away how you use your vehicle is one. Another is access to their service manuals: you have to pay $3k a year for a subscription to the service manual! Also at one point even the option to pay for it was only open to Massachusetts residents (Mass has a “right to repair” law that by all sanity should be nationwide).
They may make a good product, but everything around their cars is profoundly anti-consumer. Please don’t support it.
It’s pretty poor behavior, agreed.
I look forward to the point where I have the option of literally not owning it (that is, access to a cheap on-demand service instead of putting down the capital up-front).
I feel like we lack good words to express the cultural upset of discovering that ‘ownership’ and ‘control’ are no longer the same thing (and may never be again).
The view of non-ownership spreading outside of tech is I think a realization by other companies, fueled from watching the tech sector, that copyright law is a legal construct ripe for abuse. Machines have relied on copyrighted materials forever, but back in the 80s no company left a footnote in their service manual saying “by reading this manual, you agree to only use Honda-brand replacement parts in the repair of this car.” Yet that’s basically what Tesla and others are relying on by shipping your car with a EULA: the car relies on copyrighted software to run, and they can set arbitrary usage requirements on that software, and thanks to more recent copyright developments you can’t even legally replace that software with something less insane. The response, I think, should be to legally disallow this “tainting” effect, so that any copyrighted material as part of a larger owned apparatus must be either open to any use or replaceable with something untainted.
‘Ownership’ and ‘control’ don’t have to be separate. We can still fix this.
Does copyright in the US actually limit people’s right to use software (as opposed to just make and sell copies or derivatives of it, etc.)? My understanding was that the licenses that start with “by using this software…” or “you don’t own the software, but a license to use it” fall strictly under contract law and as such have nothing to do with copyright. Am I wrong?
The situation in EU is exactly as you would’ve described though; copyright holders are given the exclusive right to control use.
They never were: owning the building vs. having the key. The guy with the key gets to enter, the guy with the paper that signifies ownership gets to sue for the key. But this has not really affected the common man and her property, because in order to operate things the key couldn’t be reasonably taken away.
The cultural upset will come when the kill switch is engaged too often and I expect that there will be a very fine line walked by the cooperations to avoid this situation.
There is a far more important reason to support Tesla though: climate change. Tesla is the only reason the automotive industry is (unwillingly) shifting towards EVs, but it’s far from a done deal, so continued success of Tesla is necessary until the shift to EVs is complete.
Climate change is certainly important, but Tesla isn’t our savior here, for two reasons.
First, while Tesla may have accelerated the trend, they’re now far from the only ones in the EV space. They’re just most techy/luxury brand. Now that there are other options on the market, you can totally avoid them for their shitty business practices.
Second, emissions reductions don’t actually depend so deeply on EVs. Cars & trucks currently make up 30% of US emissions[1], so if everyone went out and got PZEV vehicles (California-speak for a car with 90% less emissions than a normal vehicle), that would put transportation at 3% of US emissions, behind every other sector including agriculture. PZEVs aren’t necessarily hybrids or even remarkably fuel-efficient, many just have well-designed exhaust systems (modern Subarus, among others). This is good, since a car you can’t fill up at the local gas station is a complete non-starter for much of America.
Now we won’t quickly reduce transportation emissions by that much, for obvious reasons (much of transportation emissions are trucks that aren’t going anywhere, old cars will keep running for years yet, etc). Consider the marginal benefit climate-wise of getting a normal PZEV car vs. a very expensive luxury EV, as well as the personal cost of not being able to fill it up wherever. Even beyond Tesla, EVs are still in the upper range of “normal car” costs, and even electric motorbikes go for at least $10k.
EVs aren’t most of the answer to clean cars, at least not immediately. You do have good options outside them though.
I should point out that this doesn’t consider the carbon cost of oil extraction, refinement and transport. One estimate[1] pegs the cost of refinement at 2.5lbs of CO2 per gallon of gasoline, where burning that gallon produces 20lbs of CO2. Now CO2 isn’t the only or even most interesting greenhouse gas cars produce, but maybe that puts the effective total emissions reduction of a PZEV at around 80% ballpark. Keep in mind that much of that cost is incurred by power plants as well, so this isn’t necessarily a great advantage for EVs.
PZEV is about pollutant emissions, and has nothing to do with CO2. PZEV vehicles are beneficial for smog, but have essentially zero impact on global warming.
(Also, and this is an extremely minor nitpick, your math is off: if you reduce 30% by 90%, the remainder is actually 4%, because the total is now only 73% of its original value.)
You’re right about PZEV not covering CO2, thank you, I missed that. However, it does not follow that they make no difference in global warming. A revised estimate based on exhaust gas makeup here and Global Warming Potential of various compounds here: a spherical car in a vacuum emits 415 grams/mile of CO2 (GWP 1) and 1.39 grams/mile of NOx (GWP ~280) for a total of 804 Penguins Killed per Mile. NOx is covered by PZEV, so cutting the NOx emissions by 90% leaves you at 454 PK/m, or a total of a 46% reduction in greenhouse effect from a non-PZEV vehicle. It would seem we were both about halfway from right :).
So to effectively reduce your greenhouse impact without getting an EV, you want a car that’s both fuel-efficient and low-emissions. Hybrids look like a better investment now. Another option is recent-model motorcycles such as this thing, which gets 100+mpg and comes with such revolutionary enhancements as a catalytic converter and not-carburetor. Be wary of older bikes though: mine, for point of comparison, gets 40mpg and is party to neither of these newfangled technologies, and so is rather less than stellar environmentally.
(And yes, my math was off. I will consider the nit picked :)
edit: the linked bike is actually a tiny thing that tops out at 50mph. Serves me right for skimming. There’s a good collection of bikes that get 70+mpg and are highway-capable, so the point still stands. Small bikes and scooters might also be a valid option, depending.
I would caution against engaging in thought experiments which bear little relation to reality. The problem is far too urgent and dangerous to be saying “oh yeah, if we did this hypothetical thing which doesn’t work, we would solve it”. It trivialises the magnitude of the problem and provides misleading reassurance.
In reality, emissions of CO2 need to go down to zero, not reduce by some vague percentage. And in reality, you can’t put everyone or even the majority of people on motorbikes (consider families with children, for one thing).
Anyway, with your revised figures, I think you reinforced my point about the importance of Tesla and EVs generally.
I have done some audio programming, and am studying engineering, so I guess I have some knowledge about it. There are many who are better than me, though. I hope this isn’t too mathematical, but you need to have some grasp on differentiation, integration, complex numbers and linear algebra anyway. Here’s a ‘short’ overview of the basics:
First of all, you need to know what happens when an analog, continuous signal is converted to digital data and back. The A->D direction is called sampling. The amount of times the data can be read out per second (the sampling rate) and the accuracy (bit depth) are limited for obvious reasons, and this needs to be taken into account.
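To make those two limits concrete, here is a small stdlib-only Python sketch (function and parameter names are mine, not from any library) that samples a continuous signal at a given rate and rounds each sample to a given bit depth:

```python
import math

def sample_and_quantize(signal, sample_rate, bit_depth, duration):
    """Sample a continuous signal (a Python function of time, in seconds)
    at `sample_rate` Hz and round each sample to `bit_depth` bits."""
    levels = 2 ** (bit_depth - 1)   # signed range: -levels .. levels-1
    n_samples = int(duration * sample_rate)
    out = []
    for n in range(n_samples):
        t = n / sample_rate
        x = signal(t)                                   # continuous value in [-1, 1)
        q = max(-levels, min(levels - 1, round(x * levels)))
        out.append(q / levels)                          # back to [-1, 1), accuracy limited
    return out

# A 440 Hz sine sampled at CD quality (44100 Hz, 16-bit):
samples = sample_and_quantize(lambda t: math.sin(2 * math.pi * 440 * t),
                              sample_rate=44100, bit_depth=16, duration=0.01)
```

At 16 bits the rounding error per sample is on the order of 2^-16 of full scale; drop bit_depth to 4 or 8 and the quantization noise becomes clearly audible (or visible in a plot).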
Secondly, analysing a signal in the time domain doesn’t yield much interesting information; it’s much more useful to analyse the frequencies in the signal instead.
Fourier’s theorem states that every signal can be represented as a sum of (co)sines. Getting the amplitude of a given frequency is done through the Fourier transform: F(omega) = integrate(lambda t: f(t) * e^(-omega*j*t), 0, infinity). It works a bit like the following: you multiply f(t) by e^(-omega*j*t), where omega is the pulsation of the desired frequency, i.e. omega = 2pi*f, and j is the imaginary unit. (j is used more often than i in engineering.)
(Note: the Fourier transform is also known as the Laplace transform, when substituting omega*j with s (or p, or z; they’re “implicitly” complex variables), and as the Z-transform, when dealing with discrete signals. It’s still basically the same, though, and I’ll be using the terms pretty much interchangeably. The Laplace transform is also used when analyzing linear differential equations, which is, under the hood, what we’re doing here anyway. If you really want to understand most/everything, you need to grok the Laplace transform first, and how it’s used to deal with differential equations.)
Now, doing a Fourier transform (and an inverse afterwards) can be costly, so it’s better to use the information gained from a Fourier transform while writing code that modifies a signal (i.e. amplifies some frequencies while attenuating others, or adds a delay, etc.), and works only (or most of the time) in the time domain. Components like these are often called filters.
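The “multiply f(t) by e^(-omega*j*t)” idea translates almost directly into code. A minimal discrete sketch (my own naming, stdlib only) that reads off the amplitude of one frequency from a sampled signal:

```python
import cmath, math

def freq_amplitude(samples, sample_rate, freq):
    """Discrete approximation of the Fourier transform at one frequency:
    multiply each sample by e^{-j*omega*t} and sum (omega = 2*pi*freq)."""
    omega = 2 * math.pi * freq
    total = sum(x * cmath.exp(-1j * omega * (n / sample_rate))
                for n, x in enumerate(samples))
    # Normalize so a unit-amplitude sine reports roughly 1.0
    return 2 * abs(total) / len(samples)

rate = 800
sig = [math.sin(2 * math.pi * 100 * n / rate) for n in range(800)]
freq_amplitude(sig, rate, 100)  # close to 1.0: the signal contains 100 Hz
freq_amplitude(sig, rate, 250)  # close to 0.0: it contains no 250 Hz
```

Doing this for every frequency of interest is exactly a (naive, O(n^2)) discrete Fourier transform, which is why the FFT exists.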
Filters are linear systems (they can be nonlinear as well, but that complicates things). They are best thought of as components that scale, add, or delay signals, combined like this. (A z^-1-box is a delay of one sample; the Z-transform of f(t-1) is equal to the Z-transform of f(t), divided by z.)
If the system is linear, such a diagram can be ‘transformed’ into a bunch of matrix multiplications (A, B, C and D are matrices):
state[t+1] = A*state[t] + B*input[t]
output[t] = C*state[t] + D*input[t]
with state[t] a vector containing the state of the delays at t.
Analyzing such a system happens as follows: take the Z-transform of the input signal (Z{x(t)} = X(z)) and of the output signal (Z{y(t)} = Y(z)). The ratio of Y and X is a (rational) function in z, the transfer function H(z); its poles correspond to the eigenvalues of A. If the poles lie inside the unit circle, the system is stable. However, if the poles are outside of the unit circle, the system is ‘unstable’: the output will grow exponentially (i.e. “explode”). If the pole is complex or negative, the output will oscillate a little (this corresponds to complex eigenvalues, and complex solutions to the characteristic equation of the linear differential equation).
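Those two update equations are easy to simulate directly. Here is a sketch for single-input, single-output filters (the matrix layout and the example filter are my own choices, not any particular library’s API):

```python
def run_filter(A, B, C, D, inputs):
    """Simulate  state[t+1] = A*state[t] + B*input[t]
                 output[t]  = C*state[t] + D*input[t]
    for single-input single-output filters (matrices as nested lists)."""
    n = len(A)
    state = [0.0] * n
    outputs = []
    for x in inputs:
        y = sum(C[0][i] * state[i] for i in range(n)) + D[0][0] * x
        state = [sum(A[r][i] * state[i] for i in range(n)) + B[r][0] * x
                 for r in range(n)]
        outputs.append(y)
    return outputs

# One-pole lowpass y[t] = 0.9*y[t-1] + 0.1*x[t]; its single pole sits
# at z = 0.9, inside the unit circle, so it is stable.
a = 0.9
step = run_filter([[a]], [[1 - a]], [[a]], [[1 - a]], [1.0] * 50)
# The step response rises smoothly towards 1.0.
```

Change a to 1.1 and the pole moves outside the unit circle: the same loop then produces an output that grows without bound, which is exactly the instability described above.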
What most often is done, though, is making filters using some given poles and zeros. Then you just need to perform the steps in reverse direction.
Finally, codecs simply use that knowledge to throw away uninteresting stuff. (Eg. data is stored in the frequency domain, and very soft sines, or sines outside the audible range are discarded. With images and video, it’s the same thing but in two dimensions.) I don’t know anything specific about them, though, so you should look up some stuff about them yourself.
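As a toy illustration of that discarding step (emphatically not how any real codec is implemented), one can take a naive DFT and zero out every frequency bin that is too quiet to matter:

```python
import cmath, math

def compress(samples, keep_threshold):
    """Toy lossy 'codec': move to the frequency domain with a naive DFT
    and discard (zero out) bins whose magnitude falls below the threshold."""
    n = len(samples)
    spectrum = [sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(samples)) for k in range(n)]
    return [c if abs(c) >= keep_threshold else 0 for c in spectrum]

rate = 64
# A loud 4 Hz sine plus a very soft 13 Hz one, one second at 64 Hz.
sig = [math.sin(2 * math.pi * 4 * i / rate) + 0.001 * math.sin(2 * math.pi * 13 * i / rate)
       for i in range(rate)]
kept = compress(sig, keep_threshold=1.0)
```

Here only the two bins of the loud 4 Hz component survive (k=4 and its mirror k=60); the soft 13 Hz component is discarded entirely, which is, very roughly, the kind of trade a lossy audio codec makes.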
Hopefully, this wasn’t too overwhelming :). I suggest reading Yehar’s DSP tutorial for the braindead to get some more information (but it doesn’t become too technical), and you can use the Audio EQ Cookbook if you want to implement some filters. [This is a personal mirror, as the original seems to be down - 509.] There’s also a copy of Think DSP lying on my HDD, but I never read it, so I don’t know if it’s any good.
Interesting post. I wanted to highlight this part where you say it’s limited for “obvious reasons.” It’s probably better to explain that, since it might not be obvious to folks trained to think transistors are free, the CPUs are doing billions of ops a second, and everything is working instantly down to nanosecond scale. “How could such machines not see and process just about everything?” I thought. What I learned studying hardware design at a high level, esp on the tools and processes, was that the digital cells appeared to be asleep a good chunk of the time. From a software guy’s view, it’s like the clock signal comes as a wave, starts lighting them up to do their thing, leaves, and then they’re doing nothing. Whereas, the analog circuits worked non-stop. If it’s a sensor, it’s like the digital circuits kept closing their eyes periodically where they’d miss stuff. The analog circuits never blinked.
After that, the ADC and DAC tutorials would explain how the system would go from continuous to discrete using the choppers or whatever. My interpretation was the digital cells were grabbing a snapshot of the electrical state as bit-based input, kind of like requesting a picture of what a fast-moving database contains. It might even change a bit between cycles. I’m still not sure about that part since I didn’t learn it hands on where I could experiment. So, they’d have to design it to work with whatever its sampling rate/size was. Also, the mixed-signal people told me they’d do some components in analog specifically to take advantage of full-speed, non-blinking, and/or low-energy operation. Especially non-blinking, though, for detecting things like electrical problems that can negatively impact the digital chips. Analog could respond faster, too. Some entire designs like control systems or at least checking systems in safety-critical stuck with analog since the components directly implemented mathematical functions well-understood in terms of signal processing. More stuff could go wrong in a complex, digital chip they’d say. Maybe they just understood the older stuff better, too.
So, that’s some of what I learned dipping my toes into this stuff. I don’t do hardware development or anything. I did find all of that really enlightening when looking at the ways hardware might fail or be subverted. That the digital stuff was an illusion built on lego-like, analog circuits was pretty mind-blowing. The analog wasn’t dead: it just got tamed into a regular, synthesizable, and manageable form that was then deployed all over the place. Many of the SoCs still had to have analog components for signal processing and/or power competitiveness, though.
You’re right, of course. On the other hand, I intended to make it a bit short (even though that didn’t work out as intended). I don’t know much about how CPUs work, though; I’m only in my first year.
I remember an exercise during maths class in what’s probably the equivalent of middle or early high school, where multiple people were measuring the sea level at certain intervals. To one, the level appeared flat; to another, it fluctuated wildly; and to a third, it fluctuated only slightly, and at a different frequency.
For the reasons you described, the ADC can’t keep up when the signal’s frequency is above half the sampling frequency (i.e. the Nyquist frequency).
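A toy numerical version of the sea-level exercise (just a sketch, assuming Python with NumPy; the 9 Hz tone and 10 Hz sampling rate are made-up numbers): sampled at 10 Hz, a 9 Hz sine produces exactly the same samples as a 1 Hz sine, so the two observers can’t tell them apart.

```python
import numpy as np

fs = 10.0                 # sampling rate: 10 samples per second -> Nyquist is 5 Hz
t = np.arange(20) / fs    # two seconds of sample times

fast = np.sin(2 * np.pi * 9.0 * t)   # a 9 Hz sine, above the Nyquist frequency
slow = -np.sin(2 * np.pi * 1.0 * t)  # a 1 Hz sine (with flipped sign)

print(np.allclose(fast, slow))  # True: the sampled values are identical
```

At the sample instants, sin(2π·9·n/10) = sin(2π·(9−10)·n/10) = −sin(2π·n/10), which is why the alias lands at 10 − 9 = 1 Hz.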
(Interestingly, this causes the Fourier transform of the signal to be ‘reflected’ at the Nyquist frequency. There’s a graph that makes this clear, but I can’t find it. Here’s a replacement I quickly hacked together using Inkscape. [Welp, the text is jumping around a little. I’m too tired to fix it.])
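That reflection is easy to see numerically, too (a sketch assuming Python with NumPy; the 80 Hz tone and 100 Hz sampling rate are arbitrary choices): a tone above the Nyquist frequency shows up mirrored below it in the FFT.

```python
import numpy as np

fs = 100                        # 100 Hz sampling rate -> Nyquist frequency is 50 Hz
t = np.arange(100) / fs         # one second of samples, so FFT bins are 1 Hz apart
x = np.sin(2 * np.pi * 80 * t)  # an 80 Hz tone, 30 Hz above Nyquist

spectrum = np.abs(np.fft.rfft(x))
peak_bin = int(np.argmax(spectrum))
print(peak_bin)  # 20 -> the tone appears 'reflected' at 100 - 80 = 20 Hz
```

The 80 Hz tone sits 30 Hz above Nyquist, so it folds down to 50 − 30 = 20 Hz.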
The “changing a bit between cycles” might happen because the conversion doesn’t happen instantaneously, so the value can change during the conversion as well. Or, when converting multiple values that should happen “instantaneously” (such as taking a picture), the last part will be converted a little bit later than the first part, which sounds analogous to screen tearing to me. Then again, I might be wrong.
P.S. I’ll take “interesting” as a compliment, I just finished my last exam when I wrote that, so I’m a little tired now. Some errors are very probably lurking in my replies.
You were trying to explain some hard concepts. I enjoy reading these summaries since I’m an outsider to these fields. I learn lots of stuff by reading and comparing explanations from both students and veterans. Yeah, it was a compliment for the effort. :)
Even though I learned about the Fourier transform at university, this video gave me a new intuition: https://www.youtube.com/watch?v=spUNpyF58BY
Thanks very much for your detailed reply :). The math doesn’t scare me; it’s just very rusty, since a lot of what I do doesn’t have much pure math in it.
I appreciate the time you put into it.
Speaking specifically of the Fourier transform: it behaves well for infinite signals and for whole numbers of periods of strictly periodic signals.
But in reality the period usually doesn’t divide the finite fragment we have (and there are also different components with different periods). If we ignore this, we effectively multiply the signal by a rectangle function (0… 1 in the interval… 0…), and the Fourier transform converts pointwise multiplication into convolution (an operation similar to blur). Having hard edges is bad, so the rectangle has a rather bad spectrum with large amplitudes pretty far from zero, and it is better to avoid convolution with that: it would mix even frequencies that are very far from each other rather strongly.
This is the reason why window functions are used: the signal is multiplied by something that goes smoothly to zero at the edges. A good window has a Fourier transform that falls very quickly as you go away from zero, but this usually requires the spectrum to have high intensity on a wide band near zero. This tradeoff means that if you want less leakage between vastly different frequencies, you need to mix similar frequencies more. It is also one of the illustrations of the reason why a long recording is needed to separate close frequencies.
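A small numerical check of that tradeoff (a sketch assuming Python with NumPy; the 100.5-cycle tone and the bin chosen for comparison are arbitrary): a tone that doesn’t complete a whole number of periods in the window leaks far less energy to distant bins with a Hann window than with the implicit rectangular one.

```python
import numpy as np

N = 1024
fs = 1024.0
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 100.5 * t)  # 100.5 periods: doesn't fit the window exactly

rect = np.abs(np.fft.rfft(x))                  # implicit rectangular window
hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # smooth Hann window

# Leakage far from the tone (bin 300, about 200 bins away), relative to each peak:
leak_rect = rect[300] / rect.max()
leak_hann = hann[300] / hann.max()
print(leak_hann < leak_rect / 100)  # True: the Hann window leaks far less out there
```

The price is visible near the tone itself: the Hann window’s main lobe is wider, so adjacent bins around bin 100 get mixed together more, which matches the “mix similar frequencies more” tradeoff above.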