Well, that looks amazing. It looks like a virus capsid.
Ugh, sorry, I will have to complain about SemVer again. I tried SemVer, but later dropped it in favor of CalVer. CalVer is used in applications aimed at regular users (take Photoshop and Visual Studio, for example), but I’ve been using it with my library, and it works nicely.
Am I on the latest version? Do I have to upgrade? Do I have the most recent security patches?
One of the reasons I dropped SemVer is that it leaves the user with no clue how old their software (application or library) is. In CalVer this is explicit. It obviously doesn’t tell whether the user is on the latest version, but they might at least recognize that the software they are using is too old.
From these three examples, you can see that semantic versioning is a very concise and structured system for versioning a release. To be fair, there are a number of further rules in addition to what I’ve covered here. However, these are the basics.
Unfortunately, SemVer doesn’t rule only over the release step; it rules over development too. If you contribute to third-party OSS, you might have received an answer like “I’m sorry, but I can’t make this change until the next major release”. By then, so many changes have already happened that your patch has a lot of conflicts with the current master branch, and even if the conflicts are resolved, your patch might not even work correctly with all the changes that happened in master (example 1 of SemVer ruling over development). I stopped contributing to some OSS projects, one of them a significant one, because I had patches waiting for a year.
In CalVer, there’s no rule here. If you find a good way to keep backward compatibility, good. If you don’t, you still have the choice of merging the patch and noting in the release notes that a break was made.
How do I know when to release 1.0.0? If your software is being used in production, it should probably already be 1.0.0. If you have a stable API on which users have come to depend, you should be 1.0.0. If you’re worrying a lot about backward compatibility, you should probably already be 1.0.0.
I haven’t seen this rule applied with consistency anywhere. Some software stays at <1.0.0 forever. Others have rules like using x.1.z for beta or alpha.
Have an open release schedule (that changes gradually)
The example used (Ubuntu) is hardly a good one for this. One table contains the past releases, and at the bottom there is the future release. A future release date is good, but it shouldn’t be prioritized (example 2 of SemVer ruling over development) over getting the job done, even if that takes longer.
Be consistent and predictable… If your release schedule releases a new major release every twelve months, do your best to stick to it.
That’s just SemVer trying to be CalVer, and failing at it, since users will present new bugs and feature requests that will require you to change your schedule.
Communicate changes regularly and transparently: what’s new, what’s been fixed, what’s been improved, what’s been deprecated.
The same rules go for CalVer. Communication is obviously a must. But it always looks awkward to me when there’s a major release (>=2.0.0) of some software with only a single backward break, while in others there are many.
I remember when Linus was questioning (on G+) whether the version of Linux should roll faster. It wasn’t a question of “is what we are currently doing correct, and does it need to change?”; it was more like “hey, what about we go faster?” And I remember he also talked about the interval between the previous major releases and that they had already reached that same interval, indicating that maybe they should bump the major version again (another example of using SemVer as CalVer, but worse).
The version numbers were just numbers. The rules could be broken. And that’s because Linux is a kind of software that can’t be held back by the rules of a version scheme.
Ending what sounds like a rant.
My CalVer scheme is YYYY.MM.DD.MICRO (it could use just YY for the year, too). I use the MICRO for special cases, when I need to make two releases on the same day.
I’m so glad they decided to make this open source. I wonder why a custom version of wine was needed or is this just a configuration wrapper on wine to make it work easier?
Both, it seems. Proton (https://github.com/ValveSoftware/Proton) seems to be a configuration wrapper + wine, but the wine version is their own (https://github.com/ValveSoftware/wine).
It’s funny, because I googled “why women pants no pocket” to figure out why this is the case and the first result says that it’s because men who dominate the fashion industry don’t want women to have pockets.
Why don’t all the women who want pockets get together, start a company to make highly-pocketed pants, tap this unmet demand and make billions? You know, scratch your own itch and all.
Since when is it that easy to start a company? And not just any kind of company, a mass-production factory.
That’s a very immaterial look at the subject.
Our status quo is patriarchal. On top of an 8-hour job, which pays less compared with men’s, women are the ones who do the housekeeping and child care. And now women should simply start a factory to make these billions.
Oh come on. There are plenty of women who own and operate their own businesses. And women are having kids later and marrying later, meaning more free time to start a company.
And you don’t have to start with a giant production. In fact, men don’t start that way either. You start small and grow.
And the wage gap has been debunked so many times that it’s absurd to even bring it up.
Oh come on. There are plenty of women who own and operate their own businesses. And women are having kids later and marrying later, meaning more free time to start a company.
And you don’t have to start with a giant production. In fact, men don’t start that way either. You start small and grow.
The logic you are using is women should sacrifice even more of their time to solve, in local scale, a global scale systemic problem.
And the wage gap has been debunked so many times that it’s absurd to even bring it up.
It’s not absurd to bring it up at all, because it’s real:
Gender gap per country, The Global Gender Gap Report 2017, World Economic Forum, page 8
The logic you are using is women should sacrifice even more of their time to solve, in local scale, a global scale systemic problem.
Well, yes. That’s how businesses on average work. This is also how local change tends to go…ye olde “make a cup of tea instead of boiling the ocean”.
The logic you are using is women should sacrifice even more of their time to solve, in local scale, a global scale systemic problem.
Are women unable to solve their own problems? Why do you need men to solve it for you? Why do you think that men have some amount of free time that women don’t? The lack of availability of a product you want is not a systemic problem. Men are not keeping you from making this product in any way.
Regardless, women can use market pressure to solve the problem, if the demand is as high as some say it is. If a small business proves the demand for these jeans, the global retailers will quickly follow suit. Seems simple enough.
It’s not absurd to bring it up at all, because it’s real
I took a look at the report. The plot on page 8 does not show a wage gap; rather, it is a chart showing the Global Gender Gap Index, an index which takes into account many more factors than income/wage. The index that you should have pointed me to was the Economic Opportunity and Participation subindex, described on page 5. From the name, we can already tell it is too broad to determine whether or not there is a wage gap for people doing the same job at the same level of experience. If we look at the definition of this subindex, we find that we are actually interested in the remuneration gap. However, the term “remuneration” and variations thereof are only mentioned in 3 places in the article, and there is no data on this gap specifically.
If we look at the page on the United States, we find a section called Wage Equality for Similar Work, which may be what we are looking for. This data comes from the WEF Executive Opinion Survey, 2017.
From the intro:
the Executive Opinion Survey is the longest-running and most extensive survey of its kind, capturing the opinions of business leaders around the world on a broad range of topics for which statistics are unreliable, outdated, or nonexistent for many countries.
So we see that this data is merely the opinion of the people who run these companies, and is not hard data (this much was admitted in the Gender Gap Index as well). If you want to be sure of this, here is a link to the survey itself, which shows that it is only a survey of opinions. Check out question 11.18.
So I don’t think this report contains proof that men and women doing the same work at the same level of experience earn a different amount of money.
While it is true that the average man earns more than the average woman globally, this is explained by many factors, some of which are listed on the wikipedia page. Here is an article that explains how these various factors affect the pay gap. There are many others like it.
It is a systemic problem, because historically women have been restricted from economically dominant positions for being women. The factors (discrimination, the motherhood penalty, and gender roles) in the Wikipedia article that you linked are examples of this systemic problem. But where I could have been clearer is that I’m not saying women can’t solve their own problems; rather, the first post (“Why don’t all the women who want pockets get together…”) has an intonation that completely disregards the responsibility of men and patriarchy in this, and puts the whole responsibility on women.
I know the index is not only about wages, but I wasn’t mentioning only wages in my previous posts. I agree with you that this study is not the proof. But it covers more countries than any other study that I’ve seen, especially in the Global South, which has a completely different reality than Global North countries, and that’s why the “if you want to change this, you should just open a company that does it the way you want” argument lacks materialism.
My two cents regarding my personal projects…
When I was working on MATHC 1.x.x, I was using SemVer, which has the concept of not breaking backward compatibility within a major version, allowing new features in each minor version (x.MINOR.x), and reserving the patch version (x.x.PATCH) for fixes.
The SemVer process is very inorganic, and development was often affected by SemVer rules. This was the same thing I noticed when contributing to other people’s OSS projects. You have a critical design flaw and just made a patch that will fix it? You will have to wait until the next major version, because it breaks backward compatibility.
When I released MATHC 2, I switched to CalVer, and the scheme was YYYY.MM.DD.MICRO. The version was actually MATHC 2018.08.02.0. It’s a pretty straightforward scheme, as the version is just the date. I added the MICRO for the reserved case where I have to publish two versions on the same day, e.g. if I notice a bug right after the previous release. I always try to keep backward compatibility, but the development of my project is not constrained by it. If I have to break backward compatibility, then I will break it, and I will mention it in the release notes. The versioning scheme doesn’t rule over the development process; it’s just a reflection of it. Very simple.
Regarding testing, I had unit tests during the first major version, but they were a lot of work for one person to take care of, and they were hardly ever useful. I removed the unit tests in the newer versions and rely on user feedback to identify hidden bugs. The good thing about personal OSS projects is that you can easily make changes to prioritize your mental health over the project :)
We use CalVer at work as well, and if for nothing else, just not having discussions about what is a major, a minor, or a patch release is SUCH a good thing.
I know, right? I wish other projects followed this scheme. After some time using CalVer, I’m starting to believe that SemVer is (to some extent) holding back the quality of software.
I mean, IF we could automatically determine whether a release is major, minor or patch by checking the public APIs, then that would be great, but there’s no way to do it reliably in most languages. Without this kind of tooling, it might make sense to use semver for libraries, but for systems that are used by humans, not really.
An extreme example of unportable C is the book Mastering C Pointers: Tools for Programming Power, which was castigated recently. To be fair, that book has other flaws beyond being in a different camp, and I think that fuels some of the intensity of the passion against it.
This… rather grossly undersells how much is wrong with that book. The author didn’t understand scope, for crying out loud, and never had a grasp of how C organized memory, even in the high-level handwavy “C Abstract Machine” sense the standard is written to.
There are better examples of unportable C, such as pretty much any non-trivial C program written for MS-DOS, especially the ones which did things like manually writing to video memory to get the best graphics performance. Of course, pretty much all embedded C would fit here as well, but you’ll actually be able to get and read the source of some of those MS-DOS programs.
In so doing, the committee had to converge on a computational model that would somehow encompass all targets. This turned out to be quite difficult, because there were a lot of targets out there that would be considered strange and exotic; arithmetic is not even guaranteed to be twos complement (the alternative is ones complement), word sizes might not be a power of 2, and other things.
Another example would be saturation semantics for overflow, as opposed to wraparound. DSPs use saturation semantics, so going off the top end of the scale plateaus, instead of causing a weird jagged waveform.
As for the rest, it’s a hard problem. Selectively turning off optimization for specific functions would be useful for some codebases, but aggressive optimization isn’t the only problem here: Optimization doesn’t cause your long type to suddenly be the wrong size to hold a pointer on some machines but not others. Annotating the code with machine-checked assumptions about type size, overflow behavior, and maybe other things would allow intelligent warnings about stupid code, but… well… try to get anyone to do it.
Re “Mastering C Pointers,” that’s fair. I included it because it’s one of the things that got me thinking about the unportable camp, but I can see how its (agreed, very serious) flaws might detract from the overall argument I’m making and that there might be a better example.
Re saturating arithmetic, well, Rust has it :)
My interpretation is that the point of C is that simple C code should lead to simple assembly code. Needing to write SaturatedArithmetic::addWithSaturation(a, b) instead of just a + b in all arithmetic DSP code would be quite annoying, and would simply lead to people using another language.
You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behaviour. The only construct in C that can hide complexity is the function call, which everyone recognises. But if you see some arithmetic, you know it’s just arithmetic.
You could say ‘oh they should add operator overloading’, but then that contravenes the first point, that simple C code (like a + b) should not hide complex behavior. The only construct in C that can hide complexity is the function call, which everyone recognizes. But if you see some arithmetic, you know it’s just arithmetic.
Not to mention that not everything can be overloaded, causing inconsistencies, and some operations in mathematics have operators other than just “+-/*”: the vector dot product “·”, for example. Even if C++ (or any other language) were extended to support more operators, those operators can’t be typed without key composition (“shortcuts”), making them almost undesirable. vec_dot() might require more typing, but it’s reachable by everyone, and operators don’t need to have hidden meanings.
Perl does have more operators than C, but all of them are operators that can be typed with simple key composition, such as [SHIFT+something]. String concatenation, for example.
My point, together with what @milesrout said, is that some operators (math operators) aren’t easy to type with just [SHIFT+something]. As a result, operator overloading in languages that offer it will always stay in an unfinished state, because it will only cover those operators that are easily composed.
Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon uk.
Herbert Schildt’s C: The Complete Reference is often touted as the worst C book ever and here.
Perhaps Mastering C Pointers is the worst in its niche (i.e., pointers) and Schildt’s is a more general worst?
Mastering C Pointers: Tools for Programming Power has several four-star reviews on Amazon uk.
So? One of the dangers of picking the wrong textbook is thinking it’s great, and using it to evaluate subsequent works in the field, without knowing it’s shit. Per hypothesis, if it’s your first book, you don’t know enough to question it, and if you think it’s teaching you things, those are the things you’ll go on to know, even if they’re the wrong things. It’s a very pernicious bootstrap problem.
In this case, the book is objectively terrible. Other books being bad doesn’t make it better.
I do agree that Schildt’s book is also terrible.
Sources are all over the place. xcb/xgb is notably poorly documented, but most of Xlib applies with slight modifications. Yet the Xlib docs aren’t necessarily practical. It took me a good while yesterday to understand visuals and how to create an ARGB window (eventually StackOverflow helped me get past a BadMatch by analyzing the X.org sources) and paint a gradient on it with direct ARGB values while knowing what I was doing. X11 also has its share of history. Right now I’m reading a random paper from 1994, http://www.rahul.net/kenton/perf.html, as I’ve been trying to understand GraphicsExpose events.
I don’t know. I really hope I can find something readable on XRender. So far I have like:
and none of that is very instructional. Though it seems to have Cairo-level capabilities.
Working on one of my personal projects, a painting software.
I’m using my own window creation framework (with Wayland, X11, and Win32 backends), but I’m not using any hardware acceleration or any widget library. Instead, I’m doing 2D software rendering, with a queue of clipping rectangles for redrawing only what is necessary, and the widgets that you see in the screenshot are just an array of “component” structures that is iterated over and rendered on the sidebar.
Currently the application uses only the main thread to render to the window, but I want to add an option to use multiple threads. Each thread will own a clipping rectangle on the window, and those per-thread rectangles will be intersected with the rectangles in the clipping queue. The result is each thread rendering only in its own space.
Screenshot with the “brush” tool sidebar, used to configure automatically generated brushes:
I also run :)
Sadly it seems to happen if you write your own functions too, so it possibly happens in any case where the compiler thinks that the buffer won’t be used anymore.
I see that the article provides a solution at the end. But it seems to me a warning should be emitted by the compiler, instead of having to adapt all functions to prevent this from happening. Or you should be able to disable this specific optimization.
Yeah, after some reading it seems to be due to dead store optimization.
If the memory is not accessed after you store something there (e.g., with memset), the compiler will reorder the instructions and the dead store is removed.
There was some interesting discussion on GCC’s bugzilla about this. Is that a bug or not? I think it is.
After some Googling I found the explicit_bzero function that @kristapsdz mentioned, specifically the glibc version. The trick it uses is to add asm volatile ("" ::: "memory") after the memset call. That prevents any stores prior to that point from being reordered away.
I would add that repeating yourself is, in some cases, also essential in performance-critical software, not just a matter of avoiding the wrong abstraction or an ugly architecture. I’m writing a painting application (like Photoshop, Krita, or GIMP), and I could apply the DRY philosophy to the function that plots the brushes, but I would take a serious performance hit if I did, because the if/else abstraction would sit in the middle of a rasterization loop:
void plot(...)
{
    while (row < bottom) {
        col = left;
        while (col < right) {
            if (brush.hardness == 100) {
                /* Use simple plot */
            } else {
                /* Use plot with smoothness */
            }
            col++;
        }
        row++;
    }
}
Now imagine this with multiple parameters (hardness, roughness, density, blending, …). Instead, I copy-paste the function and rewrite it with the specific algorithm inside the nested loop.
You could also assign the drawing function to a function pointer (or lambda or whatever) outside of the loop and just call that within the loop. No branching in the loop, and no duplicated code.
But an indirect call (e.g., calling through a function pointer or a vtable) is still a “branch”: it’s just one with an unknown number of possible targets instead of two, which only adds more variables to the equation.
In order for this to be equivalent to inlining the duplicate-for-each-algorithm code, one would have to convince oneself that the indirect branch predictor on the processor is going to reliably guess the CALL and RET targets, that the calling convention doesn’t spill more registers than the inlined execution would (ideally it’s a leaf function so the compiler can elide call prologue/epilogues), and that the processor’s speculative execution system doesn’t have its memory dependency information invalidated by the presence of the call.
Caveat: the above might be less true if you’re programming in a managed runtime. If that function call can be inlined by the JIT compiler at runtime (many high-performance runtimes are very aggressive about function inlining, so it’s not an unrealistic thing to expect), then hopefully the above issues would be lessened.
If you get a chance, there’s a chapter in Beautiful Code about runtime code generation for image processing that, IIRC, uses stenciling and plotting as a running example; you might find it relevant to your interests.
Takes several lines of code and a lot more brainpower in many programming languages.
I completely agree with this. I often build pipelines to process metagenomic data, which means a lot of data with a lot of intermediate steps, all to reduce the raw data into a simple table that can be easily understood by ecologists. At first, I used to write my pipelines in Lua (great language). The script used to be ~1500 lines. Pretty small by programming standards, but too big for a pipeline that consisted of around 7-10 steps. Later I translated everything that could be translated from Lua to shell script, and I ended up with a shell script of <100 lines and a Lua script of <200 lines (the part that I couldn’t translate to shell script).
Operations like iterating over the files inside a folder are much easier in shell script, and that was something I had to do again and again at every step of that pipeline. That’s why so many lines were cut.
I don’t think shell script is very intuitive, so I always keep mine small, executing existing tools or my Lua scripts. If working with shell, I think this hybrid approach prevents you from falling into this “trap”.
Bad idea, it should error or give NaN.
1/0 = 0 is mathematically sound
It’s not mathematically sound.
a/b = c should be equivalent to a = c*b
this fails with 1/0 = 0 because 1 is not equal to 0*0.
Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) any function of x at all. All the field axioms still hold because they all have preconditions that ensure you never look at the result of division by zero.
There is a subtlety because some people say (X) and others say (Y)
(X) a/b = c should be equivalent to a = c*b when the LHS is well defined
(Y) a/b = c should be equivalent to a = c*b when b is nonzero
If you have (X) definition in mind it becomes unsound, if you are more formal and use definition (Y) then it stays sound.
It seems like a very bad idea to make division well defined but have the expected algebra rules not apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes the algebra work.
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.
This is explicitly addressed in the post. Do you have any objections to the definition given in the post?
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values
That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate to those.
Those truthy/falsey values are often a source of errors.
I may be biased in my dislike of this “feature”, because I cannot recall a time when 1/0 = 0 would have been useful in my work, but I have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.
It will not help programmers to have their programs continue on unaware of a mistake, working on with corrupt values.
I wonder if someone making a linear algebra library for Pony has already faced this. There are many operations that might divide by zero, and you will want to let the user know when they divided by zero.
It’s easy for a Pony user to create their own integer division operation that is partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error on integer overflow or underflow. Those will be +?, /?, *?, -?.
https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff
Sounds cool, so just drawing lines right to a buffer yourself? I have a project in mind to draw right to the buffer as well, but without even using SDL; we’ll see how that goes.
Yeah. I contemplated also just using OpenGL. Would you use a graphics library, or would you skip even that?
No library, just writing to whatever buffer the system gives me (using Handmade Hero for some inspiration/reference, just to see if I can, and to keep it simple). But I may move to OpenGL once I get other things working, so I can use the graphics hardware.
Honestly when it comes to C I think going with a library is your best bet. Even SDL can be a bit bulky when it comes to game development. I’ve found that Allegro works great for game dev in C.
I would say don’t use hardware acceleration; use only software rendering. First, you are doing Asteroids, so it’s 2D and can be rendered at a small resolution, so rendering will be fast enough. Second, you can create many weird effects with software rendering that would be a bit harder with hardware acceleration. And finally, software rendering is fun.
OSS stuff. Working on my math library, then I will try to work a little more on my window creation framework before releasing it.
A much better example, IMO, are these Markov-generated Tumblr posts, trained on Puppet documentation and a collection of H.P. Lovecraft stories.
Poetic.
And this one looks like one of those quotes that become historical, but almost no one who uses them knows what they mean:
I like King James Programming. Example: Exercise 3.63 addresses why we want a local variable rather than a simple map as in the days of Herod the king
Truth.