Nothing fancy here. But I am planning to create a blog where I will write small tutorial-like intros to domain-specific languages. I plan to publish every Sunday, or maybe another day; I haven't decided yet. For now I have created a GitHub repo and listed out around 20 DSLs so I don't stop after a few posts. I am currently writing about Haml.
Guido has stayed away from the discussion. I think that is good, because if Guido gives advice and something goes wrong with that advice, people may blame Guido!
A few weeks ago I created a script for my personal static site generator[0]. Now I am planning to write complete documentation[1] on how I built it.
[0]: https://github.com/chauhankiran/bajana
[1]: https://dev.to/chauhankiran/developing-ssg-using-node-56c2
They are removing bars from the screen one at a time. They removed the title bar (I think that's okay), then they removed the application menu bar (which is what this article discusses). Next they will remove the system menu bar when an application launches, and they are also making the application status bar (normally displayed at the bottom of the application window) optional. But for what? To give more space to the main window, or to remove distractions so users of the system can stay focused (to put it "professionally")? But we already have F11!
Does this mean Rust emits a lot more assembly code than C, in exchange for the features it provides on the developer side?
Please consider my thought above a humble question. I haven't written a single program in Rust yet, but I have heard lots of good words about it from the community, and maybe in the future I will give it a try.
But I found that while FF is fast now, it takes a huge amount of memory (> 750 MB while running with a couple of tabs) and CPU (54% of the total) on my Pentium system. My thought was that Rust provides a good abstraction, making it easier to write systems code, but emits a lot of code that makes the result huge and processor-hungry. Don't consider me negative; I may be wrong, I'm just asking so I can explore further.
There’s still so little Rust in Firefox compared to the whole codebase that that shouldn’t be the sole issue with something like this.
In general, it should be roughly the same as C or C++, not significantly more. Sometimes it's less!
All the points mentioned in the post also apply to C, except the latest language standard revision. And even there, C has C11.
Why am I pointing out C? Because I am still not a fan of C++ syntax.
I think it is a stretch to say C is in active development. It is at best in maintenance mode.
C++ is in active development.
It looks like C is on track to possibly get a new published standard around 2021/2022. It also seems to me that C has always been a significantly simpler language than C++. Where C++ is getting everything and the kitchen sink, making an already complex language even more complex, C has less to change and therefore changes less frequently.
One barrier here is that Microsoft has seemingly decided to stop working on C compatibility with MSVC; it doesn’t even fully support C99 yet, let alone C11. A new standard doesn’t matter much if one of the largest platforms in the world won’t support it.
A new standard doesn’t matter much if one of the largest platforms in the world won’t support it.
These days I would not be much surprised if Microsoft replaced MSVC with Clang or even GCC.
Why? My impression is that the MSVC compiler is quite good. I only use the linker daily, not the compiler itself, but especially recently, I’ve only heard good things. Very different than ten or even five years ago.
Why?
A project manager could make their numbers look better on the compiler side by using fewer programmers and moving at higher velocity. The reason: Clang or GCC would be doing most of that work, with MSVC as a front end for them.
I’m sorry, I’m finding this reply really hard to parse.
Are you saying, people will move compilers because they want to use the new standard, which brings benefits?
And what’s this about MSVC being a front-end for Clang?
You asked why Microsoft would ditch their proprietary compiler that they pay to maintain in favor of a possibly-better one others maintain and improve. I offered cost cutting or possibly-better aspects as reasons that a Microsoft manager might cite for a move. Business reasons for doing something should always be considered if wondering what business people might do.
As far as the front-end part, that was just speculation about how they might keep some MSVC properties if they swapped it out for a different compiler. I've been off MSVC for a long time, but I'd imagine there are features in there that their code might rely on which GCC or Clang might not have. If so, they can front-end that stuff into whatever other compiler can handle it. If not, and the code is purely portable, then they don't need a front end at all.
I am writing a simple book on SQLite - funSQL.
This is a good step. But personally I don't agree with how most languages nowadays lack backward compatibility.
I respectfully disagree. I think as advancements are made in languages it’s only natural that you’re going to reach a point where additions or changes will force incompatibilities. It’s a natural, albeit sometimes painful, part of progress.
What are these languages without backward compatibility? From what I can tell, Go and Rust both seem to maintain backward compatibility pretty well.
Frankly, Python's backwards compatibility isn't bad in my opinion. Outside of the Python 2 and 3 differences, there isn't really much to complain about.
I’m mostly annoyed that one interpreter can’t handle both 2 and 3 code. The changes are small enough this seems totally reasonable.
In terms of syntax I might agree with you, but under the hood it changed enough that it's acceptable.
Go and Rust are both very young, and neither has even had a major version increase yet. Combined they have a much smaller installed base than Python and therefore fewer people driving new changes. They’re also tightly controlled by corporations who are likely to take a conservative stance on compatibility.
Most older languages have had backward compatibility issues. C++, for example, has added keywords, deprecated and removed auto_ptr, made changes to how lambdas behave, etc. Ada made major changes between Ada83, 95, and 2005, which are mostly compatible, but incompatible in some corner cases.
Nobody likes breaking compatibility, but refusing to do so implies the language is perfect or that the users must live with mistakes forever.
See Vala (programming language) on Wikipedia. It's a language for GNOME that's C#-like and compiles to C. I would recommend that the author ask a native speaker to proofread the book. It's quite readable, but the lack of articles is a bit distracting (I know this is hard).
Also worth noting is that it compiles to C with GObject at the center of its object system. That has its benefits, like quite easy interfacing with dynamic languages, but for me GObject is too crazy. Maybe I'm prejudiced, but it's like painfully manual C++, although more dynamic. At that point, for me it would be a better idea to just write in C++.
However, Vala hides all this, so mostly one sees the good parts.
Yes, but I don't want to throw technical details at readers first without even writing a simple hello, world program.
Yes. Hence I submitted the link over here. Someone who has good experience with Vala can take a look.
Definitely, along with the source code and GUI chapters. But currently I am stuck on the story. I mean, the story of making Pupil.
I think it comes down to this: if someone's reading your code, they're trying to fix a bug or otherwise trying to understand what it's doing. Oddly, a single, large file of spaghetti code, the antithesis of everything we as developers strive for, can often be easier to understand than finely crafted object-oriented systems. I find I would much rather trace through a single source file than sift through files and directories of the interfaces, abstract classes, and factories of the sort many architects produce nowadays. Maybe I have been in Java land for too long?
This is exactly the sentiment behind schlub. :)
Anyways, I think you hit the nail on the head: if I'm reading somebody's code, I'm probably trying to fix something.
Leaving all of the guts out semi-neatly arranged and with obvious toolmarks (say, copy and pasted blocks, little comments saying what is up if nonobvious, straightforward language constructs instead of clever library usage) makes life a lot easier.
It’s kind of like working on old cars or industrial equipment: things are larger and messier, but they’re also built with humans in mind. A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch–this is similar to how new cars are all built with heavy expectation that either robots assemble them or that parts will be thrown out as a unit instead of being repaired in situ.
You two must be incredibly skilled if you can wade through spaghetti code (at least the kind I have encountered in my admittedly meager experience) and prefer it to helper function calls. I very much prefer being able to consider a single small issue in isolation, which is what I tend to use helper functions for.
However, a middle ground does exist, namely using scoping blocks to separate out code that does a single step in a longer algorithm. It has some great advantages: it doesn’t pollute the available names in the surrounding function as badly, and if turned into an inline function can be invoked at different stages in the larger function if need be.
The best example of this I can think of is Jonathan Blow’s Jai language. It allows many incremental differences between “scope delimited block” and “full function”, including a block with arguments that can’t implicitly access variables outside of the block. It sounds like a great solution to both the difficulty of finding where a function is declared and the difficulty in thinking about an isolated task at a time.
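A rough Python approximation of that idea (my own toy example, not from Jai itself; note that Python closures can still implicitly see outer names, so this is weaker than a Jai block with explicit arguments):

```python
def process_order(order):
    # One step of the algorithm, isolated as a nested function with explicit
    # arguments. Unlike a Jai block it *could* still read outer variables,
    # but passing everything in keeps the step easy to reason about alone,
    # and it can be invoked at several points in the larger function.
    def apply_discount(subtotal, tier):
        if tier == "gold":
            return subtotal * 0.9
        return subtotal

    subtotal = sum(item["price"] * item["qty"] for item in order["items"])
    return round(apply_discount(subtotal, order["tier"]), 2)

total = process_order({"items": [{"price": 10.0, "qty": 3}], "tier": "gold"})
# total == 27.0
```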
It’s a skill that becomes easier as you do it, admittedly. When dealing with spaghetti, you only have to be as smart as the person who wrote it, which is usually not very smart :D.
As others have noted, where many fail is too much abstraction, too many layers of indirection. My all-time worst experience was going 20 method calls deep to find where the code actually did something, and that's not counting the many meaningless branches that did nothing. I actually wrote them all down on that occasion as proof of the absurdity.
The other thing that kills you when working with others' code is the functions/methods that don't do what they're named. I've personally wasted many hours debugging because, judging from its name, I skipped over the function that mutated data it shouldn't have. Pro tip: check everything.
Or you can record what lines of code are actually executed. I’ve done that for Lua to see what the code was doing (and using the results to guide some optimizations).
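The same trick works in Python via the standard `sys.settrace` hook. A minimal sketch (the `trace_lines` helper is my own name, and this only approximates what a real coverage tool does):

```python
import sys

def trace_lines(func, *args):
    """Run func, recording which of its source lines actually execute."""
    executed = set()

    def tracer(frame, event, arg):
        # Record line events only for the function under observation.
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    return result, sorted(executed)

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

result, lines = trace_lines(classify, 5)
# `lines` holds the offsets (relative to the def line) that ran,
# showing the "negative" branch was skipped for this input.
```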
Well, I wouldn’t say “incredibly skilled” so much as “stubborn and simple-minded”–at least in my case.
When doing debugging, it’s easiest to step through iterative changes in program state, right? Like, at the end of the day, there is no substitute for single-stepping through program logic and watching the state of memory. That will always get you the ground truth, regardless of assumptions (barring certain weird caching bugs, other weird stuff…).
Helper functions tend to obscure overall code flow since their point is abstraction. For organizing code, for extending things, abstraction is great. But the computer is just advancing a program counter, fiddling with memory or stack, and comparing and branching. When debugging (instead of developing), you need to mimic the computer and step through exactly what it’s doing, and so abstraction is actually a hindrance.
Additionally, people tend to do things like reuse abstractions across unrelated modules (say, for formatting a price or something), and while that is very handy, it does mean that a "fix" in one place can suddenly start breaking things elsewhere, or instrumentation (ye olde printf debugging) can end up with a bunch of extra noise. One of the first things you see people do for fixes in the wild is duplicate the shared utility function, append a hacky 2 or Fixed or Ex to the function name, and patch and use the new version in the code they're fixing!
I do agree with you generally, and I don’t mean to imply we should compile everything into one gigantic source file (screw you, JS concatenators!).
I find debugging much easier with short functions than stepping through imperative code. If each function is just 3 lines that make sense in the domain, I can step through those and see which is returning the wrong value, and then I can drop frame and step into that function and repeat, and find the problem really quickly - the function decomposition I already have in my program is effectively doing my bisection for me. Longer functions make that workflow slower, and programming styles that break “drop frame” by modifying some hidden state mean I have to fall back to something much slower.
I absolutely agree with you that when debugging, it boils down to looking and seeing, step by step, what the problem is. I also wasn’t under the impression that you think that helper functions are unnecessary in every case, don’t worry.
However, when debugging, I still prefer helper functions. I think it’s that the name of the function will help me figure out what that code block is supposed to be doing, and then a fix should be more obvious because of that. It also allows narrowing down of an error into a smaller space; if your call to this helper doesn’t give you the right return, then the problem is in the helper, and you just reduced the possible amount of code that could be interacting to create the error; rinse and repeat until you get to the level that the actual problematic code is at.
Sure, a layer of indirection may kick you out of the current context of that function call and perhaps out of the relevant interacting section of the code, but being able to narrow down a problem into “this section of code that is pretty much isolated and is supposed to be performing something, but it’s not” helps me enormously to figure out issues. Of course, this only works if the helper functions are extremely granular, focused, and well named, all of which is infamously difficult to get right. C’est la vie.
Anyways, you can do that with a comment and a block to limit scope, which is why I think that Blow’s idea about adding more scoping features is a brilliant one.
On an unrelated note, the bug fixes where a particular entity is just copied and then a version number or what have you is appended hits way too close to home. I have to deal with that constantly. However, I am struggling to think of a situation where just patching the helper isn’t the correct thing to do. If a function is supposed to do something, and it’s not, why make a copy and fix it there? That makes no sense to me.
It’s a balance. At work, there’s a codebase where the main loop is already five function calls deep, and the actual guts, the code that does the actual work, is another ten function calls deep (and this isn’t Java! It’s C!). I’m serious. The developer loves to hide the implementation of the program from itself (“I’m not distracted by extraneous detail! My code is crystal clear!”). It makes it so much fun to figure out what happens exactly where.
A lot of code nowadays (looking at you, Haskell, Rust, and most of the trendy JS frontend stuff that’s in vogue) basically assumes you have a lot of tooling handy, and that you’d never deign to do something as simple as adding a quick patch
I do quick patches in Haskell all the time.
I'll add that one of the motivations for improved structure (e.g. functional programming) is to make it easier to do those patches, especially anything bringing extra modularity or isolation of side effects.
I think it’s a case of OO in theory and OO as dogma. I’ve worked in fairly object oriented codebases where the class structure really was useful in understanding the code, classes had the responsibilities their names implied and those responsibilities pertained to the problem the total system was trying to solve (i.e. no abstract bean factories, no business or OSS effort has ever had a fundamental need for bean factories).
But of course the opposite scenario has been far more common in my experience, endless hierarchies of helpers, factories, delegates, and strategies, pretty much anything and everything to sweep the actual business logic of the program into some remote corner of the code base, wholly detached from its actual application in the system.
I’ve seen bad code with too many small functions and bad code with god functions. I agree that conventional wisdom (especially in the Java community) pushes people towards too many small functions at this point. By the way, John Carmack discusses this in an old email about functional programming stuff.
Another thought: tooling can affect style preferences. When I was doing a lot of Python, I noticed that I could sometimes tell whether someone used IntelliJ (an IDE) or a bare-bones text editor based on how they structured their code. IDE people tended (not an iron law by any means) towards more, smaller files, which I hypothesized was a result of being able to go to a definition more easily. Vim/Emacs people tended instead to lump things into a single file, probably because both editors make scrolling to lines so easy. Relating this back to Java: since nearly everyone in Java land (with a few exceptions) uses a heavyweight IDE (and Java requires one class per file), there's a bias towards smaller files.
Yes, vim also makes it easy to look at different parts of the same buffer at the same time, which makes big files comfortable to use. And vice versa, many small files are manageable, but more cumbersome in vim.
I miss the functionality of looking at different parts of the same file in many IDEs.
Sometimes we break things apart to make them interchangeable, which can make the parts easier to reason about, but can make their role in the whole harder to grok, depending on what methods are used to wire them back together. The more magic in the re-assembly, the harder it will be to understand by looking at application source alone. Tooling can help make up for disconnects foisted on us in the name of flexibility or unit testing.
Sometimes we break things apart simply to name / document individual chunks of code, either because of their position in a longer ordered sequence of steps, or because they deal with a specific sub-set of domain or platform concerns. These breaks are really in response to the limitations of storing source in 1-dimensional strings with (at best) a single hierarchy of files as the organising principle. Ideally we would be able to view units of code in a collection either by their area-of-interest in the business domain (say, customer orders) or platform domain (database serialisation). But with a single hierarchy, and no first-class implementation of tagging or the like, we’re forced to choose one.
Storing our code in files is a vestige of the 20th century. There’s no good reason that code needs to be organized into text files in directories. What we need is a uniform API for exploring the code. Files in a directory hierarchy is merely one possible way to do this. It happens to be a very familiar and widespread one but by no means the only viable one. Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway. We could just store that on disk as a single structured binary file with a library for reading and modifying it.
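Python's standard library already demonstrates the whole pipeline: parse text into an AST, store the tree as a structured binary artifact, and compile it back without ever regenerating source. A small sketch (using `pickle` purely as a stand-in for whatever structured on-disk format such a system would define):

```python
import ast
import pickle

# Parse source text into an AST -- the step every compiler front end does.
source = "def add(a, b):\n    return a + b\n"
tree = ast.parse(source)

# Store the tree as a structured binary blob instead of text...
blob = pickle.dumps(tree)

# ...then load it back and compile it directly, never touching text again.
restored = pickle.loads(blob)
namespace = {}
exec(compile(restored, "<stored-ast>", "exec"), namespace)
print(namespace["add"](2, 3))  # → 5
```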
Yes! There are so many more ways of analysis and presentation possible without the shackles of text files. To give a very simple example, I’d love to be able to substitute function calls with their bodies when looking at a given function - then repeat for the next level if it wasn’t enough etc. Or see the bodies of all the functions which call a given function in a single view, on demand, without jumping between files. Or even just reorder the set of functions I’m looking at. I haven’t encountered any tools that would let me do it.
Some things are possible to implement on top of text files, but I’m pretty sure it’s only a subset, and the implementation is needlessly complicated.
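At least the "show me every function that calls X" view can be prototyped on top of Python's `ast` module. A toy sketch (`callers_of` is my own name; it only handles top-level functions and plain-name calls):

```python
import ast

def callers_of(module_source, target):
    """Return the source of every top-level function that calls `target`."""
    tree = ast.parse(module_source)
    sources = []
    for node in tree.body:
        if not isinstance(node, ast.FunctionDef):
            continue
        # Walk the whole function body looking for plain calls to `target`.
        calls_target = any(
            isinstance(n, ast.Call)
            and isinstance(n.func, ast.Name)
            and n.func.id == target
            for n in ast.walk(node)
        )
        if calls_target:
            sources.append(ast.get_source_segment(module_source, node))
    return sources

module = """\
def helper():
    return 1

def uses_helper():
    return helper() + 1

def unrelated():
    return 2
"""
views = callers_of(module, "helper")
# `views` contains only the body of uses_helper, gathered into one view.
```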
IIRC, the s-expression style that Lisp is written in was originally meant to be the AST-like form used internally. The original plan was to build a more sugared syntax over it, but people got used to writing the s-expressions directly.
Exactly this: some binary representation would presumably be the AST in some form, which Lisp s-expressions are, serialized/deserialized to text.
It happens to be a very familiar and widespread one but by no means the only viable one.
XML editors come to mind, providing a tree view of the data as one possible alternative editor. I personally would not call that viable, and certainly not desirable. Perhaps you have in mind other graphical programming environments; I haven't found any (that I've tried) to be usable for real work. Maybe you have something specific in mind? Excel?
Compilers generally just parse all those text files into a single Abstract Syntax Tree anyway
The resulting parse can depend on the environment in many languages. For example the C preprocessor can generate vastly different code depending on how system variables are defined. This is desirable behavior for os/system level programs. The point here is that in at least this case the source actually encodes several different programs or versions of programs, not just one.
My experience with the notion that text is somehow not desirable for programs is colored by using visual environments like Alice, or trying to coerce GUI builders into the layout I want. Text really is easier than fighting arbitrary tools. Plus, any non-text representation would have to solve diffing and merging for version control, and tree diffing is a much harder problem than diffing text.
People who decry text would have much more credibility with me if they addressed these types of issues.
That's literally true! I am working with some old code and things are really easy. There are lots of files, but they are all divided up in such an easy way.
On the other hand, in a new project that is divided into lots of tiers with strict guidelines, it becomes hard for me to just find the line where a bug occurs.
My suggestion would be a chapter on how to write a useful, robust Makefile (or CMake project, whatever) to build a Vala project. I haven't written Vala in a while, but I remember the build process for multi-file projects being a major pain point when I was writing it.
Agreed with @steveno. elementary has done good work on developer docs. Regarding Meson, elementary is also looking forward to working with it.
A fairly robust CMake implementation exists. I’m currently using it. The Elementary project has put a good amount of effort into it.
With all that said, though, I would instead suggest tutorials on using Meson with Vala. That’s the direction the Gnome project is headed.
I’ve been using Vala for a long time now. See my pet project. I’ve seen several projects like this start, then fail. Here are my suggestions:
For now I do not have any economic plan for it. Here is the GitHub repo for this book. You (and any interested people) are welcome to contribute, although the contribution guidelines remain to be written.
Sure. I will keep your point #2 in mind.
I highly suggest that you start a mailing list to announce changes to this book, since what we have here is just an outline, and you’re going to want a way to refresh people on changes.
I think the simplest solution in terms of time invested per output would probably be Mailchimp, but I'm no expert on the matter.
[Comment removed by author]
Thanks. I have tried a couple of times before; see this repo and the old folder. But previously I hadn't created an outline for the later chapters. Now I think I have.
In the past year, I really realised how much time I have wasted on my phone without any meaningful work.
With that in mind, I started to spend less time on my phone by taking the following actions:
Is there any place where I can find the story (or blog post) of an author sharing their view on why they created a Linux distro? I am really interested in reading these stories about why we have so many distros.