I’ve been playing around with Stimulus. It is really cool.
The main difference between Stimulus and other JS frameworks is that Stimulus aims to manipulate existing DOM elements rather than adding new ones. Combined with Turbolinks, it can be very powerful.
I think a lot of this boils down to finding the best way to respond to these situations once an app becomes “legacy” by the article’s guidelines.
If this is a production, revenue-generating application, then there are many more variables at play that could affect the overall bottom line. Both a rewrite and bringing the code up to current standards (adding missing tests, maintaining the application, or putting best practices in place) will cost money.
With a rewrite, you may release the software as a new product and sunset the legacy one. If you do not have an automatic migration plan in place, you are essentially giving end users an opportunity to shop around with competitors, and you may lose recurring revenue.
With maintaining the application, some bad practices may still exist and remain a continual source of technical debt. Depending on how bad it is, this could lower morale, which in turn can lead to sloppy code from devs who no longer care and just want to get the job done.
It all depends on the app’s current state and what the best route for the company and its clients will be. A rewrite may be the right path in one case, while maintaining the older app may be right in another.
I’m actually pretty conflicted about this, because if a tool is justified and needed (passes all checks from security/devops/IT/etc.) and is installed on a system, how often do you update? Whenever a new version is released? After a week or a month? Never? Only when told to by IT or whomever?
But then what about personal computers, where you do not have that sort of safeguard? People always say to update to the latest version of software to stay secure and get the bug fixes. In this case, however, updating immediately could have been bad: a lot of damage could have been done in that short window.
On the bright side of things, at least you had 17k auth tokens to lose. :)
I agree for the most part. However, long method names have a similar smell. I think that `create_task` is very readable, and the use of keyword arguments helps reduce confusion in the relevant parts of the code; e.g., `create_task("Something", send_email: false)` is clear and concise. The `create_task` method could be an entry point that, based on the flags, calls the other (now private) methods mentioned later to keep the logic clean and readable.
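A minimal sketch of that entry-point idea (the class, the hash-based stand-in for persistence, and the mailer stub are all hypothetical, not from the original discussion):

```ruby
# Hypothetical sketch: create_task as a thin entry point that dispatches
# to private helpers based on keyword flags. Persistence is stubbed out.
class TaskService
  def create_task(title, send_email: true)
    task = { title: title, emailed: false } # stand-in for Task.create
    task = notify_by_email(task) if send_email
    task
  end

  private

  # Stand-in for a mailer call; here it just marks the task as emailed.
  def notify_by_email(task)
    task.merge(emailed: true)
  end
end

TaskService.new.create_task("Something", send_email: false)
# => { title: "Something", emailed: false }
```

Each flag maps to one private method, so the public surface stays a single readable call.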
I find this approach really interesting. Personally, from a RoR perspective, I find it the easiest to maintain and to read. It keeps the important bits of logic isolated in their own namespace and keeps the application fairly clutter-free.
First, set up a memoized helper in application_controller.rb called `permitted_params`. It will then be accessible in the controllers.
@permitted_params ||= Params::PermittedParams.new(params, current_user)
Within the controllers, I can use `permitted_params.user` instead of the raw params.
# users_controller.rb create action
@user = User.create(permitted_params.user)
I’ll then have a separate folder for all of the strong-params logic. Within that folder, I’ll create a new params class for each ActiveRecord model.
class Params::PermittedParams < Struct.new(:params, :current_user)
Since our `permitted_params` helper takes in the params and the current_user, we can use both inside the params object’s methods. I’ll access the params, first require the `[:user]` parameter to exist, and then permit attributes from a private method called `user_attributes`.
This private method returns an array. We can build out the allowed parameters here and also limit which parameters a user can write to. In this case, I would only allow the `admin` attribute to be written from user input if the current user is already an admin.
[].tap do |attributes|
  attributes << :first_name
  attributes << :last_name
  attributes << :admin if current_user.admin?
end
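Putting those pieces together, a sketch of the full params object might look like this (plain Ruby; in a real app `params` would be an `ActionController::Parameters` instance, which responds to the same `require`/`permit` calls):

```ruby
# Sketch of the params object described above. Struct gives us `params`
# and `current_user` accessors for free.
module Params
  class PermittedParams < Struct.new(:params, :current_user)
    # Require the :user key, then permit only the whitelisted attributes.
    def user
      params.require(:user).permit(*user_attributes)
    end

    private

    # Builds the allowed attribute list; :admin is writable only when the
    # current user is already an admin.
    def user_attributes
      [].tap do |attributes|
        attributes << :first_name
        attributes << :last_name
        attributes << :admin if current_user.admin?
      end
    end
  end
end
```

Adding another model means adding another public method (`permitted_params.post`, say) plus its private attribute list, so controllers never touch `require`/`permit` directly.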
So this object ends up being a sort of global registry of params logic per-model? Hmm, I have some thoughts around this, thanks for sharing!
I’ve used it in small apps as well as larger ones, and it’s kept the code fairly clean. Since it also leverages memoization, calling it multiple times in a controller/view/presenter keeps the footprint the same. I would probably have concerns if my app had thousands of AR models, but so far it’s been pretty efficient.
I’m no security expert, but somehow this seems like a bad idea. What if the user is on mobile and switches IPs? Besides, IPs can be spoofed, and for actions with side effects you don’t even need a reply.
There can absolutely be caveats to this. However, I’ve had several use cases where clients want to restrict employees from using certain functionality of the software from outside their offices. They often have a static IP address; the ones who don’t and refuse to pay for one do have issues from time to time. This can be modified to allow a range or subnet of addresses, and the ISP should be able to provide a pool. Restriction by IP address alone is typically not secure enough and should be a complement to proper authorization and authentication. Again, this is all very situational, and the use case can be limited depending on the scope of the project.
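As a sketch of the subnet variant, Ruby’s standard-library `IPAddr` can do the range check (the office CIDR below is an illustrative TEST-NET address, not a real deployment value):

```ruby
require "ipaddr"

# Illustrative office subnet; in practice this would come from the
# client's ISP-provided pool.
OFFICE_SUBNET = IPAddr.new("203.0.113.0/24")

# Returns true when the request's remote IP falls inside the subnet.
def from_office?(remote_ip)
  OFFICE_SUBNET.include?(IPAddr.new(remote_ip))
rescue IPAddr::InvalidAddressError
  false
end

from_office?("203.0.113.42")  # => true
from_office?("198.51.100.7")  # => false
```

In a Rails before_action this would gate on `request.remote_ip`, and, as noted above, only as a complement to real authentication.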
Hi and welcome to Lobsters! Would you mind engaging in some way other than promoting your Ruby thing? Like, maybe by commenting or something?
We appreciate technical content, but we appreciate actual community involvement even more.
Lobsters is not your marketing channel. :)
Thanks for the comment. I do need to get around to it. Since it is currently a free site and any and all content/hosting costs are absorbed by me personally, I didn’t see an issue. The only revenue it generates is from monetization on YouTube, which is rather tiny. Regardless, I will poke around here a bit.