Just curious, are you able to create a new Rails app yet, or is there work to be done there? And does it run reasonably well?
Yes. I was able to create a Rails app and get it up and running. I can't speak to speed since I didn't have anything complicated running in the Rails app, just a simple scaffold and such.
I gave up after a few minutes. These DTKs exist so that developers of all the apps he complains about can port them. Why review a beta OS on beta hardware and declare at this point that the situation is bad? Of course it is bad; nobody has done the work yet.
Fair enough. I originally had much higher hopes for the DTK and do understand the purpose of them. I thought that it was going to be a much different video than what it turned out to be. However, it is what it is for now. I spoke to the community about my findings and they still wanted to see the video.
Based on the demo, I would have expected Rosetta to be able to run every app as is. These are early days though, so I’m not really concerned.
I am really curious how people will manage docker images. Are they going to start building all their images to support arm & amd64? Or will people prefer to use cloud servers that run on arm? Or will this be enough to convince people to switch away from Apple Laptops for development work?
If you look at the DTK release notes, they document an issue with page-size support in the DTK hardware only, which will not exist in the final hardware, and which prevents Rosetta from running apps that expect to do memory-protection operations on 4K pages. This includes just about any app that relies on JIT compilation, which means browsers and Electron apps, of course.
They were apparently not exaggerating when they said the DTK is not representative of the final hardware.
Re: Docker, well, you could run an x86-64 Linux VM inside Qemu on an aarch64 Mac…
The WWDC architecture talk said that Rosetta on the DTK has page-size restrictions, so some uses of mmap, etc. fail.
Mono’s JIT runs under Rosetta once they recompiled with 16K page support.
It’s really interesting. So, the default included Ruby version is 2.6.3 and it is a Universal/ARM build. However, when installing Ruby via asdf, it was Intel-based. So we could probably keep developing with a version-manager Ruby install if it runs through Rosetta.
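One quick way to check which architecture a given Ruby was built for, from within Ruby itself (output values are examples, not guaranteed):

```ruby
# Print the CPU architecture this Ruby interpreter was compiled for,
# e.g. "x86_64" for an Intel build running under Rosetta, or "arm64"
# for a native Apple Silicon build.
require "rbconfig"

puts RbConfig::CONFIG["host_cpu"]
```

Running this under the system Ruby versus an asdf-installed Ruby should make the difference visible immediately.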
The whole docker issue will be up for debate. I bet we’ll be able to run x86 docker containers and things will work as if we were on an Intel processor. Things might just be a bit slower. From a consumer of a cloud service perspective, I just don’t see the benefit right now to using ARM. I’m sure we’ll get to a point where it is a viable option, but currently, it’s no cheaper than x86.
Lately, I’ve been considering going back to Windows/Linux for my development machine. Haven’t moved on this though as my current rig still has plenty of years left.
Another good one I’ve found is the “Auto-open DevTools for popups” check box in the settings. I used to work in AdTech and this is very helpful when attempting to log and debug full HTTP request cycles between client and host.
super cool. Thanks for sharing
Pretty cool! This is pretty much what I did when I had to make a nested form in Stimulus, but I didn’t use a <template> tag. Does that get hidden by default in the browser?
The browser does hide the template tag. Very handy when working with front end stuff!
ActionMailbox looks really interesting
It is. Definitely not for every use case, but when needed, it’ll be very helpful. I covered it in a screencast a few weeks ago. https://www.driftingruby.com/episodes/using-action-text-in-a-rails-5-2-application
I highly highly recommend Magnet. It’s like Spectacle, but also supports snapping when dragging windows with the mouse. And I feel like it’s easier to put windows into thirds with it for some reason.
I’ll check it out. Window snapping is really nice!
I’ve been playing around with Stimulus. It is really cool.
The main difference between Stimulus and other JS frameworks is that Stimulus aims to manipulate existing DOM elements versus adding them in. Combined with Turbolinks, it can be very powerful.
I think that a lot of it will boil down to what is the best way to react to these kinds of situations for an app that becomes “legacy” based on the guidelines of the article.
If this is a production, revenue-generating application, then there are many more variables at play that could affect the overall bottom line. Both a rewrite and bringing the code up to current standards (adding missing tests, maintaining the application, or putting best practices in place) will cost money.
With a rewrite, you may release the software as a new product and sunset the legacy one. If you do not have an automatic migration plan in place, you are basically giving end users an opportunity to shop competitors, and you may lose recurring revenue.
With maintaining the application, some bad practices may still exist and remain a continual source of technical-debt pain. Depending on how bad it is, this could lower morale, which in turn can lead to sloppy code from devs who just want to get the job done.
It all depends on the app’s current state and what the best route for the company and clients will be. A rewrite may be the right path, while maintaining the older app might be the right path.
I’m actually pretty conflicted about this because if a tool is justified and needed (passes all checks from security/devops/it/etc) and is installed on a system, how often do you update? Update whenever a new version is released? Wait a week or a month? Never update? Update only when told to update from IT or whomever?
But then what about personal computers, where you do not have some sort of safeguard? People always say to update to the latest version of software to stay secure and get the bug fixes. However, in this case, it would have been potentially bad. A lot of damage could have been done in that short amount of time.
On the bright side of things, at least you had 17k auth tokens to lose. :)
I agree for the most part. However, long method names have a similar smelliness. I think that `create_task` is very readable. The fact that keyword arguments are used helps reduce confusion in the relevant parts of the code. I.e., `create_task("Something", send_email: false)` is clear and concise. The `create_task` method could be an entry point where, based on the flags, the other later-mentioned (converted to private) methods are called to keep the logic clean and readable.
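As a rough sketch of that entry-point shape (`TaskService`, `deliver_notification`, and the hash return value are all made up for illustration; in a real app these would be a model create plus a mailer call):

```ruby
# Hypothetical sketch: create_task as a single entry point whose keyword
# flags dispatch to small private methods.
class TaskService
  def create_task(name, send_email: true)
    task = { name: name }            # stand-in for persisting a Task record
    deliver_notification(task) if send_email
    task
  end

  private

  # Stand-in for mailer logic, e.g. TaskMailer.created(task).deliver_later
  def deliver_notification(task)
    task
  end
end

TaskService.new.create_task("Something", send_email: false)
```

The public method stays short and readable, and the flag-specific behavior lives in named private methods.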
I find this approach really interesting. Personally, from a RoR perspective, I find this approach the easiest to maintain and to read. It keeps the important bits of the logic isolated into its own namespace and keeps the application fairly clutter-free.
First, set up a memoized helper called permitted_params in application_controller.rb. This will now be accessible in the controllers.
@permitted_params ||= Params::PermittedParams.new(params, current_user)
Within the controllers, I can use permitted_params.user instead of the params.
# users_controller.rb create action
@user = User.create(permitted_params.user)
I’ll then have a separate folder for all of the strong params logic. Within the folder, I’ll create a new params for each ActiveRecord model.
class Params::PermittedParams < Struct.new(:params, :current_user)
Since our permitted_params helper takes in the params and the current_user, we can use both in our user method. I’ll access the params, first require the [:user] parameter to exist, and then permit from a private method called user_attributes.
This private method will return an array. We can build out the allowed parameters here as well as having access to limit what parameters a user can write to. In this case, I would only allow the admin attribute to be written to by user input if the current user is already an admin.
[].tap do |attributes|
  attributes << :first_name
  attributes << :last_name
  attributes << :admin if current_user.admin?
end
So this object ends up being a sort of global registry of params logic per-model? Hmm, I have some thoughts around this, thanks for sharing!
I’ve used it in small apps as well as larger ones and it’s kept the code fairly clean. Since it’s also leveraging memoization, having it called multiple times in a controller/view/presenter keeps the footprint the same. I would probably have a concern if my app had thousands of AR models, but so far it’s been pretty efficient.
I’m no security expert, but somehow this seems like a bad idea. What if the user is on mobile and switches IPs? Besides, an IP can be spoofed. For actions with side effects, you don’t even need a reply.
There can absolutely be caveats to this. However, I’ve had several use cases where clients want to restrict employees from using certain functionality of the software from outside of their offices. They often have a static IP address, but the ones who don’t and refuse to pay for it do have issues from time to time. This can be modified to allow a range or subnet of addresses, and the ISP should be able to provide a pool. Restriction by IP address alone is typically not secure enough and should be a complement to proper authorization and authentication. Again, this is all very situational, and the use case can be limited depending on the scope of the project.
Hi and welcome to Lobsters! Would you mind engaging in some way other than promoting your Ruby thing? Like, maybe by commenting or something?
We appreciate technical content, but we appreciate actual community involvement even more.
Lobsters is not your marketing channel. :)
Thanks for the comment. I do need to get around to it. Since it is currently a free site and any and all content/hosting costs are absorbed by me personally, I didn’t see an issue. The only revenue it generates is from monetization on YouTube, which is rather tiny. Regardless, I will poke around here a bit.