Change File/Folder Ownership to a Linux User on a Different Linux Machine

CrazyWes

Hey there,

Apologies if this is not the correct place to post this issue; I tried to find the correct section, but I did not see anywhere you are supposed to post this type of question. If it's not okay to post this here but you know where I was supposed to post it, let me know and I'll recreate the post in the correct place. Forgive me, as I am new to this forum as well as new to Linux.

So before I dive into what I need to do, I will tell the story that led to this situation.
We currently have 3 Linux machines running Debian, and we are using Portainer.io in a Docker swarm configuration. A while back, the NAS that was hosting the database died, we were not able to recover the data, and we had to restore from the most recent backup. However, our backup appliance was having an issue with restoring the files, so we had to restore them manually using Linux commands instead of the GUI of our backup appliance (the backup appliance is also a Linux machine). After restoring the backup, I then had to copy/paste the files to our new NAS using Windows File Explorer. When I did this, it somehow messed up a whackload of permissions/ownership. We fixed this for the rest of our databases, but there is still one that is broken.
The DB in question is our Wekan DB that was hosted on our NAS appliance. The service in our Portainer swarm is not currently running because the database needs to be repaired, but the repair function wants to take ownership of the files/folders using the chown command. The problem is that the files/folders are owned by a user on our NAS appliance (this NAS is also a Linux machine, a Synology), while the repair is a Docker command that needs to be run on one of the Linux machines in our Docker swarm:

"docker run -it -v /mnt/dockerdata/volumes/wekan-db:/data/db mongo:5.0 mongod --repair"

This machine doesn't have the authority to take ownership of the files/folders, as they are owned by a Linux user from a different Linux machine.

"chown: changing ownership of '/data/db/index-144--4087297069838802221.wt': Operation not permitted" this is the output after running the command. This is recursive for every file/folder in the db.
So the task I am faced with is how do I restore ownership of the files/folders on the NAS to a user that exists on our other linux machine in the docker swarm so that the Mongodb --repair command will work, and we can restore our services.
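In case it's useful, the ownership as seen from the swarm machine can be checked like this (path taken from the docker command above; nothing here changes anything):

Code:
# on the Docker swarm machine: show the numeric uid/gid that owns the DB files
ls -ln /mnt/dockerdata/volumes/wekan-db | head
stat -c '%u:%g %n' /mnt/dockerdata/volumes/wekan-db/* | head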

Any help is appreciated. I am all ears. Thanks!
 


Man, I read the title and thought "sounds easy enough". Then BAM! A great big wall of text that's way above my pay grade.

But can't you, as root, manually chown -R the files to a user on the local machine?
 
Man, I read the title and thought "sounds easy enough". Then BAM! A great big wall of text that's way above my pay grade.

But can't you, as root, manually chown -R the files to a user on the local machine?
Yes, I have the ability to chown -R on the NAS where the DB is stored and change ownership to anyone I like, so long as the user exists on that same NAS. The problem is I can't make a user from the other Linux machine the owner, because the command to repair the MongoDB has to be run on the Linux machine that is part of the Docker swarm; the mongod command is integrated into the docker command in this scenario, and that is the only place where it works.
I suppose I could try copying the DB to the other Linux machine that's part of the Docker swarm, repair the DB there, and then copy it back? But if I copy it back, isn't that just going to reproduce the same problem? I have no idea lol.
And yeah, trust me, this is above my pay grade too. That's why I am here lol
 
If you don't know the answer to the question, don't answer. We post questions for those that can give newbies the right answers. We don't want opinions, we want solutions to our situation. Thank you.
 
If the way I described my situation is confusing or unclear, just let me know and I can try to re-word it. Thanks
 
Text wall... Now I know what it's like to be on the receiving end (I text-wall).

On-topic:
1. Ownership. Just get the id of the local user and chown by id. Generally, the primary user's uid and gid are 1000.
2. Re the above, ensure group ownership is correct. Embedded *nix can be a nuisance. For example, a user's primary group might be, say, "media", so their files' ownership may look like "1000:999". Android's genius is that root user-owns everything and "everybody" group-owns files, à la "root:everybody". You may have some doc sifting to do.
3. Check what groups the "user" belongs to as well.
4. Finally, find out what exactly the permissions should be.

These specs are usually on the software maintainer's site.
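As a minimal sketch of steps 1 and 2 (assuming the DB really lives at the bind-mount path from the first post, and that the local user is 1000:1000 - check with id first, the username here is only a placeholder):

Code:
# find the local user's numeric uid/gid
id someuser
# then, as root, chown the DB tree to that numeric uid:gid
chown -R 1000:1000 /mnt/dockerdata/volumes/wekan-db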

I hope I understood the question correctly.
 
Text wall... Now I know what it's like to be on the receiving end (I text-wall).

On-topic:
1. Ownership. Just get the id of the local user and chown by id. Generally, the primary user's uid and gid are 1000.
2. Re the above, ensure group ownership is correct. Embedded *nix can be a nuisance. For example, a user's primary group might be, say, "media", so their files' ownership may look like "1000:999". Android's genius is that root user-owns everything and "everybody" group-owns files, à la "root:everybody". You may have some doc sifting to do.
3. Check what groups the "user" belongs to as well.
4. Finally, find out what exactly the permissions should be.

These specs are usually on the software maintainer's site.

I hope I understood the question correctly.
Hey Fanboi,

Steps 1, 2 and 3 I had already tried prior to making this post, and they did not work. I originally chowned by ID when I tried it.
Just to clarify further: when I ran the chown command, the user ID did not exist on the computer I was running the chown command on, but I used the same values that were on the other Linux box. And it worked and let me change ownership, despite the fact that the Linux box I was running the command on did not have a user with those IDs. But after I ran the mongod --repair command there:

Code:
docker run -it -v /mnt/dockerdata/volumes/wekan-db:/data/db mongo:5.0 mongod --repair

the result was still:

Code:
chown: changing ownership of '/data/db/index-144--4087297069838802221.wt': Operation not permitted

To clarify the two devices:
- Synology NAS (Linux - I believe it is CentOS)
- VMware VM running Debian 11 - one of the Docker swarm VMs
 
If you don't know the answer to the question, don't answer. We post questions for those that can give newbies the right answers. We don't want opinions, we want solutions to our situation. Thank you.
In Project Management we have things called Swarms, where a group gets together to solve a problem. Each participant may not have the solution, but the input given can prompt a thinking process that leads toward the solution. Just because my response did not directly lead to a solution, that doesn't make it valueless. Sometimes seemingly complex problems actually have simple solutions.

As well, I do not see where you provided valuable input toward a solution; rather you decided to spend your energy commenting on my post.

I suppose I could try copying the DB to the other Linux machine that's part of the Docker swarm, repair the DB there, and then copy it back? But if I copy it back, isn't that just going to reproduce the same problem? I have no idea lol.
I feel like you've already thought of this, but before doing any more modifications or repairs, you should consider backing up the database, and perhaps the whole container. Then, if you do have a (more) catastrophic failure, you can restore.
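Something like a plain tar snapshot of the bind-mounted DB directory would do (a rough sketch; the destination path is just an example):

Code:
# run as root on the swarm machine before any repair attempts
tar -czf /root/wekan-db-backup-$(date +%F).tar.gz -C /mnt/dockerdata/volumes wekan-db
# tar stores the current (broken) ownership in the archive; extract later with -p --same-owner to get it back exactly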
 
In Project Management we have things called Swarms, where a group gets together to solve a problem. Each participant may not have the solution, but the input given can prompt a thinking process that leads toward the solution. Just because my response did not directly lead to a solution, that doesn't make it valueless. Sometimes seemingly complex problems actually have simple solutions.

As well, I do not see where you provided valuable input toward a solution; rather you decided to spend your energy commenting on my post.


I feel like you've already thought of this, but before doing any more modifications or repairs, you should consider backing up the database, and perhaps the whole container. Then, if you do have a (more) catastrophic failure, you can restore.
The database basically already is in a state of catastrophic failure. I don't have a healthy backup I can restore from, so even if I could restore from a backup, it would be useless at this point because it still wouldn't work.
The issue definitely seems to be ownership/permissions. And like I said, if I restore from backup, it will just restore with those same broken ownership/permissions that are causing the failure in the first place. Backing up the DB at this point makes no difference, as no reads/writes have been made to the DB since.
At this point I am faced with two potential paths: 1. Fix the ownership/permissions so the MongoDB command can work and repair the database. 2. Create a brand new Wekan DB and tell our Eng team they need to start from scratch.
It is not the biggest deal in the world if we can't get it back, as Wekan is just a project management solution like a kanban board. The data hosted in there isn't totally critical to our business continuity, but it is frustrating, and a little concerning to say the least, that this same issue could somehow occur on a different DB and we wouldn't know how to fix it in the future. But then again, with us having a brand new, healthier NAS that can handle the load, it's not as likely. And because our backup appliance is functioning properly now, there would most likely never be a need again to restore a DB in a way that breaks the original permissions/ownership, as this was quite a rare scenario.
I think I am just going to try copying back and forth, and see if I can fix the permissions that way, and if that still does not work, I'll just have to let this one go and start over.
 
The database basically already is in a state of catastrophic failure. I don't have a healthy backup I can restore from, so even if I could restore from a backup, it would be useless at this point because it still wouldn't work.
The issue definitely seems to be ownership/permissions. And like I said, if I restore from backup, it will just restore with those same broken ownership/permissions that are causing the failure in the first place. Backing up the DB at this point makes no difference, as no reads/writes have been made to the DB since.
Even with your database in its current state, you should back it up. Then, if you operate on the database and end up even worse off, you can revert to the state it was in when you backed it up. It potentially allows you to try different methods to recover your data.

Often, when someone has a failing hard drive, the first step is to attempt a bit-for-bit backup before attempting any recovery. Then you can run your recovery either on the hard drive or on the backup; either way, you have a snapshot in time.
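For that hard-drive analogy, the usual tools are ddrescue or dd (device and destination names here are only placeholders):

Code:
# bit-for-bit image of a failing disk onto healthy storage
ddrescue -d -r3 /dev/sdX /mnt/backup/disk.img /mnt/backup/disk.mapfile
# or, for a drive that still reads cleanly:
dd if=/dev/sdX of=/mnt/backup/disk.img bs=1M conv=noerror,sync status=progress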
 
Even with your database in its current state, you should back it up. Then, if you operate on the database and end up even worse off, you can revert to the state it was in when you backed it up. It potentially allows you to try different methods to recover your data.

Often, when someone has a failing hard drive, the first step is to attempt a bit-for-bit backup before attempting any recovery. Then you can run your recovery either on the hard drive or on the backup; either way, you have a snapshot in time.
It is still being backed up on a schedule, so I'm all good. There have been no changes to that particular DB across the backup runs, so I have plenty of slack to play around with it.
 
The DB in question is our Wekan DB that was hosted on our NAS appliance. The service in our Portainer swarm is not currently running because the database needs to be repaired, but the repair function wants to take ownership of the files/folders using the chown command. The problem is that the files/folders are owned by a user on our NAS appliance (this NAS is also a Linux machine, a Synology), while the repair is a Docker command that needs to be run on one of the Linux machines in our Docker swarm.
Is the data from the NAS shared through NFS? Wouldn't it be an idea to create a user/group on your Linux system with the same name as the NAS user, and then add the user used to repair the database to that NAS user's group on the swarm/Linux system? That way you would technically give the repair user permissions through the NAS user's group, provided the directory location has write permission for the group.
 
Is the data from the NAS shared through NFS? Wouldn't it be an idea to create a user/group on your Linux system with the same name as the NAS user, and then add the user used to repair the database to that NAS user's group on the swarm/Linux system? That way you would technically give the repair user permissions through the NAS user's group, provided the directory location has write permission for the group.
Yes, it is shared through NFS.
I am pretty sure I have tried this already, but I could be misunderstanding what you're suggesting.
So you are saying that I should take the user that owns the files/folders on the NAS, create that same user (same name, etc.) on the other Linux machine, and then run the command as that user? Correct?
I believe I tried this already, but the other way around.
 
I am pretty sure I have tried this already, but I could be misunderstanding what you're suggesting.
Not quite. From what I understand, the location of the Wekan database is an NFS share which is owned by a user on the NAS, and that same user is not available on the Linux system with swarm/Docker. Say the user that owns the files on the NAS is called naswekan: create that same user on the Linux system with the same uid and gid, and also create the swarm user from the Linux system on the NAS, with the same uid and gid as on the Linux system. Then, on both the NAS and the Linux system, add the swarm user to the naswekan group. That way, if the permissions are correct on the Linux side, the swarm user should technically have write permission on the share by being in the naswekan group. You may also want to change the permissions of the share on the NAS side so that the group has read, write and execute permission. So in short, it would look like this as an example:
On the NAS:
1. Create user/group: username/groupname swarm
2. uid/gid: 1000 (replace with the actual uid/gid of the swarm user on the Linux system)
3. Add the swarm user to the group naswekan
4. Change the permissions of the location of the Wekan DB:
ownership: chown naswekan:naswekan -R /data/db/wekandb
permissions, to make sure the group has write access: chmod 775 -R /data/db/wekandb
On the Linux system with swarm:
1. Create user/group: username/groupname naswekan
2. uid/gid: 2000 (replace with the actual uid/gid of the naswekan user on the NAS)
3. Add the swarm user to the group naswekan: usermod -aG naswekan swarm

The end situation should be that the location of the Wekan database is owned by the user and group naswekan, and the permissions are such that this user can read and write all files in this location. Then, because this location is shared through NFS, the swarm user indirectly has read/write permission for this location, because it is a known user on both sides and is in the naswekan group on both sides.

So then the swarm user should be able to write in the wekandb location, because the permissions would look like this.
Users and groups on the NAS
Code:
user: naswekan:2000 group: naswekan:2000
user: swarm:1000 group: swarm:1000
group members for naswekan -> swarm
Permissions on the database location on the NAS (recursive)
Code:
drwxrwxr-x. 1 naswekan naswekan   0 Jul 18 21:15 wekandb
Because the NFS users/groups and uid/gid values are the same on both sides, those same permissions apply to the swarm user. The swarm user, being in the naswekan group on the Linux system with swarm, will then have write permission for the wekandb location.
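On the Debian/swarm side, creating the matching user and group from the steps above might look something like this (the names and the 2000 uid/gid are just the example values from this post, not your real ones):

Code:
# run as root on the Debian swarm machine
groupadd -g 2000 naswekan                                      # same gid as the naswekan group on the NAS
useradd -u 2000 -g naswekan -M -s /usr/sbin/nologin naswekan   # same uid as on the NAS, no home dir
usermod -aG naswekan swarm                                     # existing swarm user gains group write access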
 
The easier solution would probably be to create a swarm user on the NAS with the same username, uid and gid as on the Linux system, then change the ownership of the Wekan database location to that swarm user on the NAS; indirectly, the mounted location on the Linux swarm system would then also belong to the swarm user. What I described in my previous post does it through user and group permissions, making sure the group permissions are correct so that the swarm user can actually write in the location. Doing it this way is probably less complicated and simpler, since you aren't doing anything with groups. This is what I would have done, and I'm pretty sure it would work.
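In other words, roughly (the NAS-side path is a placeholder; on a Synology the share usually lives somewhere under /volume1):

Code:
# on the NAS, after creating a "swarm" user with the same uid/gid as on the Debian machine
chown -R swarm:swarm /volume1/dockerdata/volumes/wekan-db
# seen from the swarm machine, /mnt/dockerdata/volumes/wekan-db should then map to its local swarm user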
 
The easier solution would probably be to create a swarm user on the NAS with the same username, uid and gid as on the Linux system, then change the ownership of the Wekan database location to that swarm user on the NAS; indirectly, the mounted location on the Linux swarm system would then also belong to the swarm user. What I described in my previous post does it through user and group permissions, making sure the group permissions are correct so that the swarm user can actually write in the location. Doing it this way is probably less complicated and simpler, since you aren't doing anything with groups. This is what I would have done, and I'm pretty sure it would work.
The NAS doesn't allow me to use the useradd or adduser commands, as they don't seem to be installed. It seems to only allow me to create users in its WebUI, but I can SSH into it and use the chown command with no issues.
 
The NAS doesn't allow me to use the useradd or adduser commands, as they don't seem to be installed. It seems to only allow me to create users in its WebUI
I know, but you should be able to create a user through the GUI with a chosen username and whatever uid/gid you want to set, as well as add a user to a group. It was just an example; it was more about the steps you need to take than how to actually carry them out, because it doesn't matter whether you do it through a GUI or the CLI, as long as the users and groups are created, the swarm user is in the naswekan group on both sides, and the uids and gids match.
 
It could be that I did not do it right, but it did not work.
On the NAS:

Code:
cat /etc/passwd
naswekan:x:1030:100:
cat /etc/group
naswekan:x:65536:naswekan

On the Docker swarm Linux machine:

Code:
cat /etc/passwd
naswekan:x:1030:65536:
cat /etc/group
naswekan:x:65536:naswekan

I was successfully able to chown the database to the naswekan user/group that exists on the NAS.
Unfortunately, the user on the Docker machine was still not able to chown, even though its UID and GID are the same.
If this was supposed to work, then did I just become the world's most elite hacker? Lol
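For comparison, the uid/gid mapping can be checked on both boxes like this (the NAS-side path differs from the mount path shown here):

Code:
# on the Docker swarm machine: numeric and named owner of the share as this box sees it
id naswekan
stat -c '%u:%g %U:%G %n' /mnt/dockerdata/volumes/wekan-db
# run the same id/stat on the NAS against the share's local path to compare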
 
What does the swarm user/group look like on both the NAS and the Linux system? Also, the naswekan user was an example, to be replaced with whatever you are actually using as the NAS user on your NAS system, and likewise for the swarm user which you actually have on your Linux system.
 
What does the swarm user/group look like on both the NAS and the Linux system? Also, the naswekan user was an example, to be replaced with whatever you are actually using as the NAS user on your NAS system.
It looks exactly like I just posted above. I'll repeat it below, unless you are asking about something slightly different and I am misunderstanding your question.
"On the NAS:

cat /etc/passwd
naswekan:x:1030:100:
cat /etc/group
naswekan:x:65536:naswekan <<<< currently owns and has permission to write to the Wekan database hosted on this same Linux machine (the NAS)

On the Docker swarm Linux machine:
cat /etc/passwd
naswekan:x:1030:65536:
cat /etc/group
naswekan:x:65536:naswekan" <<< This user needs to own the Wekan DB hosted on the NAS. Once that is done, I can change ownership of the DB to something else and delete the naswekan user.
I know the naswekan user was an example. I just happened to use the example name and make a group with the same name to keep it simple. Once I can give a user from this Linux machine the ability to change ownership of the DB hosted on the other Linux machine, I can run the repair command to repair the DB from this Linux machine (the Docker swarm machine).
 
