Is make a utility for end users or for programmers/developers?

It often happened in the past that, after downloading and decompressing a compressed archive (containing the files that made up an application), the end user had to run the make command to compile it.

But shouldn't this command be aimed at programmers/developers rather than end users?

I think only the latter should use it.
What do you think?


The purpose of the make utility is to determine automatically which pieces of a large program need to be recompiled, and issue the commands to recompile them.

Shouldn't compiling and recompiling be the concern of developers only?
 


There are some packages available for specific distros. A package might be available for Ubuntu that isn't available for Arch, or another package might be available for Fedora that's not available for Slax. SuSE might have a package that's not available for Mint. So on, and so forth...

make is daunting for newbies. Old timers have no problem with it. But it gives a way to compile code for an application that isn't included in a specific distro for one reason or another. Either nobody wants to maintain the package for that distro, or it has license/copyright issues the distro doesn't want to deal with; there can be several reasons why it's not in a vendor repo.

But if you want the package badly enough, download the source (usually from GitHub) and compile it yourself.
make usually requires some kind of compiler back-end (not always) like gcc, g++, rust, go, or something similar.

With snaps, flatpaks, and AppImages, it's less and less of a problem, because you can get almost anything as a pre-compiled package nowadays.

I have seen make used for programs that don't require any compiling, e.g. Python, Perl, Bash, etc...
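
A typical from-source build goes something like this (the project name and URL here are hypothetical):

Code:
git clone https://github.com/example/someproject.git
cd someproject
./configure          # if the project ships a configure script
make                 # build, using the default target
sudo make install    # install system-wide (often under /usr/local)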
 
Make is a very handy tool, along with configure. In the old days (the 1990s) you had no choice with many programs, as they just were not compiled for many distros, so you had to install them using make. As pointed out, today there are many alternatives, and you can find most programs that are not in a distro's repositories as an AppImage, flatpak, or snap, and avoid having to make them.

But no, make was mostly for installing a package from source code when there was no alternative package in your distro. It's a very handy tool if needed. Actually, most programmers do not use make or configure, but binary packagers do. Those people give much of their time to maintain binary packages ready to install on a particular distro. Every program you install, whether big or small, had to be made at some point. We benefit today from the work of many packagers who work mostly behind the scenes and do a lot of work to keep us in programs that are easy to install.
 
Make's purpose is to build a project by compiling only those source files that have changed since the previous build, resolving dependencies along the way, according to well defined rules (with a sane set of default rules). As such, it is "for programmers".

However, its dependency resolution features can be used generically - for instance, while make myproject is the typical command to compile the target project, the common targets clean and install illustrate that it can be used for tasks other than compiling. make clean cleans up the project's source directories, removing temp files and such that are left over from a previous compilation session, but doesn't compile anything - it "makes" the clean state, as defined in the makefile. Similarly, make install "makes" the installed state as defined in the makefile. Hence, make can be used for all sorts of things, many of which are not related to programming - except in as much as writing a good makefile is kind of a programming endeavor in and of itself.
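
For instance, a minimal makefile with those conventional targets might look like this (the program and file names are hypothetical; note that recipe lines must be indented with a tab):

Code:
myproject: main.c util.c
	cc -o myproject main.c util.c

clean:
	rm -f myproject *.o

install: myproject
	install -m 755 myproject /usr/local/bin/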

Like most tools and utilities, you need to really -learn- it, even grok it, to make the best use of it beyond simple program building. I wish I could say that I'm there, as I've had the O'Reilly book within arm's reach for about thirty years, but I just don't use it often enough to really assimilate (or be assimilated by) it.

 
But if you want the package badly enough, download the source (usually from GitHub) and compile it yourself.
make usually requires some kind of compiler back-end (not always) like gcc, g++, rust, go, or something similar.
But will make continue to be used, or will it tend to become obsolete?
 
Actually, most programmers do not use make or configure, but binary packagers do.

What are binary packagers?

Every program you install, whether big or small, had to be made at some point.

????
Had to be made at some point?
????

We benefit today from the work of many packagers who work mostly behind the scenes and do a lot of work to keep us in programs that are easy to install.

????
To keep us in programs?
????
 
A packager is a person who oftentimes donates their time to produce binary packages from the source code (made by the programmers), to make a package that can be installed easily on the distro of choice: either .deb or .rpm, or some other format. Some of them work for big companies and get paid, but many work as volunteers; they do the work and you see only the finished product. I once did this for a now-dead distro. It's a lot of work, but it has its own rewards. The end product is the multitude of programs and packages that are available via your distro's repository. Today some programmers are doing AppImages, flatpaks, and snaps to avoid this step and make their work available across distros. But somewhere in the process the program still has to be made usable to the average user.
These pages outline how to do it for .deb and .rpm packages.
Those are just two examples. You can find many more by googling how to make .rpm or .deb packages.
Perhaps one of the most visible packagers I know of is Texstar the creator of PCLinuxOS.
Some distros call them package maintainers, as they keep the packages up to date and working with new libs, etc.
 
But will make continue to be used, or will it tend to become obsolete?

I suspect it'll be around for a long time yet. It's one of the oldest build utilities.

What are binary packagers?

This could be a thread all on its own. But for the most part, there are certain ways an application is "packaged".
The two most common ones are .deb and .rpm. This gets a little into the family history of Linux and which distros
spun off which other distros. There are some others I'm less familiar with; portage and pacman come to mind.

.deb packages are used by Debian spin-offs, such as Ubuntu, Mint, Parrot, and others.
.rpm packages are used by Red Hat spin-offs, such as Fedora, CentOS, AlmaLinux, and others.
There are some utilities that will let you crossover and use .deb on rpm based systems, and .rpm on deb based systems.
but it's been my experience none of them usually work very well.
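
For reference, one such crossover utility is alien, which converts between the two formats; a quick sketch (package names hypothetical):

Code:
sudo alien --to-deb somepackage.rpm    # convert an .rpm to a .deb
sudo alien --to-rpm somepackage.deb    # convert a .deb to an .rpm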

SuSE isn't really part of the fedora family, but it uses rpms as well.
Arch uses pacman. I think Gentoo still uses portage, but I haven't used that in a very long time.
Hmmm... do we have any Gentoo users on here?

I don't know the exact percentages, but I would guess the vast majority of binary packages have what are called
dependencies. Usually these are library files like libc.so, or literally dozens of others. The binary files
usually have to match the version of the library files they were compiled with. Almost all the distros have different
versions of library files, kernels, build utilities, and compilers. Every time these are updated, the package usually has to
be updated also. There are teams of dozens of people who do this, almost like a full-time job. The .deb people
have their teams, and the .rpm people have their teams. They are rarely compatible with each other.

It gets worse because the versions of kernels and libraries also matter. Just because some package is an .rpm doesn't
mean it will run on every version of Red Hat/Fedora/CentOS/Rocky, and just because some package is a .deb doesn't mean
it will run on every version of Debian/Ubuntu/Mint.

Enter snaps, AppImages, and flatpaks. Again, every distro seems to have its favorite.
But the advantage of these is that all the dependency libraries are included in the package. You don't have to
worry about which versions of which libraries you have installed. The disadvantage is that the files are quite a bit larger.
The other disadvantage is... you can have the same versions of the same libraries installed over and over again,
dozens of times.

If you wrote your own application that you wanted people to use, you would have to know all of the dependencies that it required: all the versions of all the libraries that it needed to run. (In some cases there are dependencies that require other dependencies that require other dependencies; it gets to be quite the nuisance.) In the old days before we had vendor packages, there was a lot of trial and error. Usually these were just tar.gz files (that's another subject) that you uncompressed, hopefully in the right directory, with the right permissions, running as the right user, with the right start-up files in the right locations. It was a mess. Then once you got all that done, you found you needed other packages, which required other packages, and so on. I've literally spent a day installing one package in years past.

Thankfully those days are all but gone.

Make (usually together with a configure script) helps with a lot of this. It'll go out and check what versions of what libraries you have and check to see if they're compatible with the source code. It will also let you run something like make config (or ./configure) to add paths for library files and installation directories.
It's not perfect, but it's come a long way.
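
For example, a typical configure invocation might look like this (the paths are just illustrative):

Code:
./configure --prefix=/usr/local --libdir=/usr/local/lib
make
sudo make install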
 
Ok, but let me ask another question.

How do you prevent manual (accidental) editing of compiled (dynamically generated) files?

I'll give an example

Code:
+ tree
├── makefile
└── readme-components
    ├── intro.md
    ├── body.md
    ├── external-contributions.md
    └── conclusion.md

+ cat makefile
render-readme:
    cat readme-components/* > README.md

+ make
cat readme-components/* > README.md

+ tree
├── makefile
├── readme-components
│   ├── intro.md
│   ├── body.md
│   ├── external-contributions.md
│   └── conclusion.md
└── README.md

OK, README.md created.

For those who haven't noticed yet, there's also the file readme-components/external-contributions.md.
It's useful if, for example, updated external contributions arrive (new-external-contributions.md):

Code:
+ mv new-external-contributions.md readme-components/external-contributions.md
+ make
cat readme-components/* > README.md

README.md updated!!!

But a few days later, absentmindedly

Code:
+ nano README.md

I make some changes, save and exit nano.

And suddenly I remember that README.md is a compiled file (dynamically generated by make) and shouldn't have been edited directly.
Instead, I should have edited one or more files in the readme-components directory (apart from external-contributions.md, which I already talked about) and re-run the make command.

How can you avoid these messes?
 
How do you prevent manual (accidental) editing of compiled (dynamically generated) files?
You can change the permissions so only certain accounts/users can manipulate the file.
 
You can change permissions using two modes:

Symbolic mode: this method uses the symbols u, g, and o to represent the user (owner), group, and others. Permissions are represented as r, w, and x for read, write, and execute, respectively. You can modify permissions using +, -, and =.

Absolute mode: this method represents permissions as 3-digit octal numbers ranging from 0-7.

In symbolic mode you have 3 basic user representations:
u = user (the file's owner)
g = group
o = other

You can use the operators to add, remove, and assign permissions:
+ adds a permission to a file or directory
- removes the permission
= sets the permission exactly, overriding whatever was set before
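
A few quick examples in symbolic mode (file names hypothetical):

Code:
chmod u+x script.sh     # add execute for the owner
chmod go-w notes.txt    # remove write from group and others
chmod a=r data.txt      # set everyone's permissions to exactly read-only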
Suppose I have a script and I want to make it executable for the owner of the file, zaira.


Current file permissions are as follows:

[image: ls -l output showing the current permissions of mymotd.sh]

Let's split the permissions like this:

[image: the rwx bits broken out per owner/group/other]
To add execute rights (x) for the owner (u) using symbolic mode, we can use the command below:

Code:
chmod u+x mymotd.sh

Now we can see that the execute permission has been added for owner zaira:

[image: ls -l output showing the updated permissions]

Absolute mode uses numbers to represent permissions and mathematical operators to modify them.

The below table shows how we can assign relevant permissions:

Permission    To grant
read          add 4
write         add 2
execute       add 1

Permissions can be revoked using subtraction. The below table shows how you can remove the relevant permissions:

Permission    To revoke
read          subtract 4
write         subtract 2
execute       subtract 1

Example:
Set read (add 4) for user, read (add 4) and execute (add 1) for group, and only execute (add 1) for others:

Code:
chmod 451 file-name

This is how we performed the calculation:

user:  4         = r--
group: 4 + 1 = 5 = r-x
other: 1         = --x

Note that this is the same as r--r-x--x.
Remove execute rights from other and group:
To remove execute from other and group, subtract 1 from the execute part of the last two octal digits. Starting from 451, that gives:

Code:
chmod 440 file-name

which is the same as r--r-----.

Assign read, write and execute to user, read and execute to group, and only read to others:
This would be the same as rwxr-xr--:

Code:
chmod 754 file-name



How to Change Ownership using the chown Command


Next, we will learn how to change the ownership of a file. You can change the ownership of a file or folder using the chown command. In some cases, changing ownership requires sudo permissions.


Syntax of chown:

Code:
chown user filename

How to change user ownership with chown


Let's transfer the ownership from user zaira to user news. Changing ownership here requires sudo:

Code:
sudo chown news mymotd.sh

[image: ls -l output showing news as the new owner of mymotd.sh]
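
You can also change the owning group at the same time using the user:group form, e.g.:

Code:
sudo chown news:news mymotd.sh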



 
For my purpose, what you described seems a bit excessive to me.

Anyway

Code:
$ chmod -w README.md 

$ make
cat readme-components/* > README.md
cannot create README.md: Permission denied

This also prevents writing from the make command.
Doesn't that seem like an unwanted effect?

Code:
$ chown user2 README.md 
chown: invalid user: 'user2'

So I should even create a second user (user2)?
As I already said, that seems a bit excessive to me.
 
How do you prevent manual (accidental) editing of compiled (dynamically generated) files?
Okay, I'm going to share this hack with you. Not something I should be posting here, but still:
You can use the following to prevent accidental modifications:
Don't edit those files + due diligence = safe
Or, ya know, there's setting up a local VCS, or manual backups, e.g. cp README.md README.md.bkp_2024-11-16.
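
For instance, a minimal git setup lets you regenerate or throw away an accidental edit (a sketch, assuming git is installed and the makefile from your example):

Code:
git init
git add makefile readme-components/
git commit -m "track the real sources"
# README.md is generated, so leave it untracked (or list it in .gitignore);
# after an accidental edit, just rebuild it from the tracked sources:
make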

For my purpose, what you described seems a bit excessive to me.

So I should even create a second user (user2)?
As I already said, that seems a bit excessive to me.

Nothing wrong with the user "devel" (if not present, see specs) to build stuff (ie running make), although TBH it's a cannon to kill a mosquito for your use case and the "devel" use case is more aimed at an isolated environment for building your own software and testing it -- which is kinda obsoleted by Docker now. But hey, whatever works for you. That's why Linux is different from other OSes; "They're more guidelines than actual rules." (film ref).
 
Nothing wrong with the user "devel" (if not present, see specs) to build stuff (ie running make), although TBH it's a cannon to kill a mosquito for your use case

Exactly, that seems like an exaggeration to me.

and the "devel" use case is more aimed at an isolated environment for building your own software and testing it -- which is kinda obsoleted by Docker now.

So should I use a command like the following?

Code:
$ docker run --rm -v ./README.md:README_MOUNT_POINT IMAGE make ...

Yes or no?

If yes,
  • what do you recommend I specify as README_MOUNT_POINT?
  • what do you recommend I use as IMAGE?

But hey, whatever works for you.

I don't understand you, can you explain yourself better?
 
So should I use a command like the following?
Code:
$ docker run --rm -v ./README.md:README_MOUNT_POINT IMAGE make ...
No. Definitely not. You need to read the docs: https://docs.docker.com/get-started/
What you're attempting to do would result in your file on the host getting modified within the container. This defeats the very purpose of it.

what do you recommend I specify as README_MOUNT_POINT?
I don't, see above... But for future reference for something that should be modified, "keep it canonical" is about the best advice I can give.
what do you recommend I use as IMAGE?
That's why you need to read the docs. You need an image, which you can create or download one with all the tools preinstalled (from a trusted source -- obviously).

But hey, whatever works for you.
I don't understand you, can you explain yourself better?
Literally that. Whatever method you prefer. If you like Docker, use it. If you'd prefer a VM, use that. If you'd be happy running a developer user for an isolated development environment, use that. In my case, Docker's not for me (if it had been around when I'd been in my early 20's, yeah, that would've been bloody handy) because after giving it a fair test run, it was just overkill for my current needs (I no longer code much and each project I do update is being migrated to Python if it can be, because Python runs on anything and this de-necessitates having multiple builds per arch/platform). But different needs.
 
But shouldn't this command be aimed at programmers/developers rather than end users?
No, the intention should always be that all of the following questions are answered as clearly and in as much detail as possible:
1. Are all dependencies documented?
2. The versions of the tools used (and the sources of the tools) must be given.
3. Refer to https://en.wikipedia.org/wiki/Reproducible_builds; all recommendations must be fulfilled, so that e.g. after 10 years someone could rebuild the sources with the same binary result.
4. ...

Therefore the question should be: what can we learn from industrial standards like ISO 9001, DO-178C, ... !!!
 
I don't, see above... But for future reference for something that should be modified, "keep it canonical" is about the best advice I can give.

What do you mean by "keep it canonical"?


That's why you need to read the docs. You need an image, which you can create or download one with all the tools preinstalled (from a trusted source -- obviously).
The documentation offers me several links, and much more.

Can I read the entire manual?
Where can I find the time?

You should indicate the ones most relevant to the topic we are discussing.
Where can I start?

In any case, as I think I have already said, a concrete example would already save a lot, a lot of time.

 
What do you mean by "keep it canonical"?
Keep paths and volumes consistent and provide absolute paths:
If readme.md is in /home/me/ and you're in ~ (/home/me/), don't mount ./readme; rather, mount /home/me/readme, and use the same paths in your docker image, etc. So be consistent, be absolute, be safe.
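
For example, something along these lines (the image name is hypothetical; the host path is mounted at the same path inside the container):

Code:
docker run --rm \
  -v /home/me/project:/home/me/project \
  -w /home/me/project \
  some-build-image make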

Can I read the entire manual?
Yes, you can, and generally should eventually have read the docs by the time you're comfortable. Nobody is saying read the whole thing at once. I do get where you are coming from about wanting to just dive in. I learn by doing, myself, but I still read docs retrospectively because there's invariably a best practice or great piece of functionality I missed.
If you want to fast-track, watch a youtube video. Initiative can take you to heights that you'd never have imagined. I don't know why people don't just search for tutorials, there's a tutorial for everything out there.

Where can I find the time?
If a single mom can find time to work 2 jobs and raise her kids, I'm sure you'll be able to do it... Jokes aside, try a video tutorial.
And remember: you don't need to use Docker. If the learning curve is too steep, I did make three other suggestions: a separate user, a local VCS, or just basic backup copies.

You should indicate the ones most relevant to the topic we are discussing.
Knowing the term "workshop" in IT/business is helpful, I guess. A workshop is where you learn the basic A-Z. So workshop would be a start.

In any case, as I think I have already said, a concrete example would already save a lot, a lot of time.
In this instance it's impossible to give you an example. It would be like you knowing nothing about databases or Python, and me saying "here's how to manage a MySQL database in Python 3" and providing an example that does one specific thing, like querying all records born in 1985.

In short, if learning a new technology is daunting, find solutions with what you're already familiar with. But there will always be things to learn. I learn new stuff daily. And I imagine I will never stop learning, unless I'm comatose (well, not really, I have a "living will" so when it's my time, it's my time, lol). If learning is a PITA, then you shouldn't be looking to create anything or question the world around you. You should sit staring at Netflix all day until your brain turns to jelly or the trumpets sound.

PS: If my rhetoric comes off as brash/terse/etc., I apologize; it's just my sense of humor and writing style.
 
Nothing wrong with the user "devel" (if not present, see specs) to build stuff (ie running make),
Anyway, in addition to docker, as an alternative, could I also use the following command?

Code:
$ sudo -u devel make ...
sudo: unknown user devel
sudo: error initializing audit plugin sudoers_audit

Obviously I have to create the user devel.
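
So, presumably, something like this first (a sketch; untested):

Code:
sudo useradd -m devel
sudo -u devel make ...
# the build directory would of course also have to be writable by devel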

On the other hand

although TBH it's a cannon to kill a mosquito for your use case

as I already said, I agree with this too.
 
