
It seems to be widely accepted in the IT industry that the LPIC-1 and LPIC-2 Linux certifications are not as well regarded or as valuable as, for example, the RHCSA (Red Hat Certified System Administrator), yet the LPIC certifications are reviewed by a group of Linux experts who are not bound to any particular distro or company, meaning that if you certify as an LPIC, your knowledge (the whole point of getting certified is to prove your knowledge) can in theory be applied to any other distro running a Linux kernel... and that includes Red Hat. So, start small and solid, and let's go and get LPIC certified! :)

Here are the notes that I took while preparing for the Linux+ and LPIC certifications:

The theory and history of the Linux Kernel

The /dev folder contains a list of files that point to the actual hardware devices of the system

The procfs (/proc folder) contains process information that all other programs can refer to; with time the /proc folder started accumulating system information which is not process information, so the /proc folder became messy and confusing

  • From kernel 2.5, the sysfs filesystem (/sys folder) was introduced to contain information specifically about the system (devices, CPU, etc), which before was stored in /proc
  • From kernel 2.6, the udev system was introduced, which populates the /dev folder; D-Bus was also introduced, which sends signals from the system to desktop applications, so that devices that are plugged in show up and interact with the desktop

In Linux, just like in Unix, everything is treated as a file 

#dmesg   //** lists the events that occur when the system loads or something is plugged in
#lsmod   //** lists the modules (drivers) currently loaded in the kernel
#lsusb   //** shows all the USB devices currently connected to the system
#usb-devices  //** gives you a lot more information than lsusb
#lspci //** shows the PCI devices that exist on the system, primarily the motherboard controllers
#sudo modprobe modname //** inserts a module into the kernel (loads a driver)

Linux booting process and run levels

After the POST (Power On Self Test), the system runs GRUB (GRand Unified Bootloader) from the MBR (Master Boot Record, the first 512 bytes of the drive). Back in the day, I remember very well, the default loader used to be LILO (Linux Loader), which was really clunky. Whichever loader you use, it launches the initrd (initial RAM disk), which loads a minimal environment into RAM to detect the hardware and load only the modules needed for the system to run efficiently; it is the initrd that mounts the root filesystem. Once the kernel is loaded, it starts the first process, called "init" (which has a process number of 1, being the first process that runs); it is then init that initializes the rest of the system

Modern Linux distributions use one of these 3 different types of init programs (there is a fourth one, called "BSD init", but not many distros use it, because it is not very flexible)


  • SysVinit; around for a long time; it starts services one at a time in numerical order; it can run at run level 1 (single user mode), run level 3 (console mode), run level 5 (GUI mode), etc
  • Upstart; developed by Ubuntu, it starts the system by working out dependencies, so it boots faster because different programs can start at the same time
  • Systemd; the new guy, used by Ubuntu and Red Hat, but not compatible with the other init systems; it may need "systemctl enable ____" or "systemctl start ____" to load services; Systemd uses compiled binaries, not just scripts like SysVinit and Upstart

Here is a comparison table of the run level usage between different systems; just note that run level 4 is not used, and that Debian/Ubuntu have put all-the-run-levels on level 2

Run Level (Boot target) | CentOS / SUSE            | Debian / Ubuntu                              | Description
0 (poweroff)            | Halt                     | Halt                                         | The system is completely stopped
1 (rescue)              | Single User Mode         | Single User Mode                             | Only one user can log on, to change the root passwd for example
2                       | Multi User, no network   | Full multi-user, GUI is available if installed | For troubleshooting; on Debian/Ubuntu everything is at run level 2
3 (multi-user)          | Multi User, with network | n/a                                          | On CentOS, level 3 is console with network access
4                       | n/a                      | n/a                                          | Not used at all
5 (graphical)           | Multi User with GUI      | n/a                                          | With the full GUI loaded
6 (reboot)              | Reboot                   | Reboot                                       | When the system is rebooting

 To change the run level use "telinit" or "init" followed by the level you need; note though that "init" is not the right command for this, stick to "telinit" instead

#runlevel        //**shows what run level you're running at
3 5              //**this means that before you were on run level 3 but now you are on 5
#sudo telinit 3  //**switches to run level 3

The default run level can be determined by examining the file /etc/inittab; on a distro running systemd, you need to examine the default.target link under /etc/systemd/system instead; use the command cat to examine it or the command vi to edit it (on the screenshot below notice that we are using an operating system with systemd, therefore the command "systemctl" must be used to change the boot target):


To explore the different targets that the system has, look at these two locations:

#cat /etc/systemd/system/*.target        //**shows available target levels
#cat /usr/lib/systemd/system/*.target    //**shows available target levels

#systemctl isolate    //**the "isolate" keyword is used to switch target
#systemctl get-default                  //**shows the current default target
#systemctl set-default graphical.target //**sets run level 5 (graphical) as the default

#telinit 3  //**the same as the 'systemctl isolate multi-user.target' command

To mess up your colleague, and enhance his/her troubleshooting techniques, you can set a system's default to the "reboot" target, lol. This table maps the different run levels to their corresponding target string:

Run level Boot target
0 poweroff
1 rescue
3 multi-user
5 graphical
6 reboot


How to shut down your Linux system

Oh well, just as a start, please don't pull the plug! To shut down your system, switch to one of these run levels:

#telinit 0    //**the same as:   #systemctl isolate poweroff.target
#telinit 6    //**the same as:   #systemctl isolate reboot.target

If your system is frozen, you may try "reboot -f" (not recommended, unless the system really is frozen); the -f flag forces the reboot, so obviously data and integrity can be lost or compromised, and there is no warranty at all that the system will boot up normally again. You can also use these different variations to shut down your system:

shutdown -r now     //**immediately reboots now
shutdown -h +3      //**halts the system (h) in 3 minutes' time
shutdown -P 17:30   //**powers off (P) the system at the specified time

shutdown -P +3 "System is shutting down"  //**displays a message in console

To display a message to all your buddies who are connected to the console, use the "wall" command with the pipe (|) operator

#echo "This is a message that says Hello World!" | wall


The different partitions in the Linux OS

/var  ; keep it if possible on a separate hard drive or partition, with fast access, as this folder is in constant use and tends to fill up quickly with logs, etc

swap ;this is a partition that acts like the page file in Windows

LVM (Logical Volume Manager) adds all the drives together into a pool, which you can expand by adding drives as you go along, giving you in that way lots of flexibility. LVM provides no redundancy at all, therefore underneath the LVM some kind of RAID1 (mirror) or RAID5 (parity) must be established if you want to provide resilience to your system

Remember that on an MBR drive only 4 primary partitions are possible; if you need more than 4 partitions you need to start using extended partitions

 Grub vs Grub2

They both stand for "GRand Unified Bootloader", and both applications work by inserting themselves into the MBR of the drive, then loading from there the boot sector of the Linux OS

Grub

To install it: grub-install

Located at: /boot/grub/menu.lst

To modify, just edit the menu.lst file

Grub2

To install it: grub2-install

Located at: /boot/grub2/grub.cfg (on Enterprise Linux)

To modify, you need to regenerate the grub.cfg file with "grub2-mkconfig -o" (-o for the output file); the grub.cfg file picks its configuration from the file /etc/default/grub and from the folder /etc/grub.d/

 Enterprise Linux 6 and earlier use only legacy grub

Debian/Ubuntu has been running grub2 since 2009, and furthermore it kept the legacy command names (grub-install, grub-mkconfig), so scripts remain backward compatible

Enterprise Linux 7 and beyond uses grub2, and on these distros notice that the "grub.cfg" file is located at /boot/grub2 and the commands are renamed (grub2-install, grub2-mkconfig), so scripts are not backward compatible

In Debian/Ubuntu you need to type "grub-mkconfig -o", while on Enterprise Linux you need to type "grub2-mkconfig -o". It is very rare to find a Debian/Ubuntu OS running legacy grub, as grub2 was adopted by these distros back in 2009, which is a long time ago; but you can potentially still find some Enterprise Linux 6 boxes in production, and those guys will run legacy grub

To make changes to the grub2 configuration of a system, do as follows:

  1. Edit the file /etc/default/grub and make the adjustments that you need (for example, extend the default timeout)
  2. To activate the modified configuration, you need to re-generate the .cfg file by running: "grub-mkconfig -o /boot/grub/grub.cfg"
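As a minimal sketch of step 1, the relevant variables in /etc/default/grub look like this (the values shown are examples; check your own file before editing):

```
# /etc/default/grub (excerpt)
GRUB_TIMEOUT=10   # seconds the boot menu is shown before the default entry boots
GRUB_DEFAULT=0    # which menu entry is booted by default (0 = the first one)
```

After saving the file, run the grub-mkconfig command from step 2 to regenerate grub.cfg, otherwise the change has no effect.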

The command "ldconfig" reads the library configuration file under /etc, looks into the folders listed there, and populates the cache with the location of all the shared libraries

See this diagram of the file system locations of the Linux shared libraries:

  • etc (folder)
    •  //**this file tells the system where to look for library files; it normally points to the folder underneath, where different programs can register their shared files
    •  //**this is a binary file -not editable at all- that is created by the ldconfig command; this file is used by programs to load the shared files quickly
    • (folder)
      • libc.conf
      • linux.conf
      • vmware.conf

The ldd application is used to determine which shared libraries a given program needs in order to function; for example, typing "ldd /bin/ls" outputs the shared libraries that the 'ls' program is using; ldd is mostly used when you're troubleshooting a program, etc

Shared libraries work the same across different distros

ls | grep <string>  //**displays all files whose name contains the string

If you add a modification to any of the conf files (or add a new conf file) under that folder, you need to recreate the cache file by running: sudo ldconfig; this is so that the different programs know and are aware of the shared libraries available in your system

You can use the "export" command to add a specific shared-library folder to your local path


APT  (for Debian/Ubuntu distributions)

To find out what online repositories your system is using, visit the file /etc/apt/sources.list (some Ubuntu distros also include the folder /etc/apt/sources.list.d to hold information about the repos). Use the following commands to interact with apt

To interact with the repos, you'd only use these 3 commands:

#apt-get    //**this command has lots of sub-commands, like update, upgrade, dist-upgrade, install, etc;
the remove sub-command will remove a package, while purge not only removes the package but also all its configuration files
#apt-cache   //**used mainly in Ubuntu; queries the package cache, much like the aptitude program does
#aptitude    //**used mainly on Debian systems

The first thing to do is to run "sudo apt-get update", which will sync your local package index with the online repos to ensure you have the latest package entries in your cache; basically, it updates your cache

  • apt-cache search apache2  ;searches the cache for applications that contain the word apache2
  • apt-cache show apache2  ;shows information about the apache2 package
  • sudo apt-get install apache2  ;will install the apache2 package and any dependencies it may have, as well as suggested packages (you'll need the root passwd)
  • sudo aptitude remove apache2  ;will remove the app; we could have used the apt-get tool if we wanted to
  • sudo dpkg-reconfigure tzdata  ;reconfigures an existing installation of a package by re-running its setup wizard, if applicable


YUM (Yellowdog Updater, Modified) and RPM (Red Hat Package Manager) for Enterprise Linux

YUM is a smart application that checks for dependencies whenever you install an RPM package, checking the Internet and your local cache before actually installing anything, always keeping it updated

yum search firefox //** searches the repositories for the firefox package
yum info firefox
yum install firefox
yum remove firefox
yum provides /etc/any-given-file.conf   //** if you specify a file, the 'provides' switch of yum tells you which rpm installed it

yum update     //** with no package specified, it updates all the packages in your system

yum -y update  //** updates the whole system, answering yes (-y) to every prompt; a good idea before installing new packages

 To configure YUM, edit this file and the repo definitions in this folder:

  • /etc/yum.conf
  • /etc/yum.repos.d/


The Command Line - introduction

Here are some tricks that you can use while operating a Linux machine:

Operators ; and &&

  • ls ; ls  ;the semicolon allows you to enter 2 commands independently
  • ls && ls  ;the double ampersand executes the 2nd command only if the 1st one succeeds
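Here is a quick sketch you can run in a scratch folder (the folder and file names are made up for the demo) showing the difference between the two operators, plus || which is the opposite of &&:

```shell
# a quick demo in a scratch folder (names are invented for the demo)
mkdir -p /tmp/opdemo && cd /tmp/opdemo

pwd ; echo "the ; runs both commands, no matter what the first one returned"

true  && echo "this runs because 'true' succeeded" > ok.txt
false && echo "never reached" > skipped.txt || echo "|| catches the failure" > fallback.txt
```

After running it, ok.txt and fallback.txt exist but skipped.txt was never created, because its echo sat behind a failed command.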
pwd //**displays your current directory

history //**have a look at your history
history -c  //**clears the history

cat .bash_history  //**shows the saved bash history, which is written out when you log off

ls ;lists the files in the current directory

  • ls -l * ;lists the files and their properties

cat ;short for concatenate, used to display a file on the screen

dd ;copies data at the block level; it uses the structure: input file (if), output file (of), block size (bs, 4k normally)

  • dd if=/dev/sda of=/dev/sdb ;it copies all of the blocks from the drive sda (input file, if) to the drive sdb (output file, of), byte by byte
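To try the syntax without wiping a disk, here is the same structure used on ordinary files (the file names are invented for the demo):

```shell
# safe dd demo: regular files instead of whole drives (if=/dev/sda would read a real disk!)
cd /tmp
echo "block copy demo" > source.img
dd if=source.img of=dest.img bs=4k 2>/dev/null   # if=input file, of=output file, bs=block size
```

dest.img ends up as an exact byte-for-byte copy of source.img.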

tar ;copies files at the file level; it stands for tape archive; commonly used with the switches -cvf (c for create, v for verbose and f for file; always put f last)

  • tar -cvf file1.tar /home/user1/stuff ;this packs all the contents of the "stuff" folder into file1.tar
  • tar -xvf file1.tar ;extracts the contents of file1.tar into the current directory you are working in; notice the -x switch, which stands for extract

gzip ;the older compression command; for better compression use bzip2; they work exactly the same way

  • gzip file1.txt ;compresses file1.txt, deleting that file and leaving file1.txt.gz
  • gunzip file1.txt.gz ;uncompresses the file

bzip2 ;the newer compression command

  • bzip2 file1.txt ;creates the compressed file with the extension .bz2
  • bunzip2 file1.txt.bz2 ;uncompresses the file

Both gzip and bzip2 can only compress single files; that's why you have to use tar to pack folders into one file, and then gzip/bzip2 that file

With gzip do as follows:

  • tar -zcvf file1.tar.gz /home/user1 ;the -z switch filters the archive through gzip, while -c creates the specified tar file containing all the stuff in the user1 folder
  • tar -zxvf file1.tar.gz ;this will extract (-x for extraction) the contents of file1.tar.gz (could also have the extension .tgz) into your current folder

With bzip2, do as follows; it uses -j to say you are using the bzip2 compressor

  • tar -jcvf file1.tar.bz2 /home/user1 ;to compress
  • tar -jxvf file1.tar.bz2 ;to extract
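The whole pack-and-unpack cycle can be tried in a scratch folder (all names here are invented for the demo):

```shell
# gzip round-trip demo in /tmp (folder and file names are invented)
mkdir -p /tmp/tardemo/stuff && cd /tmp/tardemo
echo "hello" > stuff/file1.txt
echo "world" > stuff/file2.txt

tar -zcvf stuff.tar.gz stuff      # -z gzip, -c create, -v verbose, -f archive name
# tar -jcvf stuff.tar.bz2 stuff   # same idea with bzip2 (-j), if bzip2 is installed

mkdir -p restore && cd restore
tar -zxvf ../stuff.tar.gz         # -x extracts into the current directory
```

After the extract, restore/stuff contains the same two files that were packed.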

cpio  ;(copy in, copy out) an older alternative to the tar command, rarely used nowadays because it is not as flexible as tar; you actually have to pipe the data into cpio in order to use it; it copies all the files and puts them in a kind of zip file with the extension .cpio; that .cpio file is then packaged with a description and the dependencies of the files to create a .rpm file. The way to extract the contents of a .rpm package is therefore as follows:

  • ls | cpio -ov > file.cpio ;gathers the file names from ls and pipes them into the cpio program, which outputs them (the -o switch) to a file
  • cpio -idv < file.cpio ;does the opposite: inputs (i) the data from the file and creates any relevant directories (d)
  • rpm2cpio package.rpm ;this extracts the payload of the rpm package, writing a .cpio archive to standard output
  • cpio -idv < packagefile.cpio ;this will extract the original file structure
  • cpio is useful mostly when you use the find command to search for the files you want and then pipe them into a single .cpio file

sed ;allows you to edit a file on the fly, without actually interacting with it; sed's most used command is "s" for substitute, where you replace a given string with a different one; remember to add the "g" for global at the end, so that the replacement occurs for every occurrence on each line of the file

  • The way it works is like this: sed 's/string1/string2/g' file1.txt ;this will replace string1 with string2 inside file1.txt
  • You can also replace more than one string at a time: sed 's/string1/string2/g' file1.txt | sed 's/string3/string4/g'
  • You can also use regular expressions; in this example it will match a capital or lower-case S in string1: sed 's/[Ss]tring1/string2/g' file1.txt
  • In this example ^ symbol means that the substitution will occur only if string1 is at the beginning of the line: sed 's/^String1/String2/g' file1.txt
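The three variants above can be compared side by side on a tiny sample file (names invented for the demo):

```shell
mkdir -p /tmp/seddemo && cd /tmp/seddemo
printf 'String1 at the start\nhere string1 is mid-line\n' > in.txt

sed 's/string1/string2/g' in.txt > plain.txt         # case-sensitive, every occurrence
sed 's/[Ss]tring1/string2/g' in.txt > anycase.txt    # capital or lower-case S both match
sed 's/^String1/String2/g' in.txt > anchored.txt     # only when String1 starts the line
```

plain.txt still has "String1" on the first line (wrong case), anycase.txt replaces both, and anchored.txt only touches the line that begins with String1.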

head ;gives you the first 10 lines of a file by default, handy to see the first entries of a log file, for example. To modify the number of lines that you want to see, use the -n switch: #head -n 15 file1.txt

tail ;same as "head", but gives you the last 10 lines: #tail file1.txt

split ;creates several files from a single one, depending on how many lines (-l) you want in each file: #split -l 5 file1.txt file_ ;this creates file_aa, file_ab, file_ac, etc, where each file contains 5 lines in sequential order from the parent file
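A minimal demo of the chunking (file names invented for the demo):

```shell
mkdir -p /tmp/splitdemo && cd /tmp/splitdemo
seq 1 12 > twelve.txt           # a 12-line file with the numbers 1..12
split -l 5 twelve.txt file_     # 5 lines per chunk: file_aa, file_ab, file_ac
```

With 12 lines at 5 per chunk you get two full chunks and a last one holding the remaining 2 lines.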

nl ;number lines; a very useful command that creates a copy of a given file listing the line numbers, very handy when a script has an error at line number 273 and you have no idea which line that is

  • nl -b a context.xml > context.lines ;this creates a file called 'context.lines' numbering all lines ('a') of the body (-b switch) of the context.xml file
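A tiny demo showing that -b a numbers even blank lines (file names invented for the demo):

```shell
mkdir -p /tmp/nldemo && cd /tmp/nldemo
printf 'first\n\nthird\n' > body.txt
nl -b a body.txt > body.lines   # -b a numbers every body line, including the blank one
```

The blank middle line still gets number 2, so the numbering lines up with what an error message would report.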

paste and join ;they both work by merging data together from two files, but they do it in different ways: paste/join file1.txt file2.txt > file3.txt

  • paste ;copies the lines side by side into a single file
  • join ;binds the lines based on a common field identifier, so you can bind line 1 of one file with line 3 of the other, as long as both lines carry the same identifier (let's say 2)

expand ;converts a tabbed file into a spaced file, so the lines won't have any tabs but spaces instead: expand file1.txt > expanded.txt; or you can go the other way around with unexpand -a unexpanded.txt
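To see the paste/join difference concretely (file contents invented for the demo; join expects both files sorted on the key field, which they are here):

```shell
mkdir -p /tmp/joindemo && cd /tmp/joindemo
printf '1 alice\n2 bob\n'   > names.txt
printf '1 admin\n2 guest\n' > roles.txt

paste names.txt roles.txt > side_by_side.txt   # positional merge, tab between the two lines
join  names.txt roles.txt > by_key.txt         # merge on the matching first field
```

paste gives "1 alice<TAB>1 admin" (the key appears twice), while join gives "1 alice admin" (the common field is printed once).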


File Management - commands

touch ;creates an empty file

mkdir ;creates an empty folder

cp ;copies files; use the combination cp -R to copy all the contents inside a folder

mv ;moves files and folders; you can also use this command to rename a file: mv file1.txt file2.txt

rmdir ;removes folders, but only if they are empty

grep ;searches for a string inside a file

rm ;removes files and folders; the combination rm -rf is very dangerous as it deletes everything without asking; rm *.txt deletes all files with that extension, while rm * deletes all files

find ;searches your system for files, and you'd better use options, otherwise it will list every file on your system! See these examples:

  • find . ;returns the files in the current directory, no filter applied
  • find /home/user1 -size +50M   ;finds files that are bigger than 50MB in size
  • find . -mtime +1 ;shows files that were modified more than one day ago
  • find . -mtime -2 ;shows files that were modified less than 2 days ago
  • find . -size +50M -mtime +1 ;shows files bigger than 50MB that were modified more than a day ago
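You can fake the conditions to watch the filters work (names invented for the demo; touch -d and truncate are the GNU versions):

```shell
mkdir -p /tmp/finddemo && cd /tmp/finddemo
touch fresh.txt
touch -d "2 days ago" old.txt        # GNU touch: back-date the file for the demo
truncate -s 60M big.img              # a sparse 60MB file, takes almost no real disk space

find . -mtime +1                     # modified more than a day ago: only ./old.txt matches
find . -size +50M                    # bigger than 50MB: only ./big.img matches
```

fresh.txt matches neither filter, which is exactly the point of combining find with options.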

How to redirect the standard input/output pipes

< ;redirects a file to STDIN, the standard input, which is most of the time the keyboard. You can actually create a variable using the read command to store input that you enter on the keyboard, for example:

  • read VAR ;this command will stop at the cursor and wait for you to type anything; once you type it, the string will be stored in the variable VAR
  • echo $VAR ;using the echo command, you will be able to recall the string you stored in the VAR variable
  • read VAR < file1.txt ;this command uses the < operator, which configures the input to be "file1.txt", whose first line will be stored in the VAR variable instead
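The file-fed variant can be demonstrated end to end (file names invented for the demo):

```shell
mkdir -p /tmp/readdemo && cd /tmp/readdemo
echo "hello from a file" > input.txt

read VAR < input.txt        # '<' feeds the file to read instead of the keyboard
echo "$VAR" > captured.txt  # VAR now holds the first line of input.txt
```

No keyboard interaction happens at all: the < operator supplied the STDIN.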

>  ;redirects to STDOUT, the standard output, normally the screen, for example:

  • ls > test-out-pipe ;this redirects the output by creating a file called "test-out-pipe" that will contain the result of the command ls

2> - redirects to STDERR

  • ls myfolderA 2> error  ;this command will try to list the contents of "myfolderA", but because it doesn't exist it will send an error through the STDERR pipe, which in this case is captured by the redirection operator "2>" and put into a file called "error"
  • ls > out 2> error ;this example creates the "out" file from the STDOUT pipe and the "error" file from the STDERR pipe

With the concept of standard input/output, we can manipulate data so that the output of one application becomes the input of another, for example:

  1. ls > file1.txt ;will create a file called file1.txt with the listing result from ls
  2. grep is file1.txt ;will search for the string "is" in file1.txt and display the result on the screen
  3. You can do both in one go with the pipe operator, to get the output of ls piped into the input of grep, cool ah? ls | grep is

xargs ;is a program that reads STDIN and turns it into the arguments of another command, for example:

  • ls | xargs rm -rf ;this gets the STDOUT from ls and pipes it to xargs, which executes rm on it, just as if you typed rm -rf followed by each listed name
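A safe version of that bullet, confined to a scratch folder (names invented for the demo):

```shell
mkdir -p /tmp/xargsdemo && cd /tmp/xargsdemo
touch a.tmp b.tmp c.tmp
ls | xargs rm -f            # ls's STDOUT becomes the arguments of rm
```

The folder is empty afterwards, because xargs handed the three names to rm as arguments.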

tee ;similar in spirit to xargs: it takes the output of a command and writes it into a file, while also showing the output on the screen

  • echo "print this" | tee file.txt ;the tee command gets the input from the echo and puts it on the screen, creating in addition the file.txt
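Runnable as-is in a scratch folder (folder name invented for the demo):

```shell
mkdir -p /tmp/teedemo && cd /tmp/teedemo
echo "print this" | tee file.txt    # prints "print this" on screen AND writes it to file.txt
```

Handy in the middle of a pipeline too, to snapshot intermediate data into a file without breaking the pipe.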

&1 ;this operator refers to wherever STDOUT is pointing; combined as 2>&1 it redirects STDERR into STDOUT, for example:

  • ls myfolderA > results.txt 2>&1 ;the error generated by the non-existent folder is taken from STDERR and merged into STDOUT by 2>&1, so it ends up in results.txt too
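The same bullet can be run safely in a scratch folder (the folder name is invented and deliberately missing, so ls fails; the || true just keeps a strict shell happy after the expected error):

```shell
mkdir -p /tmp/errdemo && cd /tmp/errdemo
ls no_such_folder > results.txt 2>&1 || true   # the error text lands in results.txt as well
```

Without the 2>&1, results.txt would be empty and the error would hit the screen instead.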


Background commands

& ;putting the ampersand at the end of a command sends the job to the background, but it only lives in the terminal window you are working in; the minute you close that terminal, all its background commands are stopped; to suspend the current foreground job, use the popular Ctrl+Z

  • sleep 60 & ;sends the sleep command to the background
  • jobs ;allows you to see what jobs are running in the background
  • fg %2 ;allows you to bring to the foreground (fg) the job with id number 2; the + next to a job number indicates the current (most recent) job, meaning that if you just type "fg" it will bring back the one with the plus

nohup ;also sends jobs to the background, but they will stay running on the system even if you close the terminal window

  • nohup sleep 60 & ;creates a background job just like before, but it will stay running even if you close the terminal window
  • ps aux | grep sleep ;allows you to see the background jobs from a different window

 disown ;detaches a background job from its terminal window



In Linux, the listing of permissions covers the owner (user) of the file, the group the file belongs to, and everyone else (other)

item  user   group  other
-     r w x  r w x  r w x
      4 2 1  4 2 1  4 2 1
  • chmod ugo+w file1.txt ;u (user), g (group), o (other); sets everyone to have write access to the given file
  • chmod u+rw,g+rw,o-rwx file1.txt ;sets the user and the group to rw but removes the rwx access from other

Octal notation, most common values:

  • 7 = rwx
  • 6 = rw-
  • 5 = r-x
  • 4 = r--
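Putting the octal digits to work on a throw-away file (the file name is invented for the demo):

```shell
mkdir -p /tmp/permdemo && cd /tmp/permdemo
touch file1.txt
chmod 640 file1.txt      # 6 = rw- for the user, 4 = r-- for the group, 0 = --- for other
ls -l file1.txt          # the mode column starts with -rw-r-----
```

Each digit is just the sum of the 4/2/1 values from the table: 4+2=6 gives rw-, 4 gives r--, 0 gives ---.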
















Environment variables

#env                                   //**displays all the environment variables configured on the system, including PATH
#export PATH=/home/manuel/sbin:$PATH   //**appends that folder to the PATH variable,
but only for the session that you are in; the setting is deleted when you exit the terminal

#unset PATH  //**removes the whole path!!! to restore the paths, just re-open the terminal
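You can experiment with this safely by doing it inside a subshell, so the real PATH of your session is untouched (the folder /home/manuel/sbin is just the example from the note above):

```shell
# the ( ... ) subshell gets its own copy of the environment
(
  export PATH=/home/manuel/sbin:$PATH   # prepend the example folder to PATH
  echo "$PATH" > /tmp/newpath.txt       # the modified PATH, as seen inside the subshell
)
echo "$PATH" > /tmp/oldpath.txt         # outside the subshell, PATH is unchanged
```

Comparing the two files shows the change only existed inside the subshell, which is exactly the per-session behaviour described above.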













To install SSH on an Ubuntu server:

sudo apt-get install openssh-server

 If you get the error "E: Unable to locate package openssh-server", refresh the package index first (update must come before upgrade):

sudo apt-get update

sudo apt-get upgrade

