Saturday, October 19, 2013


Apache Status Code

HTTP, the Hypertext Transfer Protocol, is the method by which clients (i.e. you) and servers communicate. When someone clicks a link, types in a URL or submits a form, their browser sends a request to a server for information. It might be asking for a page, or sending data, but either way, that is called an HTTP Request. When a server receives that request, it sends back an HTTP Response with information for the client. Usually this exchange is invisible, though I'm sure you've seen one of the very common response codes - 404, indicating a page was not found. There are a fair few more status codes sent by servers, and the following is a list of the current ones in HTTP 1.1, along with an explanation of their meanings.

A more technical breakdown of HTTP 1.1 status codes and their meanings is available at http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html. There are several versions of HTTP, but currently HTTP 1.1 is the most widely used.
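
If you want to see these status codes for yourself, a quick way is to ask a server for just the response headers with curl; the first line of the reply carries the protocol version and status code (example.com below is only a placeholder host):

$ curl -I http://example.com/
HTTP/1.1 200 OK
...
$ curl -s -o /dev/null -w "%{http_code}\n" http://example.com/
200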

Informational

    100 - Continue
    A status code of 100 indicates that (usually the first) part of a request has been received without any problems, and that the rest of the request should now be sent.
    101 - Switching Protocols
    HTTP 1.1 is just one type of protocol for transferring data on the web, and a status code of 101 indicates that the server is changing to the protocol it defines in the "Upgrade" header it returns to the client. For example, when requesting a page, a browser might receive a status code of 101, followed by an "Upgrade" header showing that the server is changing to a different version of HTTP.

Successful

    200 - OK
    The 200 status code is by far the most common returned. It means, simply, that the request was received, understood, and successfully processed.
    201 - Created
    A 201 status code indicates that a request was successful and as a result, a resource has been created (for example a new page).
    202 - Accepted
    The status code 202 indicates that server has received and understood the request, and that it has been accepted for processing, although it may not be processed immediately.
    203 - Non-Authoritative Information
    A 203 status code means that the request was received and understood, and that information sent back about the response is from a third party, rather than the original server. This is virtually identical in meaning to a 200 status code.
    204 - No Content
    The 204 status code means that the request was received and understood, but that there is no need to send any data back.
    205 - Reset Content
    The 205 status code is a request from the server to the client to reset the document from which the original request was sent. For example, if a user fills out a form, and submits it, a status code of 205 means the server is asking the browser to clear the form.
    206 - Partial Content
    A status code of 206 is a response to a request for part of a document. This is used by advanced caching tools, when a user agent requests only a small part of a page, and just that section is returned.

Redirection

    300 - Multiple Choices
    The 300 status code indicates that a resource has moved. The response will also include a list of locations from which the user agent can select the most appropriate.
    301 - Moved Permanently
    A status code of 301 tells a client that the resource they asked for has permanently moved to a new location. The response should also include this location. It tells the client to use the new URL the next time it wants to fetch the same resource.
    302 - Found
    A status code of 302 tells a client that the resource they asked for has temporarily moved to a new location. The response should also include this location. It tells the client that it should carry on using the same URL to access this resource.
    303 - See Other
    A 303 status code indicates that the response to the request can be found at the specified URL, and should be retrieved from there. It does not mean that something has moved - it is simply specifying the address at which the response to the request can be found.
    304 - Not Modified
    The 304 status code is sent in response to a request (for a document) that asked for the document only if it was newer than the one the client already had. Normally, when a document is cached, the date it was cached is stored. The next time the document is viewed, the client asks the server if the document has changed. If not, the client just reloads the document from the cache.
    305 - Use Proxy
    A 305 status code tells the client that the requested resource has to be reached through a proxy, which will be specified in the response.
    307 - Temporary Redirect
    307 is the status code that is sent when a document is temporarily available at a different URL, which is also returned. There is very little difference between a 302 status code and a 307 status code. 307 was created as another, less ambiguous, version of the 302 status code.

Client Error

    400 - Bad Request
    A status code of 400 indicates that the server did not understand the request due to bad syntax.
    401 - Unauthorized
    A 401 status code indicates that before a resource can be accessed, the client must be authorised by the server.
    402 - Payment Required
    The 402 status code is not currently in use, being listed as "reserved for future use".
    403 - Forbidden
    A 403 status code indicates that the client cannot access the requested resource. That might mean that the wrong username and password were sent in the request, or that the permissions on the server do not allow what was being asked.
    404 - Not Found
    The best known of them all, the 404 status code indicates that the requested resource was not found at the URL given, and the server does not say whether the absence is temporary or permanent.

    405 - Method Not Allowed
    A 405 status code is returned when the client has tried to use a request method that the server does not allow. Request methods that are allowed should be sent with the response (common request methods are POST and GET).
    406 - Not Acceptable
    The 406 status code means that, although the server understood and processed the request, the response is of a form the client cannot understand. A client sends, as part of a request, headers indicating what types of data it can use, and a 406 error is returned when the response is of a type not in that list.
    407 - Proxy Authentication Required
    The 407 status code is very similar to the 401 status code, and means that the client must be authorised by the proxy before the request can proceed.
    408 - Request Timeout
    A 408 status code means that the client did not produce a request quickly enough. A server is set to wait only a certain amount of time for requests from clients, and a 408 status code indicates that that time has passed.
    409 - Conflict
    A 409 status code indicates that the server was unable to complete the request, often because a file would need to be edited, created or deleted, and that file cannot be edited, created or deleted.
    410 - Gone
    A 410 status code is the 404's lesser known cousin. It indicates that a resource has permanently gone (a 404 status code gives no indication of whether a resource has gone permanently or temporarily), and no new address is known for it.
    411 - Length Required
    The 411 status code occurs when a server refuses to process a request because a content length was not specified.
    412 - Precondition Failed
    A 412 status code indicates that one of the conditions the request was made under has failed.
    413 - Request Entity Too Large
    The 413 status code indicates that the request was larger than the server is able to handle, either due to physical constraints or to settings. Usually, this occurs when a file is sent using the POST method from a form, and the file is larger than the maximum size allowed in the server settings.
    414 - Request-URI Too Long
    The 414 status code indicates that the URL requested by the client was longer than the server can process.
    415 - Unsupported Media Type
    A 415 status code is returned by a server to indicate that part of the request was in an unsupported format.
    416 - Requested Range Not Satisfiable
    A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long.
    417 - Expectation Failed
    The 417 status code means that the server was unable to properly complete the request. One of the headers sent to the server, the "Expect" header, indicated an expectation the server could not meet.

Server Error

    500 - Internal Server Error
    A 500 status code (all too often seen by Perl programmers) indicates that the server encountered something it didn't expect and was unable to complete the request.
    501 - Not Implemented
    The 501 status code indicates that the server does not support all that is needed for the request to be completed.
    502 - Bad Gateway
    A 502 status code indicates that a server, while acting as a proxy, received a response from a server further upstream that it judged invalid.
    503 - Service Unavailable
    A 503 status code is most often seen on extremely busy servers, and it indicates that the server was unable to complete the request due to a server overload.
    504 - Gateway Timeout
    A 504 status code is returned when a server acting as a proxy has waited too long for a response from a server further upstream.
    505 - HTTP Version Not Supported
    A 505 status code is returned when the HTTP version indicated in the request is not supported. The response should indicate which HTTP versions are supported.


Thanks for Learning

Friday, October 18, 2013

How to set up passwordless remote login (SSH)?

Create a public/private key pair on the local system.
Copy the public key to the remote system.

i) Use "ssh-keygen -t rsa" (or "-t dsa") on the local system to create the public and private keys

linuxshelf@tutorial # ssh-keygen -t rsa

ii) Copy the contents of /home/linuxshelf/.ssh/id_rsa.pub from the local system into /home/admin/.ssh/authorized_keys on remote_server (append it if the file already exists)

iii) Change the permissions of the /home/admin/.ssh/authorized_keys file on remote_server:
"chmod 0600 ~/.ssh/authorized_keys"
 

Now try to log in from the local system to remote_server: "ssh admin@remote_server"
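
Putting the steps together, a minimal sketch of the whole sequence looks like this (assuming an RSA key, local user linuxshelf and remote user admin; where available, ssh-copy-id performs steps ii and iii for you):

linuxshelf@tutorial # ssh-keygen -t rsa
linuxshelf@tutorial # ssh-copy-id admin@remote_server
linuxshelf@tutorial # ssh admin@remote_server

If ssh-copy-id is not installed, the key can be appended by hand:

linuxshelf@tutorial # cat ~/.ssh/id_rsa.pub | ssh admin@remote_server "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 0600 ~/.ssh/authorized_keys"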

Wednesday, April 17, 2013

Setting Up an NFS Server

Setting Up an NFS Server

1.Introduction to Server Setup

It is assumed that you will be setting up both a server and a client. If you are just setting up a client to work off of somebody else's server (say in your department), you can skip to “Setting up an NFS Client”. However, every client that is set up requires modifications on the server to authorize that client (unless the server setup is done in a very insecure way), so even if you are not setting up a server you may wish to read this section to get an idea what kinds of authorization problems to look out for.
Setting up the server will be done in two steps: Setting up the configuration files for NFS, and then starting the NFS services.

2.Setting up the Configuration Files

There are three main configuration files you will need to edit to set up an NFS server: /etc/exports, /etc/hosts.allow, and /etc/hosts.deny. Strictly speaking, you only need to edit /etc/exports to get NFS to work, but you would be left with an extremely insecure setup. You may also need to edit your startup scripts; see “Getting the services Started” below.

2.1. /etc/exports

This file contains a list of entries; each entry indicates a volume that is shared and how it is shared. Check the man pages (man exports) for a complete description of all the setup options for the file, although the description here will probably satisfy most people's needs.
An entry in /etc/exports will typically look like this:
directory machine1(option11,option12) machine2(option21,option22)
where
directory
the directory that you want to share. It may be an entire volume though it need not be. If you share a directory, then all directories under it within the same file system will be shared as well.
machine1 and machine2
client machines that will have access to the directory. The machines may be listed by their DNS address or their IP address (e.g., machine.company.com or 192.168.0.8). Using IP addresses is more reliable and more secure. If you need to use DNS addresses and they do not seem to be resolving to the right machine, check your name resolution (for example, /etc/hosts and your DNS server) before troubleshooting NFS itself.
optionxx
the option listing for each machine will describe what kind of access that machine will have. Important options are:
  • ro: The directory is shared read only; the client machine will not be able to write to it. This is the default.
  • rw: The client machine will have read and write access to the directory.
  • no_root_squash: By default, any file request made by user root on the client machine is treated as if it is made by user nobody on the server. (Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.) If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server. This can have serious security implications, although it may be necessary if you want to perform any administrative work on the client machine that involves the exported directories. You should not specify this option without a good reason.
  • no_subtree_check: If only part of a volume is exported, a routine called subtree checking verifies that a file that is requested from the client is in the appropriate part of the volume. If the entire volume is exported, disabling this check will speed up transfers.
  • sync: By default, all but the most recent version (version 1.11) of the exportfs command will use async behavior, telling a client machine that a file write is complete - that is, has been written to stable storage - when NFS has finished handing the write over to the filesystem. This behavior may cause data corruption if the server reboots, and the sync option prevents this.
Suppose we have two client machines, slave1 and slave2, that have IP addresses 192.168.0.1 and 192.168.0.2, respectively. We wish to share our software binaries and home directories with these machines. A typical setup for /etc/exports might look like this:
/usr/local   192.168.0.1(ro) 192.168.0.2(ro)
/home        192.168.0.1(rw) 192.168.0.2(rw)
Here we are sharing /usr/local read-only to slave1 and slave2, because it probably contains our software and there may not be benefits to allowing slave1 and slave2 to write to it that outweigh security concerns. On the other hand, home directories need to be exported read-write if users are to save their work on them.
If you have a large installation, you may find that you have a bunch of computers all on the same local network that require access to your server. There are a few ways of simplifying references to large numbers of machines. First, you can give access to a range of machines at once by specifying a network and a netmask. For example, if you wanted to allow access to all the machines with IP addresses between 192.168.0.0 and 192.168.0.255 then you could have the entries:
/usr/local 192.168.0.0/255.255.255.0(ro)
/home      192.168.0.0/255.255.255.0(rw)
You may also wish to look at the man pages for init and hosts.allow.
Second, you can use NIS netgroups in your entry. To specify a netgroup in your exports file, simply prepend the name of the netgroup with an "@".
Third, you can use wildcards such as *.foo.com or 192.168. instead of hostnames. There were problems with wildcard implementation in the 2.2 kernel series that were fixed in kernel 2.2.19.
However, you should keep in mind that any of these simplifications could cause a security risk if there are machines in your netgroup or local network that you do not trust completely.
A few cautions are in order about what cannot (or should not) be exported. First, if a directory is exported, its parent and child directories cannot be exported if they are in the same filesystem. However, exporting both should not be necessary because listing the parent directory in the /etc/exports file will cause all underlying directories within that file system to be exported.
Second, it is a poor idea to export a FAT or VFAT (i.e., MS-DOS or Windows 95/98) filesystem with NFS. FAT is not designed for use on a multi-user machine, and as a result, operations that depend on permissions will not work well. Moreover, some of the underlying filesystem design is reported to work poorly with NFS's expectations.
Third, device or other special files may not export correctly to non-Linux clients.

2.2. /etc/hosts.allow and /etc/hosts.deny

These two files specify which computers on the network can use services on your machine. Each line of the file contains a single entry listing a service and a set of machines. When the server gets a request from a machine, it does the following:
  1. It first checks hosts.allow to see if the machine matches a rule listed here. If it does, then the machine is allowed access.
  2. If the machine does not match an entry in hosts.allow the server then checks hosts.deny to see if the client matches a rule listed there. If it does then the machine is denied access.
  3. If the client matches no listings in either file, then it is allowed access.
In addition to controlling access to services handled by inetd (such as telnet and FTP), this file can also control access to NFS by restricting connections to the daemons that provide NFS services. Restrictions are done on a per-service basis.
The first daemon to restrict access to is the portmapper. This daemon essentially just tells requesting clients how to find all the NFS services on the system. Restricting access to the portmapper is the best defense against someone breaking into your system through NFS because completely unauthorized clients won't know where to find the NFS daemons. However, there are two things to watch out for. First, restricting portmapper isn't enough if the intruder already knows for some reason how to find those daemons. And second, if you are running NIS, restricting portmapper will also restrict requests to NIS. That should usually be harmless since you usually want to restrict NFS and NIS in a similar way, but just be cautioned. (Running NIS is generally a good idea if you are running NFS, because the client machines need a way of knowing who owns what files on the exported volumes. Of course there are other ways of doing this, such as syncing password files.)
In general it is a good idea with NFS (as with most internet services) to explicitly deny access to IP addresses that you don't need to allow access to.
The first step in doing this is to add the following entry to /etc/hosts.deny:
portmap:ALL
Starting with nfs-utils 0.2.0, you can be a bit more careful by controlling access to individual daemons. It's a good precaution since an intruder will often be able to weasel around the portmapper. If you have a newer version of nfs-utils, add entries for each of the NFS daemons (see the next section to find out what these daemons are; for now just put entries for them in hosts.deny):
lockd:ALL
mountd:ALL
rquotad:ALL
statd:ALL
Even if you have an older version of nfs-utils, adding these entries is at worst harmless (since they will just be ignored) and at best will save you some trouble when you upgrade. Some sys admins choose to put the entry ALL:ALL in the file /etc/hosts.deny, which causes any service that looks at these files to deny access to all hosts unless it is explicitly allowed. While this is more secure behavior, it may also get you in trouble when you are installing new services, you forget you put it there, and you can't figure out for the life of you why they won't work.
Next, we need to add an entry to hosts.allow to give any hosts access that we want to have access. (If we just leave the above lines in hosts.deny then nobody will have access to NFS.) Entries in hosts.allow follow the format:
service: host [or network/netmask] , host [or network/netmask]
Here, host is the IP address of a potential client; it may be possible in some versions to use the DNS name of the host, but it is strongly discouraged.
Suppose we have the setup above and we just want to allow access to slave1.foo.com and slave2.foo.com, and suppose that the IP addresses of these machines are 192.168.0.1 and 192.168.0.2, respectively. We could add the following entry to /etc/hosts.allow:
portmap: 192.168.0.1 , 192.168.0.2
For recent nfs-utils versions, we would also add the following (again, these entries are harmless even if they are not supported):
lockd: 192.168.0.1 , 192.168.0.2
rquotad: 192.168.0.1 , 192.168.0.2
mountd: 192.168.0.1 , 192.168.0.2
statd: 192.168.0.1 , 192.168.0.2
If you intend to run NFS on a large number of machines in a local network, /etc/hosts.allow also allows for network/netmask style entries in the same manner as /etc/exports above.

3. Getting the services Started

3.1. Pre-requisites

The NFS server should now be configured and we can start it running. First, you will need to have the appropriate packages installed. This consists mainly of a new enough kernel and a new enough version of the nfs-utils package.
Next, before you can start NFS, you will need to have TCP/IP networking functioning correctly on your machine. If you can use telnet, FTP, and so on, then chances are your TCP networking is fine.
That said, with most recent Linux distributions you may be able to get NFS up and running simply by rebooting your machine; the startup scripts should detect that you have set up your /etc/exports file and will start up NFS correctly. If you try this, see “Verifying that NFS is running” below. If this does not work, or if you are not in a position to reboot your machine, then the following section will tell you which daemons need to be started in order to run NFS services. If for some reason nfsd was already running when you edited your configuration files above, you will have to flush your configuration; see “Making Changes to /etc/exports later on” for details.

3.2. Starting the Portmapper

NFS depends on the portmapper daemon, either called portmap or rpc.portmap. It will need to be started first. It should be located in /sbin but is sometimes in /usr/sbin. Most recent Linux distributions start this daemon in the boot scripts, but it is worth making sure that it is running before you begin working with NFS (just type ps aux | grep portmap).
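
If the portmapper is not already running, it can usually be started through its init script; on a Red Hat style system of this era that would typically be something like the following (the script name and location vary by distribution):

[root@linuxshelf root]# /etc/init.d/portmap start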

3.3. The Daemons

NFS serving is taken care of by five daemons: rpc.nfsd, which does most of the work; rpc.lockd and rpc.statd, which handle file locking; rpc.mountd, which handles the initial mount requests, and rpc.rquotad, which handles user file quotas on exported volumes. Starting with 2.2.18, lockd is called by nfsd upon demand, so you do not need to worry about starting it yourself. statd will need to be started separately. Most recent Linux distributions will have startup scripts for these daemons.
The daemons are all part of the nfs-utils package, and may be either in the /sbin directory or the /usr/sbin directory.
If your distribution does not include them in the startup scripts, then you should add them, configured to start in the following order:
rpc.portmap
rpc.mountd, rpc.nfsd
rpc.statd, rpc.lockd (if necessary), and
rpc.rquotad
The nfs-utils package has sample startup scripts for RedHat and Debian. If you are using a different distribution, in general you can just copy the RedHat script, but you will probably have to take out the line that says:
. ../init.d/functions
to avoid getting error messages.
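
In practice, on most distributions of this era the daemons are started through the service scripts rather than by hand. On a Red Hat style system, for example, the whole set can usually be brought up with something like the following (script names differ elsewhere, e.g. nfs-kernel-server on Debian):

[root@linuxshelf root]# service portmap start
[root@linuxshelf root]# service nfs start
[root@linuxshelf root]# service nfslock start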

3.4. Verifying that NFS is running

To do this, query the portmapper with the command rpcinfo -p to find out what services it is providing. You should get something like this:
program vers proto   port
100000    2   tcp    111  portmapper
100000    2   udp    111  portmapper
100011    1   udp    749  rquotad
100011    2   udp    749  rquotad
100005    1   udp    759  mountd
100005    1   tcp    761  mountd
100005    2   udp    764  mountd
100005    2   tcp    766  mountd
100005    3   udp    769  mountd
100005    3   tcp    771  mountd
100003    2   udp   2049  nfs
100003    3   udp   2049  nfs
300019    1   tcp    830  amd
300019    1   udp    831  amd
100024    1   udp    944  status
100024    1   tcp    946  status
100021    1   udp   1042  nlockmgr
100021    3   udp   1042  nlockmgr
100021    4   udp   1042  nlockmgr
100021    1   tcp   1629  nlockmgr
100021    3   tcp   1629  nlockmgr
100021    4   tcp   1629  nlockmgr
This says that we have NFS versions 2 and 3, rpc.statd version 1, network lock manager (the service name for rpc.lockd) versions 1, 3, and 4. There are also different service listings depending on whether NFS is travelling over TCP or UDP. Linux systems use UDP by default unless TCP is explicitly requested; however other OSes such as Solaris default to TCP.
If you do not at least see a line that says portmapper, a line that says nfs, and a line that says mountd, then you will need to backtrack and try again to start up the daemons.
If you do see these services listed, then you should be ready to set up NFS clients to access files from your server.

3.5. Making Changes to /etc/exports later on

If you come back and change your /etc/exports file, the changes you make may not take effect immediately. You should run the command exportfs -ra to force nfsd to re-read the /etc/exports file. If you can't find the exportfs command, then you can kill nfsd with the -HUP flag (see the man pages for kill for details).
If that still doesn't work, don't forget to check hosts.allow to make sure you haven't forgotten to list any new client machines there. Also check the host listings on any firewalls you may have set up.
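
For example, after adding a new client to /etc/exports, something like the following re-exports everything and then shows what is currently exported (the exact flags and output format vary a little between nfs-utils versions):

[root@linuxshelf root]# exportfs -ra
[root@linuxshelf root]# exportfs -v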

Setting up an NFS Client


Setting up an NFS Client

1. Mounting Remote Directories

Before beginning, you should double-check to make sure your mount program is new enough (version 2.10m if you want to use Version 3 NFS), and that the client machine supports NFS mounting, though most standard distributions do. If you are using a 2.2 or later kernel with the /proc filesystem you can check the latter by reading the file /proc/filesystems and making sure there is a line containing nfs. If not, typing insmod nfs may make it magically appear if NFS has been compiled as a module; otherwise, you will need to build (or download) a kernel that has NFS support built in. In general, kernels that do not have NFS compiled in will give a very specific error when the mount command below is run.
To begin using a machine as an NFS client, you will need the portmapper running on that machine, and to use NFS file locking, you will also need rpc.statd and rpc.lockd running on both the client and the server.
With portmap, lockd, and statd running, you should now be able to mount the remote directory from your server just the way you mount a local hard drive, with the mount command. Continuing our example from the previous section, suppose our server above is called master.foo.com, and we want to mount the /home directory on slave1.foo.com. Then, all we have to do, from the root prompt on slave1.foo.com, is type:
# mount master.foo.com:/home /mnt/home
and the directory /home on master will appear as the directory /mnt/home on slave1. (Note that this assumes we have created the directory /mnt/home as an empty mount point beforehand.)
You can unmount the file system by typing:
# umount /mnt/home
just as you would for a local file system.

2. Getting NFS File Systems to be Mounted at Boot Time

NFS file systems can be added to your /etc/fstab file the same way local file systems can, so that they mount when your system starts up. The only difference is that the file system type will be set to nfs and the dump and fsck order (the last two entries) will have to be set to zero. So for our example above, the entry in /etc/fstab would look like:
# device       mountpoint     fs-type     options      dump fsckorder
...
master.foo.com:/home  /mnt/home    nfs         rw            0    0
...
See the man pages for fstab if you are unfamiliar with the syntax of this file. If you are using an automounter such as amd or autofs, the options in the corresponding fields of your mount listings should look very similar if not identical.
At this point you should have NFS working, though a few tweaks may still be necessary to get it to work well.

3. Mount Options

3.1. Soft versus Hard Mounting

There are some options you should consider adding at once. They govern the way the NFS client handles a server crash or network outage. One of the cool things about NFS is that it can handle this gracefully, if you set up the clients correctly. There are two distinct failure modes:
soft
If a file request fails, the NFS client will report an error to the process on the client machine requesting the file access. Some programs can handle this with composure, most won't. We do not recommend using this setting; it is a recipe for corrupted files and lost data. You should especially not use this for mail disks --- if you value your mail, that is.
hard
The program accessing a file on a NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed (except by a "sure kill") unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. We recommend using hard,intr on all NFS mounted file systems.
Picking up from the previous example, the fstab would now look like:
# device             mountpoint  fs-type    options    dump fsckord
...
master.foo.com:/home  /mnt/home   nfs      rw,hard,intr  0     0
...
The rsize and wsize mount options specify the size of the chunks of data that the client and server pass back and forth to each other.
The defaults may be too big or too small; there is no size that works well on all or most setups. On the one hand, some combinations of Linux kernels and network cards (largely on older machines) cannot handle blocks that large. On the other hand, if they can handle larger blocks, a bigger size might be faster.
Getting the block size right is an important factor in performance and is a must if you are planning to use the NFS server in a production environment. 
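
As an illustration only (not a recommendation for your particular hardware), an fstab entry with explicit block sizes might look like the following; 8192 bytes is a commonly tried starting point, and you should benchmark before settling on a value:

master.foo.com:/home  /mnt/home   nfs   rw,hard,intr,rsize=8192,wsize=8192   0   0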


Monday, April 01, 2013


List of Port Numbers for Linux-based Services


Port #   Name   Comment
1 tcpmux TCP port service multiplexer
7 echo Echo service
11 systat System Status service for listing connected ports
18 msp Message Send Protocol
20 ftp-data FTP data port
21 ftp File Transfer Protocol (FTP) port; sometimes used by File Service Protocol (FSP)
22 ssh Secure Shell (SSH) service
23 telnet The Telnet service
25 smtp Simple Mail Transfer Protocol (SMTP)
37 time Time Protocol
42 nameserver Internet Name Service
53 domain domain name services (such as BIND)
67 bootps Bootstrap Protocol (BOOTP) services; also used by Dynamic Host Configuration Protocol (DHCP) services
68 bootpc Bootstrap (BOOTP) client; also used by Dynamic Host Configuration Protocol (DHCP) clients
80 http HyperText Transfer Protocol (HTTP) for World Wide Web (WWW) services
110 pop3 Post Office Protocol version 3
115 sftp Simple File Transfer Protocol (SFTP) services
123 ntp Network Time Protocol (NTP)
143 imap Internet Message Access Protocol (IMAP)
161 snmp Simple Network Management Protocol (SNMP)
174 mailq MAILQ email transport queue
177 xdmcp X Display Manager Control Protocol (XDMCP)
194 irc Internet Relay Chat (IRC)
209 qmtp Quick Mail Transfer Protocol (QMTP)
220 imap3 Internet Message Access Protocol version 3
389 ldap Lightweight Directory Access Protocol (LDAP)
443 https Secure Hypertext Transfer Protocol (HTTPS)
546 dhcpv6-client Dynamic Host Configuration Protocol (DHCP) version 6 client
547 dhcpv6-server Dynamic Host Configuration Protocol (DHCP) version 6 Service
565 whoami whoami user ID listing
636 ldaps Lightweight Directory Access Protocol over Secure Sockets Layer (LDAPS)
873 rsync rsync file transfer services
992 telnets Telnet over Secure Sockets Layer (TelnetS)
993 imaps Internet Message Access Protocol over Secure Sockets Layer (IMAPS)
995 pop3s Post Office Protocol version 3 over Secure Sockets Layer (POP3S)
2049 nfs NFS (nfsd, rpc.nfsd, rpc.portmap)
3306 mysql MySQL default connection port
3690 svn Subversion (SVN) connection port
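
To check which of these ports are actually in use on a given machine, you can list the listening sockets, or look a service up in /etc/services; for example (command options vary slightly between systems):

[root@linuxshelf root]# netstat -tulnp
[root@linuxshelf root]# grep -w 3306 /etc/services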

Wednesday, March 06, 2013

Adding new hard disk in LINUX BOX

Adding a Hard Drive in LINUX BOX



Adding New Drives

There are many reasons why you would need to add a new drive to your Linux box. You might have outgrown your current space limitations, or you may want to add a separate drive for a specific project or service. In any case, if you follow this guide, you should have no problems. First, you must be familiar with the naming scheme Linux uses for your drives.

 

Creating, Mounting, and Configuring New Partitions

Before adding an extra drive, this machine had 2 physical drives. Both of them were named accordingly (sda and sdb) before the new drive was added. The second drive containing the swap partitions was automatically renamed when the new drive was added. Notice the command and output below:
[root@linuxshelf root]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             8.3G  2.4G  5.5G  30% /
/dev/sda2              99M   26M   69M  27% /boot
/dev/sdc1              16G   13G  2.3G  85% /export  <-- old sdb renamed to sdc by Linux
none                  250M     0  250M   0% /dev/shm
[root@linuxshelf root]# 

This command simply lists all currently mounted drives, their names, and space usage. Notice that sdb is not presently mounted. However, we know that it exists; otherwise, there would not be an sdc present. I could not add my new drive as sdc because my SCSI hotswap drive cage reserves the first two slots for 1.5" drives. So I was forced to make the new drive sdb because it is a 1.5" drive.

 

Setting Partitions

You should be fairly familiar with fdisk. Its commands are somewhat different from those of its DOS equivalent. See the following commands and output:
[root@linuxshelf root]# fdisk /dev/sdb

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): 

If there is a problem and there is no drive associated with /dev/sdb, you will get an error message. Remember that nothing will actually be executed until you issue a w command. It's always a good idea to double-check the values you enter for each command; doing so will ensure that you aren't forgetting anything. Let's get started!
Command (m for help): p

Disk /dev/sdb: 50.0 GB, 50019202560 bytes
255 heads, 63 sectors/track, 6081 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System

Command (m for help): 

If you issue a p command, you will see any partitions that currently exist on the drive. You can see by the output above there are no existing partitions. This drive is unpartitioned and unformatted. To create a new partition, use the n command.
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-6081, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-6081, default 6081): 6081

Command (m for help): 

In the output above, notice the cylinder interval I selected. Using the entire range creates one partition across the entire drive. So, in order to create a primary partition on /dev/sdb we issued the following commands:
  • n (creates a new partition)
  • p (creates a primary partition)
  • 1 (the number 1 denotes the partition will be /dev/sdb1)
We can check the partition specifications we just entered by using the p command again.
Command (m for help): p

Disk /dev/sdb: 50.0 GB, 50019202560 bytes
255 heads, 63 sectors/track, 6081 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1      6081  48845601   83  Linux

Command (m for help): 

Notice the new partition (/dev/sdb1) in the listing. However, we must issue a w command to finalize it. If you messed anything up, you can use the d command and specify which partition you want to delete.
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@linuxshelf root]# 
 

 

Formatting

Now that the partition has been created, you need to format the drive. You can format it with almost any file system you wish. However, the most common Linux formats are ext2 and ext3. Ext3 is simply a candy coated version of ext2 that adds a journaling feature. You must specify which partition to format by calling the device and partition number like this:
[root@linuxshelf root]# mkfs -t ext3 /dev/sdb1
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
6111232 inodes, 12211400 blocks
610570 blocks (5.00%) reserved for the super user
First data block=0
373 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424

Writing inode tables: done                            
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@linuxshelf root]# 

What did we do there? Using the mkfs (make file system) command, we specified the type (with -t ext3) and the device and partition name (/dev/sdb1). You have successfully partitioned and formatted your new drive. Wait, you're not done yet. You will want to mount this partition to make it usable. You will also want this partition to mount automatically when you reboot the machine.

 

Mounting

In order to automatically mount a partition, you must edit the /etc/fstab file. The fstab file tells Linux where to mount all partitions located within the system. The output below shows the current fstab file before including the newly added drive:
[root@linuxshelf root]# vi /etc/fstab
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
LABEL=/export           /export                 ext3    defaults        1 2
none                    /proc                   proc    defaults        0 0
none                    /dev/shm                tmpfs   defaults        0 0
/dev/sdb2               swap                    swap    defaults        0 0
/dev/cdrom              /mnt/cdrom              udf,iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner,kudzu 0 0

You may notice I viewed this file using vi. Vi is a simple text editor that may or may not be loaded on your Linux system. It is somewhat similar to emacs. In any case, both programs can perform the same task. We will mount the new partition as /media. Remember to create a directory named /media first (a command for this follows the listing), otherwise fstab won't be able to mount the partition. The new entry is the /dev/sdb1 line in the output below:
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
none                    /dev/pts                devpts  gid=5,mode=620  0 0
LABEL=/export           /export                 ext3    defaults        1 2
none                    /proc                   proc    defaults        0 0
none                    /dev/shm                tmpfs   defaults        0 0
/dev/sdb1               /media                  ext3    defaults        1 2
/dev/sdb2               swap                    swap    defaults        0 0
/dev/cdrom              /mnt/cdrom              udf,iso9660 noauto,owner,kudzu,ro 0 0
/dev/fd0                /mnt/floppy             auto    noauto,owner,kudzu 0 0
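
If the /media mount point does not already exist, create it before mounting (an empty directory is all that is needed):

[root@linuxshelf root]# mkdir /media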

Next, issue a simple mount command providing the partition name:
[root@linuxshelf export]# mount /dev/sdb1
[root@linuxshelf export]#

You're all done! You will be able to access the /media folder immediately and after the machine reboots as fstab will automatically re-mount it for you. If you want to verify the partition is successfully present and mounted, use the following commands:
[root@linuxshelf media]# mount
/dev/sda1 on / type ext3 (rw)
none on /proc type proc (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
/dev/sda2 on /boot type ext3 (rw)
/dev/sdc1 on /export type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/sdb1 on /media type ext3 (rw)
[root@linuxshelf media]# 

The /dev/sdb1 line shows our new drive freshly mounted. You can check the space usage if you issue the following command.
[root@linuxshelf media]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             8.3G  2.4G  5.5G  30% /
/dev/sda2              99M   26M   69M  27% /boot
/dev/sdc1              16G   13G  2.3G  85% /export
none                  250M     0  250M   0% /dev/shm
/dev/sdb1              46G   33M   44G   1% /media
[root@linuxshelf media]# 

Thanks for Reading

Saturday, January 19, 2013

VI tips


Vi Tips and techniques

Why you still hate vi

  *  Your fingers still hit the wrong keys sometimes.
  *  You forget which mode you are in.

Facts
   
We are stuck with vi. But in return it is powerful and designed to minimize the hand movement needed to change a text file. As a result it uses modes, which means that every key has several meanings depending on the internal state of vi. In command mode each key is a command or part of a command. I've listed some of these below. Some keys in command mode shift you into other modes. For instance "i" gets you into insert mode, and you stay there until you tap the <Esc> key (or CTRL/[ if you don't have an Esc key!).
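
For reference, here are a few of the most commonly used command-mode keys (standard vi; not an exhaustive list):

  *  i - insert text before the cursor (press <Esc> to return to command mode)
  *  a - append text after the cursor
  *  x - delete the character under the cursor
  *  dd - delete the current line
  *  yy - yank (copy) the current line
  *  p - paste the last deleted or yanked text
  *  u - undo the last change
  *  /text - search forward for "text"
  *  :w - write (save) the file
  *  :q! - quit without saving changes
  *  :wq - write the file and quit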





Thursday, March 01, 2012

How to create a self-signed SSL Certificate ...


How to create a self-signed SSL Certificate ...

...  which can be used for testing purposes or internal usage

Overview
The following is an extremely simplified view of how SSL is implemented and what part the certificate plays in the entire process.
Normal web traffic is sent unencrypted over the Internet. That is, anyone with access to the right tools can snoop all of that traffic. Obviously, this can lead to problems, especially where security and privacy are necessary, such as with credit card data and bank transactions. The Secure Sockets Layer (SSL) is used to encrypt the data stream between the web server and the web client (the browser).
SSL makes use of what is known as asymmetric cryptography, commonly referred to as public key cryptography. With public key cryptography, two keys are created, one public, one private. Anything encrypted with either key can only be decrypted with its corresponding key. Thus if a message or data stream were encrypted with the server's private key, it can be decrypted only using its corresponding public key, ensuring that the data only could have come from the server.
If SSL utilizes public key cryptography to encrypt the data stream traveling over the Internet, why is a certificate necessary? The technical answer to that question is that a certificate is not really necessary - the data is secure and cannot easily be decrypted by a third party. However, certificates do serve a crucial role in the communication process. The certificate, signed by a trusted Certificate Authority (CA), ensures that the certificate holder is really who he claims to be. Without a trusted signed certificate, your data may be encrypted, however, the party you are communicating with may not be whom you think. Without certificates, impersonation attacks would be much more common.
Step 1: Generate a Private Key
The openssl toolkit is used to generate an RSA Private Key and CSR (Certificate Signing Request). It can also be used to generate self-signed certificates which can be used for testing purposes or internal usage.
The first step is to create your RSA Private Key. This key is a 1024 bit RSA key which is encrypted using Triple-DES and stored in a PEM format so that it is readable as ASCII text.


[danie@localhost ~] $ sudo su -
[root@localhost ~] # cd /etc/httpd/conf/

[root@localhost conf] #openssl genrsa -des3 -out server.key 1024

Generating RSA private key, 1024 bit long modulus
.........................................................++++++
........++++++
e is 65537 (0x10001)
Enter PEM pass phrase:
Verifying password - Enter PEM pass phrase:

Step 2: Generate a CSR (Certificate Signing Request)
Once the private key is generated a Certificate Signing Request can be generated. The CSR is then used in one of two ways. Ideally, the CSR will be sent to a Certificate Authority, such as Thawte or Verisign who will verify the identity of the requestor and issue a signed certificate. The second option is to self-sign the CSR, which will be demonstrated in the next section.
During the generation of the CSR, you will be prompted for several pieces of information. These are the X.509 attributes of the certificate. One of the prompts will be for "Common Name (e.g., YOUR name)". It is important that this field be filled in with the fully qualified domain name of the server to be protected by SSL. If the website to be protected will be http://linuxshelf.blogspot.com, then enter linuxshelf.com at this prompt. The command to generate the CSR is as follows:


[root@localhost conf] #openssl req -new -key server.key -out server.csr

Country Name (2 letter code) [GB]:IN
State or Province Name (full name) [Berkshire]:Tamilnadu
Locality Name (eg, city) [Newbury]:Chennai
Organization Name (eg, company) [My Company Ltd]:GIG INFOTECH
Organizational Unit Name (eg, section) []:Information Technology
Common Name (eg, your name or your server's hostname) []:linuxshelf.com
Email Address []:linuxshelf at gmail dotcom
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
 
Step 3: Remove Passphrase from Key
One unfortunate side-effect of the pass-phrased private key is that Apache will ask for the pass-phrase each time the web server is started. Obviously this is not necessarily convenient as someone will not always be around to type in the pass-phrase, such as after a reboot or crash. mod_ssl includes the ability to use an external program in place of the built-in pass-phrase dialog, however, this is not necessarily the most secure option either. It is possible to remove the Triple-DES encryption from the key, thereby no longer needing to type in a pass-phrase. If the private key is no longer encrypted, it is critical that this file only be readable by the root user! If your system is ever compromised and a third party obtains your unencrypted private key, the corresponding certificate will need to be revoked. With that being said, use the following command to remove the pass-phrase from the key:


[root@localhost conf] #cp server.key server.key.org 
[root@localhost conf] #openssl rsa -in server.key.org -out server.key

The newly created server.key file has no more passphrase in it.
-rw-r--r-- 1 root root 745 Jun 29 12:19 server.csr
-rw-r--r-- 1 root root 891 Jun 29 13:22 server.key
-rw-r--r-- 1 root root 963 Jun 29 13:22 server.key.org

 
Step 4: Generating a Self-Signed Certificate
At this point you will need to generate a self-signed certificate because you either don't plan on having your certificate signed by a CA, or you wish to test your new SSL implementation while the CA is signing your certificate. This temporary certificate will generate an error in the client browser to the effect that the signing certificate authority is unknown and not trusted.
To generate a temporary certificate which is good for 365 days, issue the following command:
[root@localhost conf] #openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

Signature ok
subject=/C=IN/ST=Tamilnadu/L=Chennai/O=GIG INFOTECH/OU=Information
Technology/CN=linuxshelf.com/Email=linuxshelf at gmail dot com
Getting Private key
 
Step 5: Installing the Private Key and Certificate
When Apache with mod_ssl is installed, it creates several directories in the Apache config directory. The location of this directory will differ depending on how Apache was compiled.
[root@localhost conf] #mkdir ssl.crt
[root@localhost conf] #mkdir ssl.key 

[root@localhost conf] #cp /etc/httpd/conf/server.crt /etc/httpd/conf/ssl.crt/server.crt
[root@localhost conf] #cp /etc/httpd/conf/server.key /etc/httpd/conf/ssl.key/server.key
 
Step 6: Configuring SSL Enabled Virtual Hosts
 
[root@localhost conf] #vi /etc/httpd/conf.d/ssl.conf
 
SSLEngine on
SSLCertificateFile /etc/httpd/conf/ssl.crt/server.crt
SSLCertificateKeyFile /etc/httpd/conf/ssl.key/server.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
CustomLog logs/ssl_request_log \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
 
Step 7: Restart Apache and Test
/etc/init.d/httpd stop (or) service httpd stop
/etc/init.d/httpd start (or) service httpd start
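
Once Apache is back up, you can check that the new certificate is being served with openssl's built-in client (replace localhost with your server's host name; press Ctrl-C to exit):

[root@localhost conf] #openssl s_client -connect localhost:443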


Thanks for Reading