
How to Set Up a DNS Client / Name Server IP?

I have already written many posts about DNS and how it works, which you can find by browsing the blog. DNS translates the name of a website (http://www.a7host.com) into an IP address (173.0.139.139). A web address is not just the name of a website; it is also what routes traffic over the internet.
Today's post covers DNS client configuration.

New Linux users often find it difficult to set up or modify name server addresses (NS1/NS2). For small, local name resolution you can use the /etc/hosts file. Name servers exist because it is easier to remember a name like a7host.com than an IP address like 173.0.139.139. To configure DNS resolution on Linux, we edit the /etc/resolv.conf file, which lists the name servers to use. We need to point it at our ISP's DNS servers (or our own) before network services such as www or smtp can resolve names.

In short, /etc/resolv.conf contains the IP addresses of the name servers (DNS resolvers) that translate names into IP (Internet Protocol) addresses for any reachable node on the network.

Set Up DNS Name Resolution

Use the su command to become root before configuring the system as a DNS client.
Step #1: Open the /etc/resolv.conf file:
# vi /etc/resolv.conf

Step #2: Add your ISP's name servers as follows:

search isp.com
nameserver 202.54.1.110
nameserver 202.54.1.112
nameserver 202.54.1.115
Note: a maximum of three name servers can be defined at a time.

Step #3: Test the setup with the nslookup or dig command:

$ dig www.nixcraft.com
$ nslookup www.nixcraft.com
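To double-check what ended up in the file, you can pull the nameserver lines back out of it. A minimal sketch, using a sample file in /tmp rather than the live /etc/resolv.conf:

```shell
#!/bin/sh
# Write a sample resolv.conf (a stand-in for the real /etc/resolv.conf)
cat > /tmp/resolv.conf.sample <<'EOF'
search isp.com
nameserver 202.54.1.110
nameserver 202.54.1.112
nameserver 202.54.1.115
EOF

# Print only the name server IPs, in the order the resolver will try them
awk '/^nameserver/ { print $2 }' /tmp/resolv.conf.sample
```

Pointing the same awk one-liner at /etc/resolv.conf shows the servers your own system is currently using.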

Here are a few more glossary terms worth knowing.

DNS Client: – the DNS client reads the /etc/resolv.conf file, which defines the IP addresses of the DNS servers to query.

BIND: – BIND is a suite of DNS software that runs on Linux; the name is an acronym for Berkeley Internet Name Domain.

DNS Resolution Process and How a DNS Server Works

DNS servers are responsible for finding the IP address requested by a client. DNS is part of the set of protocols that connect computers on the internet, known as TCP/IP. DNS works like the internet's GPS: its job is to translate a user-friendly address such as http://www.a7host.com into an IP address such as 173.0.139.139 so the host can be reached. When a user types a website address into a browser such as Firefox, Chrome or IE, the browser resolves the name into an IP address; this process is called a forward DNS lookup.

A forward lookup obtains an IP address from a domain name, and a reverse lookup retrieves the domain name from an IP address. At the top of the hierarchy, 13 root name servers around the world provide name server details. DNS servers are like address books that keep the mapping between names and IP addresses. Every device connected to the internet has a unique IP address used to route and retrieve information. These IP addresses are similar to phone numbers, but thanks to DNS we don't need to remember them in numerical form.
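You can try both directions from the command line with getent, which consults /etc/hosts first and then DNS. A small sketch, assuming a standard Linux system with the usual localhost entry in /etc/hosts:

```shell
#!/bin/sh
# Forward lookup: name -> address
getent hosts localhost

# Reverse lookup: address -> name
getent hosts 127.0.0.1
```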

The process of obtaining an IP address from a DNS name is known as DNS resolution (or hostname resolution), and the component that carries it out is called the resolver.

A DNS server or client performs several steps to retrieve the answer.

  1. First, the browser or local application checks the local machine for the requested domain name; if it is available there, the process ends.
  2. If the requested domain is not available on the local machine, the request is sent to a name server (NS) to find the associated IP address.
  3. On receiving the request, the name server checks its local cache to see whether the requested name was recently looked up. If it is in the cache, it answers from there.

Name servers exist for countries and organizations alike. Every name server has information about the machines in its own domain and about other name servers. The root name servers, in turn, hold the information about the top-level domains (.com, .edu, etc.).

  1. The name server checks whether the domain is local or not.
  2. If it is not local, the name server takes the TLD (top-level domain), say .in, and asks a root server which name server is responsible for .in.
  3. The name server then strips off the next label of the domain and asks the server responsible for .in which name server handles it; the answer is another name server with a numeric IP address.
  4. Each following label is handled the same way, querying the name server returned by the previous step, until a server authoritative for the full domain name is reached.
  5. In the final step, that authoritative server is asked for the IP address of the domain name, and the final IP address comes back.
  6. The name server returns the result to the application.
  7. The result is also stored in the name server's local cache with an expiration time, to avoid a fresh lookup next time.
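The lookup order in the steps above (local machine first, then the cache, then a fresh query) can be sketched with a toy resolver. Everything here — the file names and the entries — is made up purely for illustration:

```shell
#!/bin/sh
# Toy illustration of lookup order: local hosts file -> cache -> name servers.
HOSTS=/tmp/demo-hosts
CACHE=/tmp/demo-cache
printf '127.0.0.1 myhost\n' > "$HOSTS"
printf '173.0.139.139 a7host.com\n' > "$CACHE"

resolve() {
  name=$1
  # 1. Check the local machine (hosts file)
  ip=$(awk -v n="$name" '$2 == n { print $1 }' "$HOSTS")
  [ -n "$ip" ] && { echo "local: $ip"; return; }
  # 2. Check the name server's cache of recent lookups
  ip=$(awk -v n="$name" '$2 == n { print $1 }' "$CACHE")
  [ -n "$ip" ] && { echo "cache: $ip"; return; }
  # 3. Otherwise a fresh lookup would walk the hierarchy from the root
  echo "would query root -> TLD -> authoritative servers for $name"
}

resolve myhost        # -> local: 127.0.0.1
resolve a7host.com    # -> cache: 173.0.139.139
resolve example.org
```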

Importance of Unique IP Address

An IP address is the unique address of a website or network; it is what distinguishes your website from all others. An IPv4 address is a combination of four numbers that together form a unique address, representing a website name in numerical form — for example, google.com might map to 73.14.213.99. To locate or open a website we type its domain name, which is associated with the IP address on the server.
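As a small aside (not from the original post), the "four numbers" rule can be checked mechanically: each of the four dot-separated parts must be a number from 0 to 255. A rough shell sketch:

```shell
#!/bin/sh
# Rough IPv4 format check: four dot-separated numbers, each in 0-255.
is_ipv4() {
  echo "$1" | awk -F. 'NF == 4 {
    for (i = 1; i <= 4; i++)
      if ($i !~ /^[0-9]+$/ || $i + 0 > 255) exit 1
    exit 0
  }
  { exit 1 }'
}

is_ipv4 "173.0.139.139" && echo "valid"
is_ipv4 "999.1.1.1"     || echo "invalid"
```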

The biggest danger of sharing one IP address across several websites is that if one of them is banned or flagged as spam by a search engine such as Google, every other website hosted on the same IP address can be penalized along with it. This is the main drawback of using the same IP address for different websites; a site with a unique IP address is more reliable and safer than one on a shared IP. Because available IP addresses are scarce, a web server often hosts more than one website and assigns them all the same IP address. A website with a dedicated IP address is unaffected by the other websites on the same server, so where possible choose an IP address that is not shared with anyone else.

Because the internet was running out of address space, it was announced in 2011 that the last blocks of IPv4 addresses had been allocated. Given the limited capacity of IPv4, researchers designed a more powerful and spacious protocol suite, IPv6, so that addresses would not run out again. But because IPv4 is implemented in most web servers and ISPs, it is not possible to switch over entirely at once: a host that speaks only IPv4 cannot reach IPv6 addresses directly. IPv6 can hold roughly 340 undecillion addresses and has been available since 1999, but adoption has been slow, and migrating infrastructure built around the 4.3 billion IPv4 addresses takes time.

Web Servers and How They Work

Web servers are what make websites run on the internet. The internet is a system in which millions of computer networks are interconnected using the IP suite known as TCP/IP, serving billions of users worldwide. These computers share resources, data and network access with each other under various protocols such as HTTP, FTP and TCP. To run a website online and make it accessible across the world, we store it on a web server: a combination of powerful hardware and software that is responsible for serving the stored files. HTTP (HyperText Transfer Protocol) is the protocol mainly used to transfer those files over the internet, and it relies on TCP (Transmission Control Protocol) to carry requests and responses between the browser (client) and the server.

It is the web server that responds to client requests; both the web server and the web client (browser) use HTTP for this request-response exchange.

[Diagram: how a web server works]

This is the fundamental diagram of how a web server works: it shows how a web browser sends a request to the web server and the web server replies over HTTP. TCP (Transmission Control Protocol) establishes the connection between web browser and web server; TCP underlies multiple protocols such as FTP, HTTP and SMTP and provides a reliable connection. To end the session, the TCP connection or the web browser is closed. A web browser is user-side software that makes requests to the web server using the IP (Internet Protocol) address of a website or webpage. A website's IP address is assigned when the site is stored on the web hosting server.

When a web browser requests an address such as http://www.a7host.com, it first checks whether the name is stored in its cache; if found there, it answers from the cache, otherwise the request goes out to DNS servers on the internet to find the corresponding IP address. It is the DNS server that gives the IP address back to the web browser. Once the browser knows where the IP address points, it requests the page from that server. As soon as the web server receives the request, it responds with the content if available, or returns an appropriate error message otherwise. You then see the page without ever noticing this back-end process. That is the basic working of a web server, built on specialized software and carefully configured hardware.
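The request/response loop can be observed end to end on a single machine. A sketch, assuming python3 and curl are installed; the port 8123 and the /tmp paths are arbitrary choices for the demo:

```shell
#!/bin/sh
# Serve a single page from a scratch directory
mkdir -p /tmp/wwwdemo
echo '<h1>hello from the web server</h1>' > /tmp/wwwdemo/index.html
python3 -m http.server 8123 --directory /tmp/wwwdemo >/dev/null 2>&1 &
SRV=$!
sleep 1

# The "browser" side: an HTTP GET over TCP; -i includes the status line and headers
curl -s -i http://127.0.0.1:8123/index.html > /tmp/wwwdemo/response.txt
head -n 1 /tmp/wwwdemo/response.txt   # HTTP/1.0 200 OK

kill "$SRV"
```

The status line and headers in the saved response are exactly the "revert back by HTTP" step from the diagram; the HTML body follows them.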

How to Block Bad Bots, Crawlers, Scrapers and Malware?

Web hosting is the basic requirement for running a website on the internet. Webmasters store their websites on hosting servers (Apache, IIS, ColdFusion, etc.) so they can be accessed worldwide. The internet is a wide and open place, and that open accessibility also makes it easy for bad bots to crawl a website. We choose an appropriate amount of bandwidth when selecting a hosting plan, and the right plan depends on expected bandwidth usage: a mostly informational website requires less bandwidth than a live streaming or video gaming site.

We often see unidentified bots relentlessly consuming a website's bandwidth, which slows the site down and hurts its ranking on search engines such as Google, Yahoo and Bing. Bad bots, scrapers and malware are unwanted visitors that waste server bandwidth and drain server resources. Here are some effective methods to stop them.

Block by Robots.txt

The robots.txt file is what grants crawlers permission to crawl a website, and it can also be used to block bad bots. It is kept at the root of the domain, at the domain.com/robots.txt path. Most crawlers abide by the rules in robots.txt while crawling a site, but some bad bots ignore its instructions. The file can be edited in any plain-text editor such as Notepad. Below is the most basic robots.txt:

User-agent: *
Disallow:

These two lines allow all bots to crawl all pages of the website. If, for example, we want to block the Scooter bot, we write:

User-agent: scooter
Disallow: /

These lines prevent the Scooter bot from crawling our website. If you want to block only particular pages, such as confidential ones, disallow just those paths:

User-agent: scooter
Disallow: /wp-admin/

This denies the Scooter bot access to the wp-admin directory; apart from /wp-admin/, it may crawl all pages. Some bad bots ignore the robots.txt file entirely and don't follow its instructions. Such bots can be blocked with the .htaccess file instead. The .htaccess file is supported by the Apache server and is very helpful for blocking scrapers and malware: it lets us block bots by user-agent name or by IP address. The names and IP addresses of bots can be found in the access logs section of cPanel, the admin control panel that many Apache hosts provide for setting a site's permissions and rules.

Below is the code to block bad bots by their user-agent names.

# Redirecting offline browsers and ‘bad bots’ to a honeypot
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^AbachoBOT [OR]
RewriteCond %{HTTP_USER_AGENT} ^anarchie [OR]
RewriteCond %{HTTP_USER_AGENT} ^antibot [OR]
RewriteCond %{HTTP_USER_AGENT} ^appie [OR]
RewriteCond %{HTTP_USER_AGENT} ^ASPSeek [OR]
RewriteCond %{HTTP_USER_AGENT} ^asterias [OR]
RewriteCond %{HTTP_USER_AGENT} ^attach [OR]
RewriteCond %{HTTP_USER_AGENT} ^autoemailspider [OR]
RewriteCond %{HTTP_USER_AGENT} ^B2w [OR]
RewriteCond %{HTTP_USER_AGENT} ^BackDoorBot [OR]
RewriteCond %{HTTP_USER_AGENT} ^BackWeb [OR]
RewriteCond %{HTTP_USER_AGENT} ^Baidu [OR]
RewriteCond %{HTTP_USER_AGENT} ^Bandit [OR]
RewriteCond %{HTTP_USER_AGENT} ^BatchFTP [OR]
RewriteCond %{HTTP_USER_AGENT} ^Black\ Hole [OR]
RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
RewriteCond %{HTTP_USER_AGENT} ^BlowFish [OR]
RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto [OR]
RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:craftbot@yahoo.com [OR]
RewriteCond %{HTTP_USER_AGENT} ^zerxbot [OR]
RewriteCond %{HTTP_USER_AGENT} ^Zeus
RewriteRule ^(.*)$ /public_html/honeypotdirectory/honeypot.php

Above are the names of some known bad bots; you can find more at Scamalert Network.

Block bots by IP Address

The .htaccess file is the ultimate tool for controlling and maintaining a website. We can block bots by their IP addresses as shown below:

deny from 95.211.21.91
deny from 94.0.0.0/8
deny from 159.226.0.0/16
deny from 202.111.175.0/24
deny from 218.7.0.0/16

You can also deny all IP addresses simply with:

Deny from all
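Note that Deny/Allow is Apache 2.2 syntax. If your host runs Apache 2.4, the equivalent rules use the newer Require directives from mod_authz_core; for example, to deny everyone, or to allow everyone except one blocked IP:

Require all denied

<RequireAll>
    Require all granted
    Require not ip 95.211.21.91
</RequireAll>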

These are some of the ways to block bad bots and save a website's bandwidth. Web hosting companies are responsible for regularly checking for such activity.

How to Make a Website Live on the Internet

You have designed and developed an enticing web portal that represents your products and services; now it's time to make it live on the internet. However attractive the design and development, it is all ineffectual if the site is not online. Here I am going to describe how to make a website live on the internet. To do so you need to possess three things: a domain name, server space, and FTP. The domain name is the unique identity of your web portal and helps users remember the website; in a7host.com, for example, a7host.com is the domain name. Server space is the disk area where your website is stored for serving over the internet; you can't take a website live without server space on a hosting server such as an Apache or Windows host. Hosting comes in different types — Linux hosting, Windows hosting, ColdFusion hosting, and so on — so select the one that suits your website, and move on once you have chosen a reliable and secure host. FTP (File Transfer Protocol) establishes a connection between your local computer and the server; it is FTP that uploads and downloads files, and you can use it through a client such as FileZilla. Once you have purchased a memorable domain, obtained web space to store the site, and installed an FTP client on your computer, you have all three prerequisites.

Follow these steps to complete the process.

1. Launch the FileZilla FTP client from the programs list or its desktop icon. It looks like the image below.

[Image: FileZilla FTP main screen]

2. Before you start, it's worth getting to know the client's various areas. The image below describes their functions.

[Image: FTP display areas and working details]

The window is divided into four parts. The top part is the login area, where you enter the host name, user name and password to connect to the server; incorrect or missing values will prevent the connection. The next part shows the drives on the local computer and on the web hosting server: the left side of the screen is the local disk and the right side is the web server.

3. The remaining part shows the status of files being downloaded and uploaded. To download a file, go to the server pane and right-click -> Download; the file will be downloaded to its specified local folder. Uploading works the same way in reverse: select the file on the left (local) side and upload it.
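If you prefer the command line to FileZilla, curl can perform the same upload with its FTP support. A sketch that only prints the command (a dry run, since the host, user, password and file below are placeholders, not real account details):

```shell
#!/bin/sh
# HOST, USER, PASS and FILE are placeholders -- substitute your hosting
# account's real details before running the echoed command for real.
HOST=ftp.example.com
USER=myuser
PASS=mypassword
FILE=index.html

# Print the upload command rather than running it. curl -T uploads a file
# to the given FTP directory.
echo curl -T "$FILE" "ftp://$HOST/public_html/" --user "$USER:$PASS"
```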

These are the fundamentals of putting a website live on the internet.

What is the Domain Name System and a DNS Server?

DNS (Domain Name System) servers are used to manage the names of websites and other internet domains. It is DNS that lets a client type a human-readable query such as a7host.com into the browser instead of raw numbers. Every computer on the internet is allocated a unique IP address (e.g. 198.123.32.3) that is impractical for a human to remember; a domain name (a7host.com) is much easier. DNS lets a user connect to other computers on different networks by remembering their domain names, while reverse DNS translates an IP address back into a domain name. The Domain Name System as a whole is a collection of DNS servers.

DNS servers are special-purpose computers running specialized software. A DNS server has a public Internet Protocol (IP) address and holds a database of network names and addresses for other hosts; DNS servers communicate with each other using their own network protocols. DNS servers are organized in a hierarchy, and the top-level servers are known as root servers. The root servers store the data needed to reach the rest of the hierarchy, including the delegations for the top-level domains. The internet employs 13 root server identities, maintained by various organizations and named A through M.

Every company or organization maintaining a computer network has at least one server handling DNS queries. A DNS server holds the IP addresses for its own network as well as a cache of IP addresses recently looked up outside the network. Each computer on the network only needs to know one name server. When a computer requests an IP address, whether the name is inside the network or not, one of three things happens:

1. A quick response, if the IP address is registered locally.
2. Little or no wait, if the IP address was recently requested by someone within the network or organization.
3. A wait of seconds to minutes, which means you are the first to search for this name recently: there has been no hit in the local cache for it within roughly the last 12 hours to one week, so the local server performs the search against remote servers on behalf of your network.

DNS is like an electronic phone book that helps you find an IP address.