What is IP address "overlapping" within the context of NAT ?

network internet

IP address overlapping refers to a situation in which two locations that want to interconnect are both using the same IP address scheme. This is not an unusual occurrence; it often happens when companies merge or are acquired. Without special support, the two locations will not be able to connect and establish sessions. The overlapped IP address can be a public address assigned to another company, a private address assigned to another company, or an address from the range of private addresses defined in RFC 1918. Private IP addresses are unroutable on the public Internet and require NAT translation to allow connections to the outside world. The solution involves intercepting Domain Name System (DNS) name-query responses from the outside to the inside, setting up a translation for the outside address, and fixing up the DNS response before forwarding it to the inside host. A DNS server must be involved on both sides of the NAT device to resolve names for users wanting to connect between the two networks. NAT is able to inspect and perform address translation on the contents of DNS "A" and "PTR" records, as shown in Using NAT in Overlapping Networks.
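The DNS fix-up step described above can be sketched in a few lines. This is a hypothetical illustration, not NAT device code: the inside network, the translation pool, and the function name are all assumptions made for the example.

```python
import ipaddress

# Hypothetical sketch of the DNS "fix-up" step: if a name-query response
# from the outside contains an address that overlaps the inside network,
# the NAT device allocates a substitute address from a translation pool
# and rewrites the A record before forwarding it to the inside host.
INSIDE_NET = ipaddress.ip_network("10.1.1.0/24")            # assumed inside range
POOL = iter(ipaddress.ip_network("172.16.0.0/24").hosts())  # assumed outside pool
translations = {}

def fix_a_record(answer_ip: str) -> str:
    addr = ipaddress.ip_address(answer_ip)
    if addr in INSIDE_NET:  # overlapping address: translate it
        translations.setdefault(answer_ip, str(next(POOL)))
        return translations[answer_ip]
    return answer_ip        # no overlap: forward unchanged

print(fix_a_record("10.1.1.5"))     # rewritten to an address from the pool
print(fix_a_record("203.0.113.9"))  # 203.0.113.9 (no overlap, unchanged)
```

The same translation entry is reused for repeated queries for the same name, so inside hosts see a stable substitute address.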

What is Service Provider PAT Port Allocation Enhancement for RTP and RTCP?

network internet

The Service Provider PAT Port Allocation Enhancement for RTP and RTCP feature ensures that, for SIP, H.323, and Skinny voice calls, the port numbers used for RTP streams are even and each corresponding RTCP stream uses the next (odd) port number, conforming to RFC 1889. The port number is translated to a number within the specified range: a call with a port number within the range will result in a PAT translation to another port number within that range, and likewise, a port number outside the range will not be translated to a number within it.
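The even/odd pairing rule the feature preserves can be stated in a couple of lines. This is an illustrative sketch of the RFC 1889 convention only, not Cisco IOS code; the function name is invented for the example.

```python
# RFC 1889 (now RFC 3550) convention: RTP uses an even port and the
# matching RTCP stream uses the next, odd port number.
def rtp_rtcp_pair(rtp_port):
    assert rtp_port % 2 == 0, "RTP ports are even by convention"
    return rtp_port, rtp_port + 1

print(rtp_rtcp_pair(16384))  # (16384, 16385)
```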

How to set up xdebug with Netbeans on Ubuntu

php code

xdebug is a PHP extension for debugging PHP code. It supports stack and function traces, profiling, and analysis of memory allocation and script execution, and it uses the DBGp debugging protocol. In this post we are going to see easy steps to install it and set it up with the Netbeans IDE on Ubuntu.

Step 1:

Install xdebug using the following command:
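The command itself was lost from this post; on the Ubuntu/PHP 5 setup assumed here, the typical install command would be:

```shell
sudo apt-get install php5-xdebug
```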


Step 2:

Browse to your PHP folder and locate the xdebug.so file. In my case it is /usr/lib/php5/20121212/xdebug.so. Copy this file's path.

Step 3:

Open your php.ini ( /etc/php5/apache2/php.ini ) and add the following lines at the end of the file:
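The lines themselves were lost from this post; a typical snippet for xdebug 2.x looks like this (the zend_extension path shown here is an example):

```ini
[xdebug]
zend_extension=/usr/lib/php5/20121212/xdebug.so
xdebug.remote_enable=on
xdebug.remote_handler=dbgp
xdebug.remote_host=localhost
xdebug.remote_port=9000
xdebug.idekey=netbeans-xdebug
```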

Replace the zend_extension path with the path you copied in the previous step.

Step 4:

Restart your Apache server using:
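The command was lost from this post; on Ubuntu with Apache 2 it would typically be:

```shell
sudo service apache2 restart
```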

Step 5:

Open Netbeans and go to Tools > Options > PHP > Debugging, and make sure that the Debug Port is set to 9000
and the Session ID to "netbeans-xdebug".

Step 6:

That's it, you are almost done. Now just create a new PHP project, add some breakpoints to any PHP file in it, and click Debug Project on the toolbar or press Ctrl+F5. This will start the debugger.



How to optimize Magento website performance


Magento is the most widely used e-commerce framework. In this post I have mentioned a few important points through which we can optimize overall Magento website performance. Most optimizations will work with any version of Magento. Note that here I assume your customized Magento website is built following all the recommended coding standards (for more information about Magento coding standards, please visit http://devdocs.magento.com/guides/v2.0/coding-standards/bk-coding-standards.html). The following are important points/guidelines to follow to improve Magento performance.

  • Enable Cache through Magento Admin Panel.
  • Use minified js/css and enable Merge CSS/JS through Magento Admin Panel
  • Enable code compilation after development is complete. If updates, code modifications, or extension installations are needed, code compilation must be disabled first; otherwise there will be errors.
  • Set the HTTP header field Connection: Keep-Alive through server configuration.
  • Do not keep backups or unrelated folders in the server's "/var/www/html/" other than the project's main folder.
  • Use PNG images.
  • Apply lazy loading on images.
  • Reduce HTML source code size by removing spaces and commented text and by inlining HTML; this helps the browser render the page faster.
  • Enable Flat Categories and Products: in the Magento admin, go to (top menu) System > Configuration, (left nav) Catalog > Catalog, (main page) Frontend. Set "Use Flat Catalog Category" and "Use Flat Catalog Product" to "Yes". Attributes that apply to categories and products are stored in separate database tables depending on their datatypes; 'flattening' puts all attributes in one table for Magento to retrieve. This has a positive impact on site speed, especially if the site has 1,000 or more products.
  • Optimize the MySQL server (enable Query Cache and tweak my.cnf parameters).
  • Enable gzip through server configuration, and note that you should not apply it to images, as images are already compressed.
  • As part of database maintenance do log cleaning.
  • Use PHP accelerators: using a PHP accelerator is another form of caching. Accelerators increase the performance of PHP scripts by caching them in their compiled state. You can use a PHP accelerator like APC, ZendOptimizer+, or XCache.
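Two of the server-side items above (Keep-Alive and gzip) can be set with a few Apache directives. A hedged sketch, assuming Apache 2 with mod_deflate available:

```apache
# Keep connections open for reuse (Connection: Keep-Alive)
KeepAlive On

# Compress text assets only; images are already compressed
<IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
```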

Rules for improving web page response time:
Studies have shown that web page response time can be improved by 25% to 50% by following rules such as the ones above.

How Can Many Users Share the Same Port ( TCP / HTTP Listening )

network internet

So, what happens when a server listens for incoming connections on a TCP port? For example, let's say you have a web server on port 80. Let's assume that your computer has the public IP address <local_ip> and the person that tries to connect to you has the IP address <foreign_ip>. This person can connect to you by opening a TCP socket to <local_ip>:80. Simple enough.
Intuitively (and wrongly), most people assume that it looks something like this:
Local Computer  |  Remote Computer
<local_ip>:80     |  <foreign_ip>:80
🙁  not actually what happens, but this is the conceptual model a lot of people have in mind.
This is intuitive, because from the standpoint of the client, he has an IP address and connects to a server at IP:PORT. Since the client connects to port 80, his port must be 80 too? This is a sensible thing to think, but it is not what actually happens. If it were correct, we could only serve one user per foreign IP address: once a remote computer connected, it would hog the port-80-to-port-80 connection, and no one else could connect.
Three things must be understood:
1. On a server, a process is listening on a port. Once it gets a connection, it hands it off to another thread. The communication never hogs the listening port.
2. Connections are uniquely identified by the OS by the following 5-tuple: (local-IP, local-port, remote-IP, remote-port, protocol). If any element in the tuple is different, then this is a completely independent connection.
3.  When a client connects to a server, it picks a random, unused high-order source port. This way, a single client can have up to ~64k connections to the server for the same destination port.
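The three points above can be demonstrated directly. The following sketch opens one listening socket and connects two clients to it; the OS hands each client a distinct ephemeral source port, so the two connections have different 5-tuples:

```python
import socket

# A minimal demonstration of point 3: two clients connecting to the same
# listening port get distinct ephemeral source ports from the OS, so
# their 5-tuples (and hence their connections) are independent.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(5)
port = server.getsockname()[1]

c1 = socket.create_connection(("127.0.0.1", port))
c2 = socket.create_connection(("127.0.0.1", port))
p1 = c1.getsockname()[1]        # ephemeral source port of client 1
p2 = c2.getsockname()[1]        # ephemeral source port of client 2

print(p1 != p2)  # True: same destination port, different source ports
c1.close(); c2.close(); server.close()
```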
So, this is really what gets created when a client connects to a server:
Local Computer   | Remote Computer              | Role
-----------------+------------------------------+-------------
<local_ip>:80    | <none>                       | LISTENING
<local_ip>:80    | <foreign_ip>:<random_port>   | ESTABLISHED
Looking at What Actually Happens
First, let’s use netstat to see what is happening on this computer. We will use port 500 instead of 80 (because a whole bunch of stuff is happening on port 80 as it is a common port, but functionally it does not make a difference).
netstat -atnp | grep -i ":500 "
As expected, the output is blank. Now let’s start a web server:
sudo python3 -m http.server 500
Now, here is the output of running netstat again:
Proto  Recv-Q  Send-Q  Local Address     Foreign Address   State
tcp    0       0*         LISTEN
So now there is one process that is actively listening (State: LISTEN) on port 500. The local address is, which is code for "listening on all interfaces". An easy mistake to make is to listen on, which will only accept connections from the current computer. So this is not a connection; this just means that a process requested to bind() to port 500 on all IPs, and that process is responsible for handling all connections to that port. This hints at the limitation that there can be only one process per computer listening on a given port (there are ways to get around that using multiplexing, but that is a much more complicated topic). If a web server is listening on port 80, it cannot share that port with other web servers.
So now, let’s connect a user to our machine:
quicknet -m tcp -t localhost:500 -p Test payload.
This is a simple script (https://github.com/grokit/quickweb) that opens a TCP socket, sends the payload (“Test payload.” in this case), waits a few seconds and disconnects. Doing netstat again while this is happening displays the following:
Proto  Recv-Q  Send-Q  Local Address     Foreign Address             State
tcp    0       0*                   LISTEN
tcp    0       0<random_port>    ESTABLISHED
If you connect with another client and do netstat again, you will see the following:
Proto  Recv-Q  Send-Q  Local Address     Foreign Address             State
tcp    0       0*                   LISTEN
tcp    0       0<random_port2>   ESTABLISHED
… that is, the second client used another random source port for its connection. The two connections differ in their 5-tuples, so there is never any confusion between them.
Can a second process bind() to the same port? Theoretically, yes; in practice, no. Most kernels (including Linux) do not allow a second bind() to an already-allocated port, although allowing it would not require a large patch. An application server achieves the same effect differently: it accepts connections on the one listening socket and hands them off to many worker threads or processes.
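The bind() limitation is easy to observe. In this sketch, one socket is already listening on a port and a second plain bind() to the same port fails with "address already in use":

```python
import socket

# Demonstrates the bind() limitation: while one socket is listening on
# a port, a second plain bind() to the same port is rejected.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
a.listen(1)
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    second_bind_ok = True
except OSError:
    second_bind_ok = False  # EADDRINUSE: address already in use

print(second_bind_ok)  # False
a.close(); b.close()
```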

Why do we need IPv6?


IPv6, the successor to IPv4, could solve the basic problem faced by existing IPv4 networks: the availability of only approximately 4.3 billion IPv4 addresses. Ceasing the use of IPv4 addresses and using only IPv6 addresses, however, is impractical, so maintaining an environment that enables mutual communication between IPv4 and IPv6 (a dual-stack environment) will be required for a certain period. Completing IPv6 support is therefore considered to require a large amount of both time and money.

IPv6 addresses the main problem of IPv4, that is, the exhaustion of addresses to connect computers or hosts in a packet-switched network. IPv6 has a very large address space: addresses consist of 128 bits, compared to 32 bits in IPv4. It is therefore possible to support 2^128 unique IP addresses, a substantial increase in the number of computers that can be addressed with the IPv6 addressing scheme. It is widely expected that the Internet will use IPv4 alongside IPv6 for the foreseeable future. IPv4-only and IPv6-only nodes cannot communicate directly; they need assistance from an intermediary gateway or must use other transition mechanisms.
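The address-space arithmetic above, plus one common transition mechanism (IPv4-mapped IPv6 addresses, which let a dual-stack host represent an IPv4 peer in IPv6 form), can be checked quickly:

```python
import ipaddress

# IPv4: 32-bit addresses, roughly 4.3 billion in total
print(2 ** 32)    # 4294967296

# IPv6: 128-bit addresses
print(2 ** 128)

# An IPv4-mapped IPv6 address embeds an IPv4 address in IPv6 notation
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)  # 192.0.2.1
```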

Use of Software Reverse Engineering

Finding malicious code: many virus and malware detection techniques use reverse engineering to understand how malicious code is structured and functions. Through reversing, recognizable patterns emerge that can be used as signatures to drive economical detectors and code scanners.

Discovering unexpected flaws and faults: Even the well-designed system can have holes that result from the nature of our “forward engineering” development techniques. Reverse engineering can help identify flaws and faults before they become mission-critical software failures.

Finding the use of others’ code:   In supporting the cognizant use of intellectual property, it is important to understand where protected code or techniques are used in applications. Reverse engineering techniques can be used to detect the presence or absence of software elements of concern.

Finding the use of shareware and open source code where it was not intended to be used: in the opposite of the infringing-code concern, if a product is intended for security or proprietary use, the presence of publicly available code can be of concern. Reverse engineering enables the detection of code-replication issues.

Learning from others’ products of a different domain or purpose: Reverse engineering techniques can enable the study of advanced software approaches and allow new students to explore the products of masters. This can be a very useful way to learn and to build on a growing body of code knowledge. Many Web sites have been built by seeing what other Web sites have done. Many Web developers learned HTML and Web programming techniques by viewing the source of other sites.

Discovering features or opportunities that the original developers did not realize: code complexity can foster new innovation, and existing techniques can be reused in new contexts. Reverse engineering can lead to new discoveries about software and new opportunities for innovation.

How Does a Web Browser Work?

      Nowadays, one of the most important pieces of software used for surfing the Web is the Web browser. Web browsers enable a user to navigate through Web pages by fetching those pages from servers and subsequently displaying them on the user's screen. A Web browser typically provides an interface in which hyperlinks are displayed in such a way that the user can easily select them with a single mouse click. In the old days a Web browser used to be a simple program, but things have changed: browsers are now considered to be among the most complex pieces of software. Logically, Web browsers consist of several components, shown in the Figure. An important aspect of Web browsers is that they should (ideally) be platform independent. This goal is often achieved by making use of standard graphical libraries, shown as the display back-end, along with standard networking libraries. The core of a browser is formed by the browser engine and the rendering engine. The latter contains all the code for properly displaying documents, as explained before. This rendering at the very least requires parsing HTML or XML, but may also require script interpretation. In most cases, only an interpreter for JavaScript is included, but in theory other interpreters may be included as well. The browser engine provides the mechanisms for an end user to go over a document, select parts of it, activate hyperlinks, etc.

One of the problems that Web browser designers have to face is that a browser should be easily extensible so that it, in principle, can support any type of document that is returned by a server. The approach followed in most cases is to offer facilities for what are known as plug-ins such as Adobe Flash Player.

What is a plugin ?
A plug-in is a small program that can be dynamically loaded into a browser to handle a specific document type or content, the latter generally given as a MIME type. A plug-in should be locally available, possibly after being specifically transferred by a user from a remote server. Plug-ins normally offer a standard interface to the browser and, likewise, expect a standard interface from the browser. Logically, they form an extension of the rendering engine shown in the Figure.

Another client-side process that is often used is a Web proxy. Originally, such a process was used to allow a browser to handle application-level protocols other than HTTP. For example, to transfer a file from an FTP server, the browser can issue an HTTP request to a local FTP proxy, which will then fetch the file and return it embedded in an HTTP response.

By now most Web browsers are capable of supporting a variety of protocols, or can otherwise be dynamically extended to do so and for that reason do not need proxies. However, proxies are still used for other reasons. For example, a proxy can be configured for filtering requests and responses (bringing it close to an application-level firewall), logging, compression, but most of all caching. We return to proxy caching below. A widely used Web proxy is Squid, which has been developed as an open-source project.