RSS

Defense mechanism against black-hole and gray-hole attacks in mobile ad hoc networks

Black-hole and gray-hole attacks are two common attacks in mobile ad hoc networks. In a black-hole attack, the adversary advertises false route information to attract data traffic towards itself and then drops all the packets. The gray-hole attack is a refined version of the black-hole attack in which the adversary forwards some packets and drops the rest. Its behavior cannot be predicted, as it behaves normally for a certain time and later behaves maliciously. Both black-hole and gray-hole attacks disrupt the route discovery process and degrade the system's performance.

The most popular technique to detect black-hole and gray-hole attacks in a mobile ad hoc network is for a genuine node to monitor the behavior of its neighbors by overhearing their communication. This technique is referred to as local monitoring. A watchdog node overhears the transmissions of both the sender and the intermediate router, and detects a malicious node by comparing the number of packets the router receives with the number it actually forwards.
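The watchdog comparison above can be sketched as a small counter-based check. This is an illustrative Python sketch, not NS2 code; the node name, threshold, and packet counts are assumptions chosen for the example:

```python
# Sketch of watchdog-style local monitoring: the watchdog counts packets
# overheard arriving at a router and packets the router actually forwards,
# then flags the router if its forwarding ratio falls below a threshold.
class Watchdog:
    def __init__(self, threshold=0.8):
        self.threshold = threshold   # minimum acceptable forwarding ratio
        self.received = {}           # packets overheard arriving at a node
        self.forwarded = {}          # packets overheard leaving that node

    def overhear_receive(self, node):
        self.received[node] = self.received.get(node, 0) + 1

    def overhear_forward(self, node):
        self.forwarded[node] = self.forwarded.get(node, 0) + 1

    def is_malicious(self, node):
        rx = self.received.get(node, 0)
        if rx == 0:
            return False
        return self.forwarded.get(node, 0) / rx < self.threshold

wd = Watchdog()
for _ in range(10):
    wd.overhear_receive("n3")   # node n3 was handed 10 packets...
for _ in range(4):
    wd.overhear_forward("n3")   # ...but forwarded only 4 (gray-hole-like)
print(wd.is_malicious("n3"))    # True
```

A full black-hole drops everything (ratio 0), so the same check catches both attack variants.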

Solution in NS2

i) A network can be created in which some nodes are configured as attackers and others as watchdog nodes running the detection mechanism.
ii) The attack report from the watchdog can be used in future network operations, for example by excluding the reported nodes from data forwarding or from leader election.
iii) Packet loss is the major performance issue caused by the attacker's activity; it can be traced using the trace file before and after applying the defense mechanism. The attack also affects metrics such as packet delivery ratio and throughput, and delay increases due to the retransmission of lost packets.
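The trace-file analysis in step iii) can be sketched as follows. This Python sketch assumes a simplified two-field record (event flag and packet size) rather than the full ns2 trace format; the packet sizes and duration are illustrative:

```python
# Sketch of trace post-processing: count sent ("s") and received ("r")
# packets to get the packet delivery ratio, and sum received bytes over
# the simulation duration to get throughput in bits per second.
def analyse(trace_lines, duration):
    sent = received = rx_bytes = 0
    for line in trace_lines:
        event, size = line.split()
        if event == "s":
            sent += 1
        elif event == "r":
            received += 1
            rx_bytes += int(size)
    pdr = received / sent if sent else 0.0
    throughput = rx_bytes * 8 / duration   # bits per second
    return pdr, throughput

trace = ["s 512", "s 512", "s 512", "s 512", "r 512", "r 512", "r 512"]
pdr, tput = analyse(trace, duration=10.0)
print(pdr)    # 0.75
print(tput)   # 1228.8
```

Running the same analysis on traces captured with and without the defense mechanism shows the attack impact directly.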

For more details visit: http://slogix.in/projects-in-mobile-ad-hoc-networks/index.html

 

Self-organizing Network Architectures and Protocols

The capacity of mobile ad hoc networks is constrained by the mutual interference of concurrent transmissions between nodes. The mobility of nodes adds another dimension of complexity to the communication process. Several works on ad hoc networks have studied the impact of mobility on network capacity and suggested virtual backbone networks to solve mobility-management issues. A mobility model represents the movement pattern of nodes. Mobility models for mobile ad hoc networks include:

» Random walk
» Random waypoint
» Random direction mobility
» Reference point group mobility model
» Gauss-Markov
» Manhattan grid model
» Disaster area model
» Random street model

Solution in NS2
i) In NS2, mobility models such as random waypoint, random direction, random walk and group mobility can be modeled.
ii) The performance of different routing protocols such as AODV, DSR and DSDV under different mobility models can be evaluated using NS2.
iii) A highly dynamic mobile ad hoc network can be created using the above mobility models in order to evaluate the performance of the routing protocols under a dynamic topology.
iv) Random waypoint scenarios can be generated using the setdest tool (indep-utils/cmu-scen-gen/setdest.cc) available in ns2, given input parameters such as total nodes, simulation time, pause time, network area and speed.
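The random waypoint model that setdest generates scenarios for can be sketched as follows. This is an illustrative Python sketch, not the setdest tool itself; the area, speed range, and pause values are assumptions:

```python
# Sketch of the random waypoint mobility model: a node picks a random
# destination in the area, moves toward it at a random speed, pauses,
# then repeats. Returns a schedule of (time, x, y) waypoints.
import math
import random

def random_waypoint(area, max_speed, pause, steps, seed=1):
    rng = random.Random(seed)
    x, y, t = rng.uniform(0, area[0]), rng.uniform(0, area[1]), 0.0
    schedule = [(t, x, y)]
    for _ in range(steps):
        nx, ny = rng.uniform(0, area[0]), rng.uniform(0, area[1])
        speed = rng.uniform(0.1, max_speed)        # avoid zero speed
        t += math.hypot(nx - x, ny - y) / speed + pause  # travel, then pause
        x, y = nx, ny
        schedule.append((t, x, y))
    return schedule

schedule = random_waypoint(area=(500, 500), max_speed=10.0, pause=2.0, steps=5)
print(len(schedule))   # 6 entries: the start position plus 5 waypoints
```

setdest emits the same kind of schedule as OTcl commands that the simulator replays at the scheduled times.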

For further details visit: http://slogix.in/projects-in-mobile-ad-hoc-networks/index.html

 

Posted by on January 29, 2016 in Uncategorized

 

MAC issues in mobile ad hoc network

Media access control (MAC) is a data communication protocol and a sub-layer of the data link layer. It allows several nodes in the network to share the medium using channel access control mechanisms. Collision is the major MAC-layer issue in wireless transmissions. Generally, two-way and four-way handshaking mechanisms reduce the collision rate. In the two-way handshaking strategy, a node transmits an acknowledgement to the sender on receiving a data packet. In the four-way handshaking strategy, the MAC protocol uses the Ready to Send/Clear to Send (RTS/CTS) technique to reduce packet collisions in wireless transmissions. Back-off algorithms also play a vital role in reducing collisions between nodes, especially when more than one node attempts to send data on the channel simultaneously. Improving back-off algorithms to estimate the optimal back-off waiting period is still a major issue. The MAC layer offers two classes of service, namely the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF).
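The back-off behavior described above can be sketched with the binary exponential back-off used by 802.11-style DCF. This is an illustrative Python sketch; the CWmin/CWmax values follow common 802.11b conventions, and the rest is assumption:

```python
# Sketch of binary exponential back-off: after each collision the
# contention window doubles (capped at CW_MAX), and the node waits a
# random number of slots drawn uniformly from [0, CW].
import random

CW_MIN, CW_MAX = 31, 1023   # 802.11b contention window bounds, in slots

def backoff_slots(collisions, seed=None):
    """Slots to wait after `collisions` consecutive collisions."""
    rng = random.Random(seed)
    cw = min(CW_MAX, (CW_MIN + 1) * 2 ** collisions - 1)  # 31, 63, 127, ...
    return rng.randint(0, cw)

cw_after_3 = min(CW_MAX, (CW_MIN + 1) * 2 ** 3 - 1)
print(cw_after_3)                               # 255
print(0 <= backoff_slots(3, seed=42) <= 255)    # True
```

Tuning how fast the window grows and resets is exactly the open problem mentioned above: a larger window means fewer collisions but longer channel access delay.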

Solution in NS2

i) In NS2, the IEEE 802.11 MAC standard is applied to the network and the performance is evaluated.
ii) Four-way handshaking is the default mechanism available in ns2.
iii) Performance under the two-way handshaking mechanism can be evaluated by disabling the RTS/CTS settings.
iv) The back-off algorithm can be tested by varying the inbuilt back-off variable and the contention window size.
v) Channel access delay reduction and throughput improvement can be illustrated using xgraph.
vi) Performance metrics such as frame overhead, contention overhead, delay, packet delivery ratio, packets dropped due to collision, throughput and energy consumption can be analyzed by processing the trace file with an awk script.
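The delay metric in step vi) can be sketched as follows. This Python sketch mirrors what an awk script does over the trace, assuming simplified (event, time, packet-id) records rather than the full ns2 trace format:

```python
# Sketch of average end-to-end delay: pair each packet's send timestamp
# with its receive timestamp by packet id, then average the differences.
def average_delay(records):
    sent, delays = {}, []
    for event, t, pid in records:
        if event == "s":
            sent[pid] = t
        elif event == "r" and pid in sent:
            delays.append(t - sent[pid])
    return sum(delays) / len(delays) if delays else 0.0

records = [("s", 0.10, 1), ("s", 0.20, 2), ("r", 0.35, 1), ("r", 0.60, 2)]
print(round(average_delay(records), 3))   # 0.325
```

Packets that appear in `sent` but never produce an "r" record are the collision or attack drops counted by the other metrics.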

For further details visit: http://slogix.in/

 

How to Compile Hadoop Applications using IntelliJ IDEA

IntelliJ IDEA (an intelligent Java IDE) can be used to build Hadoop applications and packages.

Steps to compile a sample program in IntelliJ IDEA

1) Download the IntelliJ IDEA tool from the following link:

https://www.jetbrains.com/idea/download/

2) Create a Hadoop application with IntelliJ IDEA

(i) Start IntelliJ IDEA.

(ii) Click Create new project

(iii) The project type is set to Java. Browse and select the Java SE Development Kit 7 (JDK) installation folder as the project SDK. Click Next.

(iv) Set the project name and project location and then click Finish.

(v) In the Project Explorer, right-click the src folder. Select New –> Java Class.

3) Configuring Module Dependencies and Libraries

(i) Select File->Project Structure.

(ii) Click on Modules under “Project Settings.”

(iii) Select the Dependencies tab, then click the + at the right of the screen. Select JARs or directories.

4) Run the project

(i) Select Run->Edit Configurations

(ii) Click on Application under “Run/Debug Configurations”

(iii) Select the Configuration tab, then give the class name in "Main class" and set the input and output directories in "Program arguments".

5) Compile the Hadoop Application

(i) Select Run-> Run

(ii) The outputs are generated in the output directory as a _SUCCESS marker file and a part-r-00000 result file.

 

For Further Details:

S-Logix

No.5, First Floor, 4th Street,

Dr. Subbarayan Nagar,

Kodambakkam,

Chennai-600 024.

Landmark : (Samiyar Madam)

Mobile: +91-81240 01111, 97109 99001

E-Mail: slogix.india@gmail.com / yahoo.in

 

 

 

 

 

 

How to modify Hadoop Source Code using IntelliJ IDEA

IntelliJ IDEA (an intelligent Java IDE) can be used to build the Hadoop packages from source.

Steps to modify and build the Hadoop source code in IntelliJ IDEA

1) Download the IntelliJ IDEA tool from the following link:

https://www.jetbrains.com/idea/download/

2) Download Hadoop along with its source code

3) Import Hadoop project

(i) Start IntelliJ IDEA.

(ii) Click Import Project

(iii) Select the hadoop version folder and then click Next

(iv) Set the project name and project location in the "Import Project" wizard and then click Next.

(v) Select the Java SE Development Kit 7 (JDK) installation folder as the project SDK. Click Finish.

(vi) Hadoop is now successfully imported into IntelliJ IDEA

4) Configuring Module Dependencies and Libraries

(i) Select File->Project Structure.

(ii) Click on Modules under “Project Settings.”

(iii) Select the Dependencies tab, then click the + at the right of the screen. Select JARs or directories.

5) Modify the existing module according to the requirement and rebuild it

6) Integrate the modified module into the existing Hadoop build.

7) Run the Hadoop application with modified hadoop source code

(i) Start the Hadoop daemons

(ii) Run the sample program

 

For further details visit: http://slogix.in/

 

 

 

 


Big Data

Big data comprises large volumes of datasets that are very difficult to manage with traditional computing systems. Big data is characterized by huge volume, high velocity and a wide variety of data.

Hadoop is an open-source framework written in Java that manages large volumes of datasets across clusters of computers using the MapReduce concept. Hadoop MapReduce is a software framework in which the map phase takes the large input data and converts it into intermediate sets of key-value data, and the reduce phase aggregates those sets after the map process completes.
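The map/reduce flow described above can be simulated locally with a word-count sketch. This illustrative Python version stands in for the Java MapReduce framework; the input lines are made up for the example:

```python
# Sketch of MapReduce word count: map emits (word, 1) pairs, the shuffle
# step groups pairs by key (here: sort + groupby), and reduce sums the
# counts for each key.
from itertools import groupby

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    result = {}
    for key, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        result[key] = sum(v for _, v in group)
    return result

counts = reduce_phase(map_phase(["big data", "big hadoop data", "data"]))
print(counts)   # {'big': 2, 'data': 3, 'hadoop': 1}
```

In real Hadoop the same two functions run distributed across the cluster, with the framework handling the shuffle and the splitting of input across mappers.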

HDFS

The most common file system used in Hadoop is the Hadoop Distributed File System (HDFS), which follows the master/slave architecture. HDFS is fault-tolerant, supports parallel processing, and is designed to run on low-cost hardware. It stores metadata and application data separately: metadata is stored on a dedicated server called the NameNode, which maintains the file system namespace, while application data is stored on other servers called DataNodes, which hold the actual data blocks. All these servers communicate with each other using TCP-based protocols.
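The metadata/data separation can be sketched as follows. This is an illustrative Python sketch, not the real HDFS placement policy; the round-robin placement, tiny block size, and datanode names are assumptions (real HDFS uses much larger blocks, e.g. 64/128 MB, and rack-aware placement):

```python
# Sketch of HDFS-style layout: a "namenode" map records only which
# datanodes hold each block of a file; the datanodes hold the contents.
def place_blocks(file_size, block_size, datanodes, replication=3):
    n_blocks = -(-file_size // block_size)   # ceiling division
    namenode_map = {}
    for b in range(n_blocks):
        # round-robin replica placement across datanodes, for the sketch
        replicas = [datanodes[(b + r) % len(datanodes)]
                    for r in range(replication)]
        namenode_map[b] = replicas
    return namenode_map

layout = place_blocks(file_size=250, block_size=100,
                      datanodes=["dn1", "dn2", "dn3", "dn4"])
print(len(layout))   # 3 blocks for a 250-unit file with 100-unit blocks
print(layout[0])     # ['dn1', 'dn2', 'dn3']
```

Because each block is replicated on several DataNodes, the loss of one server does not lose data, which is the fault-tolerance property noted above.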

Hbase

HBase is a column-oriented distributed database management system in which data is stored as columns in tables, whereas a traditional RDBMS stores data as rows. Compared with HDFS, it provides quick random access to huge volumes of data, with real-time read/write access to big data. It stores its results in the form of hash tables.
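The row-store versus column-store distinction can be sketched as follows. This illustrative Python sketch uses plain dictionaries as the hash tables; the table contents are made up:

```python
# Sketch of row-oriented vs column-oriented storage of the same table.
rows = [
    {"id": 1, "name": "a", "city": "x"},
    {"id": 2, "name": "b", "city": "y"},
]

# Row store (RDBMS-style): one record per row, keyed by row id.
row_store = {r["id"]: r for r in rows}

# Column store (HBase-style): one hash table per column, keyed by row id,
# so scanning a single column touches only that column's data.
col_store = {}
for r in rows:
    for col, val in r.items():
        col_store.setdefault(col, {})[r["id"]] = val

print(col_store["city"])   # {1: 'x', 2: 'y'}
```

Reading one column for all rows in the column store does not pull in the other columns, which is why column-oriented layouts suit analytical scans over big data.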
  
Tools and Technologies

JDK 1.8.0
Netbeans IDE 8.0.1
Hadoop Distributed File System
Hbase-0.94.16
Mahout
Map reduce
Hadoop-1.2.1

 

For further details: http://slogix.in/projects-in-big-data/index.html

 

Best ns2 Projects – SLOGIX

The network simulator (ns-2) is a software package that assists research and development in the networking domain, where large-scale experiments on new protocols are tedious to verify in real time.
The ns-2 package is built with both C++ and OTcl: the core functionalities are written in C++, while configurations are given in OTcl. The rebuild facility of the existing ns2 package accommodates new protocols along with enhanced validation of the behavior of existing protocols.

The enhanced infrastructure of the simulator facilitates the development of new protocols and the performance verification in TCP/IP model of different Wired, Wireless networks such as IP network, Mobile Ad Hoc network (MANET), Wireless Sensor Network (WSN), and Vehicular Ad Hoc Network (VANET).

An appropriate startup support for installation is made available under various OS platforms to initiate the simulation in ns-2.

Experimenting a new protocol in ns-2 involves the following steps

  1. Modification of .cc and .h files according to protocol specification and setting default values in files located in lib folder
  2. Rebuilding ns-2 with the modified protocol
  3. Creating the scenario for the evaluation of the protocol in terms of node configurations, network topology, communication events, and mobility models in a .tcl file
  4. Invocation and simulating new protocol in created scenario
  5. Observing the performance from the resultant files and animation using awk script and nam respectively
  6. Visualizing the plotted graph results through Xgraph

It is sufficient to follow steps 3 to 6 above when experimenting with existing protocols in ns-2.

Tools and Technologies used in ns-2

  1. C++
  2. AWK
  3. OTCL
  4. NAM
  5. XGRAPH

 

Email – slogix.india@gmail.com

Visit – http://www.slogix.in

 
