
About S-Logix

S-Logix, a leading research and development company, offers key technology solutions in information technology, computer science, and wireless networks. The company maintains state-of-the-art research and development facilities to support the advancement of next-generation technology, and specializes in R&D solutions, research process outsourcing, and research project consulting.

Self-organizing Network Architectures and Protocols

The capacity of mobile ad hoc networks is constrained by the mutual interference of concurrent transmissions between nodes. The mobility of nodes adds another dimension of complexity to the communication process. Several works on ad hoc networks have studied the impact of mobility on network capacity and suggested virtual backbone networks to solve the issues in mobility management. A mobility model represents the movement pattern of the nodes. The mobility models for mobile ad hoc networks include:

» Random walk
» Random waypoint
» Random direction mobility
» Reference point group mobility model
» Gauss-Markov
» Manhattan grid model
» Disaster area model
» Random street model

Solution in NS2
i) In NS2, mobility models such as Random Waypoint, Random Direction, Random Walk, and group mobility can be modeled.
ii) The performance of different routing protocols such as AODV, DSR, and DSDV under different mobility models can be evaluated using NS2.
iii) A highly dynamic mobile ad hoc network can be created using the above-mentioned mobility models in order to evaluate the performance of routing protocols under a dynamically changing topology.
iv) A Random Waypoint mobility scenario can be generated using the setdest tool (indep-utils/cmu-scen-gen/setdest.cc in the ns2 distribution), with input parameters for the total number of nodes, simulation time, pause time, network area, and speed, as shown below.
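
As a rough sketch (the node count, pause time, speed, duration, and area values are illustrative), a Random Waypoint scenario file can be generated from the shell and then sourced into the simulation script:

  # from the ns-2 directory (version-1 setdest syntax)
  cd indep-utils/cmu-scen-gen/setdest
  ./setdest -v 1 -n 50 -p 2.0 -M 10.0 -t 200 -x 500 -y 500 > scen-50-rwp

  # inside the simulation .tcl script
  source scen-50-rwp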

For Further Details Visit: http://slogix.in/projects-in-mobile-ad-hoc-networks/index.html


MAC issues in mobile ad hoc networks

Media access control (MAC) is a data communication protocol that forms a sub-layer of the data link layer. It allows several nodes in the network to share the medium using channel access control mechanisms. Collision at the MAC layer is the major issue in wireless transmissions. Generally, two-way and four-way handshaking mechanisms reduce the collision rate. In the two-way handshaking strategy, a node transmits an acknowledgement to the sender node on receiving the data packet. In the four-way handshaking strategy, the MAC protocol uses the Ready to Send/Clear to Send (RTS/CTS) technique to reduce packet collisions in wireless transmissions. Back-off algorithms also play a vital role in reducing collisions between nodes, especially when more than one node attempts to send data on the channel simultaneously. Improving the back-off algorithms to estimate the optimal back-off waiting period is still a major issue. The IEEE 802.11 MAC layer offers two classes of service, namely the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF).

Solution in NS2

i) In NS2, the IEEE 802.11 MAC standard is applied to the network and the performance is evaluated.
ii) Four-way handshaking is the default mechanism available in ns2.
iii) Performance under the two-way handshaking mechanism can be evaluated by disabling the RTS/CTS settings.
iv) The back-off algorithm can be tested by varying the built-in back-off variables and the contention window size.
v) Channel access delay minimization and throughput improvement can be illustrated using xgraph.
vi) Performance metrics such as frame overhead, contention overhead, delay, packet delivery ratio, packets dropped due to collision, throughput, and energy consumption can be analyzed by processing the trace file with an awk script. A configuration sketch for items (iii) and (iv) follows this list.
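
As a minimal OTcl sketch (the numeric values are illustrative; the parameter names are the ns2 defaults from ns-default.tcl):

  # Two-way handshake: raise the RTS threshold above the largest packet size
  Mac/802_11 set RTSThreshold_ 3000
  # Back-off experiments: vary the contention window bounds
  Mac/802_11 set CWMin_ 31
  Mac/802_11 set CWMax_ 1023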

For Further Details Visit: http://slogix.in/

 

How to Compile Hadoop Applications using IntelliJ IDEA

IntelliJ IDEA, a Java IDE, can be used for building and compiling Hadoop applications.

Steps to compile a sample program in IntelliJ IDEA

1) Download the IntelliJ IDEA tool from the following link:

https://www.jetbrains.com/idea/download/

2) Create a Hadoop application with IntelliJ IDEA

(i) Start IntelliJ IDEA.

(ii) Click Create New Project.

(iii) Set the project type to Java. Browse and select the Java SE Development Kit 7 (JDK) installation folder as the project SDK. Click Next.

(iv) Set the project name and project location, and then click Finish.

(v) In the Project Explorer, right-click the src folder. Select New -> Java Class.

3) Configuring Module Dependencies and Libraries

(i) Select File -> Project Structure.

(ii) Click on Modules under "Project Settings."

(iii) Select the Dependencies tab, then click the + at the right of the screen. Select JARs or Directories and add the Hadoop library jars so that the MapReduce classes can be resolved.

4) Run the project

(i) Select Run -> Edit Configurations.

(ii) Click on Application under "Run/Debug Configurations."

(iii) Select the Configuration tab, enter the class name in "Main class", and set the input and output directories in "Program arguments".

5) Compile the Hadoop Application

(i) Select Run -> Run.

(ii) The output is generated in the output directory as the files _SUCCESS and part-r-00000.
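
For instance, the canonical WordCount sample (the standard Hadoop example program, shown here as a sketch) can serve as the Java class created in step 2, with the input and output directories passed as the program arguments set in step 4:

  import java.io.IOException;
  import java.util.StringTokenizer;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.Reducer;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class WordCount {

      // Map phase: emit (word, 1) for every token in the input split
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
          private static final IntWritable ONE = new IntWritable(1);
          private final Text word = new Text();

          @Override
          public void map(Object key, Text value, Context context)
                  throws IOException, InterruptedException {
              StringTokenizer itr = new StringTokenizer(value.toString());
              while (itr.hasMoreTokens()) {
                  word.set(itr.nextToken());
                  context.write(word, ONE);
              }
          }
      }

      // Reduce phase: sum the counts collected for each word
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
          private final IntWritable result = new IntWritable();

          @Override
          public void reduce(Text key, Iterable<IntWritable> values, Context context)
                  throws IOException, InterruptedException {
              int sum = 0;
              for (IntWritable val : values) {
                  sum += val.get();
              }
              result.set(sum);
              context.write(key, result);
          }
      }

      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          Job job = new Job(conf, "word count");   // new Job(conf, name) is the Hadoop 1.x idiom
          job.setJarByClass(WordCount.class);
          job.setMapperClass(TokenizerMapper.class);
          job.setCombinerClass(IntSumReducer.class);
          job.setReducerClass(IntSumReducer.class);
          job.setOutputKeyClass(Text.class);
          job.setOutputValueClass(IntWritable.class);
          FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
          FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not already exist)
          System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
  }

With the input directory containing text files and the output directory left non-existent, Run produces the _SUCCESS and part-r-00000 files described above.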

For Further Details:

S-Logix (OPC) Private Limited

Registered Office:

#5, First Floor, 4th Street

Dr. Subbarayan Nagar, Kodambakkam

Chennai-600 024, India

Landmark: Samiyar Madam

Research Projects:

Email: pro@slogix.in, Mobile: +91-8124001111

Ph.D Guidance & Consulting:

Email: phd@slogix.in, Mobile: +91-9710999001

 

 

How to modify Hadoop Source Code using IntelliJ IDEA

IntelliJ IDEA can also be used to modify and rebuild the Hadoop source packages.

Steps to modify and build Hadoop source code in IntelliJ IDEA

1) Download the IntelliJ IDEA tool from the following link:

https://www.jetbrains.com/idea/download/

2) Download Hadoop along with its source code

3) Import the Hadoop project

(i) Start IntelliJ IDEA.

(ii) Click Import Project.

(iii) Select the Hadoop version folder and then click Next.

(iv) Set the project name and project location in the "Import Project" wizard and then click Next.

(v) Select the Java SE Development Kit 7 (JDK) installation folder as the project SDK. Click Finish.

(vi) Hadoop is now successfully imported into IntelliJ IDEA.

4) Configuring Module Dependencies and Libraries

(i) Select File -> Project Structure.

(ii) Click on Modules under "Project Settings."

(iii) Select the Dependencies tab, then click the + at the right of the screen. Select JARs or Directories.

5) Modify the existing module according to the requirement and rebuild it

6) Integrate the modified module into the existing Hadoop installation

7) Run the Hadoop application with the modified Hadoop source code, as sketched below

(i) Start the Hadoop daemons

(ii) Run the sample program
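
As a rough sketch for step 7 (assuming the Hadoop 1.2.1 layout listed elsewhere on this page; the example jar name and HDFS paths are illustrative):

  # start the HDFS and MapReduce daemons
  bin/start-all.sh
  # run a bundled sample against the rebuilt code
  bin/hadoop jar hadoop-examples-1.2.1.jar wordcount /input /output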

 

For Further Details Visit: http://slogix.in/

 

 

 

 


Big Data

Big data comprises large volumes of datasets that are very difficult to manage with traditional computers. Big data is characterized by huge volume, high velocity, and an extended variety of data.

Hadoop is an open-source framework written in Java that is used to manage large volumes of datasets across clusters of computers using the MapReduce concept. Hadoop MapReduce is a software framework in which the map phase takes the large input and converts it into intermediate sets of data, whereas the reduction of these datasets is performed after the map process.

HDFS

The most common file system used in Hadoop is the Hadoop Distributed File System (HDFS), which follows the master/slave technique. HDFS is fault-tolerant, performs parallel processing, and is designed to run on low-cost hardware. It stores metadata and application data separately. The metadata is stored on a dedicated server called the NameNode, which maintains the file system namespace. The application data is stored on other servers called DataNodes, which contain the actual data blocks. All these servers communicate with each other using TCP-based protocols. A simple client write is sketched below.
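
As a minimal Java client sketch (the path and content are illustrative), an application writes a file through the FileSystem API; the NameNode resolves the path while the DataNodes store the blocks:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class HdfsWriteDemo {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();       // reads core-site.xml for the NameNode address
          FileSystem fs = FileSystem.get(conf);           // client handle backed by the NameNode
          Path file = new Path("/user/demo/sample.txt");  // illustrative HDFS path
          FSDataOutputStream out = fs.create(file);       // blocks are replicated across DataNodes
          out.writeUTF("hello hdfs");
          out.close();
          fs.close();
      }
  }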

HBase

HBase is a column-oriented distributed database management system in which data is stored in the form of columns in tables, whereas a traditional RDBMS stores data in the form of rows. Compared to plain HDFS, it provides quick random access to huge volumes of data and real-time read/write access to big data. Internally it stores data in the form of sorted key-value maps, similar to hash tables.
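
As a minimal Java client sketch against the HBase 0.94 API (the table name, column family, and values are illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.util.Bytes;

  public class HBaseDemo {
      public static void main(String[] args) throws Exception {
          Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml
          HTable table = new HTable(conf, "users");          // illustrative table name

          Put put = new Put(Bytes.toBytes("row1"));          // random write by row key
          put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("alice"));
          table.put(put);

          Get get = new Get(Bytes.toBytes("row1"));          // random read by row key
          Result result = table.get(get);
          System.out.println(Bytes.toString(
                  result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
          table.close();
      }
  }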
  
Tools and Technologies

JDK 1.8.0
NetBeans IDE 8.0.1
Hadoop Distributed File System
HBase 0.94.16
Mahout
MapReduce
Hadoop 1.2.1

 

For Further Details: http://slogix.in/projects-in-big-data/index.html

 

Best ns2 Projects – SLOGIX

The network simulator (ns-2) is a software package that assists research and development in the networking domain, where large-scale experiments on new protocols are tedious to validate in real time.
The ns-2 package is built with both C++ and OTcl: the core functionalities are written in C++, and the configurations are given in OTcl. The rebuild feature of the existing ns2 package accommodates new protocols along with enhanced validation of the behavior of existing protocols.

The simulator's infrastructure facilitates the development of new protocols and performance verification in the TCP/IP model for different wired and wireless networks, such as IP networks, Mobile Ad Hoc Networks (MANET), Wireless Sensor Networks (WSN), and Vehicular Ad Hoc Networks (VANET).

Installation support is provided under various OS platforms to initiate simulation in ns-2.

Experimenting with a new protocol in ns-2 involves the following steps

  1. Modifying the .cc and .h files according to the protocol specification and setting default values in the files located in the lib folder
  2. Rebuilding ns-2 with the modified protocol
  3. Creating the scenario for evaluating the protocol in terms of node configuration, network topology, communication events, and mobility models in a .tcl file
  4. Invoking and simulating the new protocol in the created scenario
  5. Observing the performance from the resultant files and the animation, using an awk script and nam respectively
  6. Visualizing the plotted graph results through Xgraph

It is sufficient to follow steps 3 to 6 above for experimenting with existing protocols in ns-2; a minimal scenario script is sketched below.
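
As a minimal OTcl scenario sketch covering steps 3 and 4 for an existing protocol (the node positions, timings, and the choice of AODV are illustrative):

  set ns [new Simulator]
  set tracefd [open out.tr w]
  $ns trace-all $tracefd
  set namfd [open out.nam w]
  $ns namtrace-all-wireless $namfd 500 500

  # topology and the General Operations Director (required for wireless)
  set topo [new Topography]
  $topo load_flatgrid 500 500
  create-god 2

  # node configuration: ad hoc routing, MAC, and physical layer
  $ns node-config -adhocRouting AODV \
      -llType LL -macType Mac/802_11 \
      -ifqType Queue/DropTail/PriQueue -ifqLen 50 \
      -antType Antenna/OmniAntenna \
      -propType Propagation/TwoRayGround \
      -phyType Phy/WirelessPhy \
      -channelType Channel/WirelessChannel \
      -topoInstance $topo \
      -agentTrace ON -routerTrace ON -macTrace ON

  set node0 [$ns node]
  set node1 [$ns node]
  $node0 set X_ 100.0; $node0 set Y_ 100.0; $node0 set Z_ 0.0
  $node1 set X_ 300.0; $node1 set Y_ 300.0; $node1 set Z_ 0.0

  # communication events: CBR traffic over UDP from node0 to node1
  set udp [new Agent/UDP]
  $ns attach-agent $node0 $udp
  set sink [new Agent/Null]
  $ns attach-agent $node1 $sink
  $ns connect $udp $sink
  set cbr [new Application/Traffic/CBR]
  $cbr attach-agent $udp
  $ns at 1.0 "$cbr start"
  $ns at 9.0 "$cbr stop"

  proc stop {} {
      global ns tracefd namfd
      $ns flush-trace
      close $tracefd
      close $namfd
      exit 0
  }
  $ns at 10.0 "stop"
  $ns run

The resulting out.tr trace can then be processed with an awk script (step 5) and out.nam replayed in nam.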

Tools and Technologies used in ns-2

  1. C++
  2. AWK
  3. OTcl
  4. NAM
  5. Xgraph

 

For Further Details Visit: http://www.slogix.in

 


What is CloudSim?

CloudSim is a simulation tool or framework for implementing a Cloud computing environment. The CloudSim toolkit enables the simulation of, and experimentation with, Cloud computing systems. The CloudSim library, written in Java, contains classes for creating components such as datacenters, hosts, virtual machines, applications, and users.

These components are used to simulate new strategies in the Cloud computing domain, and can be used to implement various scheduling algorithms, allocation policies, and load-balancing techniques. With the simulation results, one can evaluate the efficiency of newly implemented policies or strategies in a Cloud environment. The basic CloudSim classes can be extended to add new scenarios. CloudSim requires one to write a Java program using its components to compose the desired scenario.

The basic components in CloudSim that create the Cloud computing environment are:

1. Datacenter: The Datacenter is the first component to be created, together with a VM allocation policy. Hosts and VMs are created inside the Datacenter, and resource provisioning is performed based on the allocation policies.

2. DatacenterBroker: A broker mediates between the user and the datacenter. The VM and Cloudlet requests given by the user are submitted to the broker, which forwards them to the datacenter, collects the results, and returns them to the user.

3. Host: The Host class simulates a physical machine and manages the VMs allocated to it.

4. Vm: The Vm class simulates a virtual machine, which runs inside a Host and executes applications or tasks.

5. Cloudlet: Applications or tasks to be executed in a Vm are simulated using the Cloudlet class, which captures the basic application characteristics and runs inside the Vm.

6. VmAllocationPolicySimple: The policy that allocates a Host to each Vm in the datacenter.

7. VmScheduler and CloudletScheduler: The scheduling policies that define the scheduling order of Vms and Cloudlets, respectively. A minimal wiring of these components is sketched below.
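
A minimal Java sketch wiring these components together with the CloudSim 3.0.3 API (all capacities, costs, and lengths are illustrative values):

  import java.util.ArrayList;
  import java.util.Calendar;
  import java.util.LinkedList;
  import java.util.List;

  import org.cloudbus.cloudsim.Cloudlet;
  import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
  import org.cloudbus.cloudsim.Datacenter;
  import org.cloudbus.cloudsim.DatacenterBroker;
  import org.cloudbus.cloudsim.DatacenterCharacteristics;
  import org.cloudbus.cloudsim.Host;
  import org.cloudbus.cloudsim.Pe;
  import org.cloudbus.cloudsim.Storage;
  import org.cloudbus.cloudsim.UtilizationModelFull;
  import org.cloudbus.cloudsim.Vm;
  import org.cloudbus.cloudsim.VmAllocationPolicySimple;
  import org.cloudbus.cloudsim.VmSchedulerTimeShared;
  import org.cloudbus.cloudsim.core.CloudSim;
  import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
  import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
  import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

  public class MinimalCloudSim {
      public static void main(String[] args) throws Exception {
          CloudSim.init(1, Calendar.getInstance(), false);   // 1 user, no event tracing

          // One Host with a single 1000-MIPS processing element (component 3)
          List<Pe> peList = new ArrayList<Pe>();
          peList.add(new Pe(0, new PeProvisionerSimple(1000)));
          List<Host> hostList = new ArrayList<Host>();
          hostList.add(new Host(0, new RamProvisionerSimple(2048),
                  new BwProvisionerSimple(10000), 1000000, peList,
                  new VmSchedulerTimeShared(peList)));

          // Datacenter with a simple VM allocation policy (components 1 and 6)
          DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                  "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
          new Datacenter("Datacenter_0", characteristics,
                  new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

          // Broker mediates between the user and the datacenter (component 2)
          DatacenterBroker broker = new DatacenterBroker("Broker_0");

          // One Vm and one Cloudlet owned by the broker's user id (components 4, 5, 7)
          Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10000,
                  "Xen", new CloudletSchedulerTimeShared());
          Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300,
                  new UtilizationModelFull(), new UtilizationModelFull(),
                  new UtilizationModelFull());
          cloudlet.setUserId(broker.getId());

          List<Vm> vmList = new ArrayList<Vm>();
          vmList.add(vm);
          List<Cloudlet> cloudletList = new ArrayList<Cloudlet>();
          cloudletList.add(cloudlet);
          broker.submitVmList(vmList);
          broker.submitCloudletList(cloudletList);

          CloudSim.startSimulation();
          CloudSim.stopSimulation();
          System.out.println("Finished cloudlets: "
                  + broker.getCloudletReceivedList().size());
      }
  }

New allocation or scheduling strategies are plugged in by substituting VmAllocationPolicySimple or the time-shared schedulers with custom subclasses.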

Tools and Technology

  • CloudSim 3.0.3
  • Java
  • NetBeans or Eclipse

For Further Details: http://slogix.in/cloud-computing-source-code/index.html

For details contact

S-Logix (OPC) Private Limited. Visit: http://www.slogix.in