
Self-organizing Network Architectures and Protocols

The capacity of mobile ad hoc networks becomes a constraint due to the mutual interference of concurrent transmissions between nodes. The mobility of nodes adds another dimension of complexity to the communication process. Several works on ad hoc networks have studied the impact of mobility on network capacity and suggested a virtual backbone network to solve the issues in mobility management. The commonly used mobility models for mobile ad hoc networks include:

» Random walk
» Random waypoint
» Random direction mobility
» Reference point group mobility model
» Gauss-Markov
» Manhattan grid model
» Disaster area model
» Random street model
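As an illustration of how such models work, the random waypoint model can be sketched in a few lines of Python (an illustrative sketch only, independent of NS2; the area, speed range and pause time are arbitrary example values): each node picks a random destination and speed, moves toward it, pauses on arrival, and repeats.

```python
import random

def random_waypoint(steps, area=(500.0, 500.0), speed_range=(1.0, 10.0),
                    pause_time=2.0, dt=1.0):
    """Illustrative random waypoint trace: yields one (x, y) position per time step."""
    x = random.uniform(0, area[0])
    y = random.uniform(0, area[1])
    dest, speed, pause = None, 0.0, 0.0
    for _ in range(steps):
        if pause > 0:
            pause -= dt                      # node pauses at its waypoint
        else:
            if dest is None:                 # choose a new destination and speed
                dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
                speed = random.uniform(*speed_range)
            dx, dy = dest[0] - x, dest[1] - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist <= speed * dt:           # waypoint reached: start pausing
                x, y = dest
                dest, pause = None, pause_time
            else:                            # move toward the destination
                x += speed * dt * dx / dist
                y += speed * dt * dy / dist
        yield (x, y)

trace = list(random_waypoint(steps=50))
print(len(trace), trace[-1])
```

The other models in the list differ mainly in how the next destination, speed and direction are drawn; NS2's setdest tool generates equivalent movement files for simulation scripts.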

Solution in NS2
i) In NS2, mobility models such as random waypoint, random direction, random walk and the group mobility model can be modeled.
ii) The performance of different routing protocols such as AODV, DSR and DSDV under different mobility models can be evaluated using NS2.
iii) A highly dynamic mobile ad hoc network can be created using the above-mentioned mobility models in order to evaluate the performance of the routing protocols under a dynamically changing topology.
iv) Random waypoint mobility scenarios can be generated using the setdest tool (indep-utils/cmu-scen-gen/setdest.cc) available in NS2, with input parameters for the total nodes, simulation time, pause time, network area and speed.

For Further Details Visit :  http://slogix.in/projects-in-mobile-ad-hoc-networks/index.html

 

Posted on January 29, 2016 in Uncategorized

 

MAC issues in mobile ad hoc network

The media access control (MAC) protocol is a data communication protocol and a sub-layer of the data link layer. It allows several nodes in the network to share the medium using channel access control mechanisms. Collision at the MAC layer is the major issue in wireless transmissions. Generally, two-way and four-way handshaking mechanisms reduce the collision rate. In the two-way handshaking strategy, a node transmits an acknowledgement to the sender node on receiving the data packet. In the four-way handshaking strategy, the MAC protocol uses the Ready to Send/Clear to Send (RTS/CTS) technique to reduce packet collisions in wireless transmissions. Back-off algorithms also play a vital role in reducing collisions between nodes, especially when more than one node attempts to send data on the channel simultaneously. Improving the back-off algorithms to estimate the optimal back-off waiting period is still a major open issue. The MAC layer offers two classes of services, namely the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF).
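The back-off behaviour described above can be sketched as a binary exponential back-off in Python (a generic sketch in the spirit of 802.11 DCF; the window bounds are illustrative values, not taken from this text): after each collision the contention window roughly doubles, up to a maximum, and the node waits a random number of slots drawn from it.

```python
import random

CW_MIN, CW_MAX = 15, 1023  # illustrative 802.11-style contention window bounds

def backoff_slots(collisions):
    """Random back-off (in slots) after a given number of failed attempts."""
    # Double the contention window per collision, capped at CW_MAX
    cw = min(CW_MAX, (CW_MIN + 1) * (2 ** collisions) - 1)
    return random.randint(0, cw)

for c in range(5):
    cw = min(CW_MAX, (CW_MIN + 1) * (2 ** c) - 1)
    print(f"after {c} collision(s): CW = {cw}, drew {backoff_slots(c)} slots")
```

The open research issue mentioned above amounts to replacing the fixed doubling rule with a smarter estimate of the optimal waiting period.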

Solution in NS2

i) In NS2, the IEEE 802.11 MAC standard is applied to the network and the performance is evaluated.
ii) Four-way handshaking is the default mechanism available in NS2.
iii) Performance under the two-way handshaking mechanism can be evaluated by disabling the RTS/CTS settings.
iv) The back-off algorithm can be tested by varying the built-in back-off variable and the contention window size.
v) Channel access delay minimization and throughput improvement can be illustrated using Xgraph.
vi) Performance metrics such as frame overhead, contention overhead, delay, packet delivery ratio, packets dropped due to collision, throughput and energy consumption can be analyzed by processing the trace file using an awk script.
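The awk-style trace processing mentioned in step vi can equally be sketched in Python. This sketch assumes the old NS2 wireless trace format, where the first field is the event type (s = send, r = receive, D = drop) and the fourth field is the trace level (AGT for the agent layer); the sample lines are invented for illustration.

```python
def packet_delivery_ratio(trace_lines):
    """Compute PDR from old-format NS2 wireless trace lines.

    Assumes field 0 is the event (s = send, r = receive, D = drop)
    and field 3 is the trace level (AGT = agent/application layer).
    """
    sent = received = 0
    for line in trace_lines:
        fields = line.split()
        if len(fields) < 4 or fields[3] != "AGT":
            continue  # only count application-layer packets
        if fields[0] == "s":
            sent += 1
        elif fields[0] == "r":
            received += 1
    return received / sent if sent else 0.0

sample = [
    "s 10.0 _0_ AGT --- 0 cbr 512 ...",
    "r 10.1 _1_ AGT --- 0 cbr 512 ...",
    "s 10.2 _0_ AGT --- 1 cbr 512 ...",
    "D 10.3 _2_ RTR --- 1 cbr 512 ...",
]
print(packet_delivery_ratio(sample))  # 0.5
```

Delay, throughput and drop counts can be extracted the same way by reading the timestamp and packet-size fields of the matching send/receive pairs.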

For Further Details Visit : http://slogix.in/

 

How to Compile Hadoop Applications using IntelliJ IDEA

Hadoop applications and packages can be built using the IntelliJ IDEA (Intelligent Java IDE) tool.

Steps to compile a sample program in IntelliJ IDEA

1) Download the IntelliJ IDEA tool from the following link

https://www.jetbrains.com/idea/download/

2) Create a Hadoop application with IntelliJ IDEA

(i) Start IntelliJ IDEA.

(ii) Click Create new project

(iii) Set the project type to Java. Browse and select the Java SE Development Kit 7 (JDK) installation folder as the project SDK. Click Next.

(iv) Set the project name and project location and then click Finish.

(v) In the Project Explorer, right-click the src folder. Select New –> Java Class.

3) Configuring Module Dependencies and Libraries

(i) Select File->Project Structure.

(ii) Click on Modules under “Project Settings.”

(iii) Select the Dependencies tab, then click on the + at the right of the screen. Select JARs or directories.

4) Run the project

(i) Select Run->Edit Configurations

(ii) Click on Application under “Run/Debug Configurations”

(iii) Select the Configuration tab, then give the class name in “Main class” and set the input and output directories in the “Program arguments” field.

5) Compile the Hadoop Application

(i) Select Run-> Run

(ii) The output is generated in the output directory as the files _SUCCESS and part-r-00000.

For Further Details:

S-Logix (OPC) Private Limited

Registered Office:

#5, First Floor, 4th Street

Dr. Subbarayan Nagar, Kodambakkam

Chennai-600 024, India

Landmark : ( Samiyar Madam)

Research Projects :

Email – pro@slogix.in ,  Mobile : +91- 8124001111.

Ph.D Guidance & Consulting :

Email – phd@slogix.in , Mobile : +91- 9710999001.

 

 

How to modify Hadoop Source Code using IntelliJ IDEA

The Hadoop source code can be modified and rebuilt using the IntelliJ IDEA (Intelligent Java IDE) tool.

Steps to modify and build the Hadoop source code in IntelliJ IDEA

1) Download the IntelliJ IDEA tool from the following link

https://www.jetbrains.com/idea/download/

2) Download Hadoop along with its source code

3) Import Hadoop project

(i) Start IntelliJ IDEA.

(ii) Click Import Project

(iii) Select the Hadoop version folder and then click Next

(iv) Set the project name and project location on “Import Project”  wizard and then click Next.

(v) Select the Java SE Development Kit 7 (JDK) installation folder as the project SDK. Click Finish.

(vi) Hadoop is now successfully imported into IntelliJ IDEA

4) Configuring Module Dependencies and Libraries

(i) Select File->Project Structure.

(ii) Click on Modules under “Project Settings.”

(iii) Select the Dependencies tab, then click on the + at the right of the screen. Select JARs or directories.

5) Modify the existing module according to the requirement and rebuild it

6) Integrate the modified module into the existing Hadoop.

7) Run the Hadoop application with modified hadoop source code

(i) Start the Hadoop daemons

(ii) Run the sample program

 

For Further Details Visit : http://slogix.in/

 

 

 

 


Big Data

Big data comprises large volumes of datasets that are very difficult to manage on a traditional computer. Big data is characterized by huge volume, high velocity and an extended variety of data.

Hadoop is an open-source framework written in Java that is used to manage large volumes of datasets across clusters of computers using the MapReduce concept. Hadoop MapReduce is a software framework in which the map phase takes the large input data and converts it into intermediate sets of key/value pairs, and the reduce phase aggregates these sets after the map process completes.
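The map/reduce flow described above can be illustrated with a minimal pure-Python sketch (a conceptual illustration of the idea only, not the Hadoop MapReduce API):

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in the input
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/sort: group values by key; reduce: sum each group
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big volume", "big velocity"]
counts = reduce_phase(map_phase(lines))
print(counts)  # {'big': 3, 'data': 1, 'volume': 1, 'velocity': 1}
```

In Hadoop itself, the map and reduce functions run distributed across the cluster and the grouping step is performed by the framework between the two phases.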

HDFS

The most common file system used in Hadoop is the Hadoop Distributed File System (HDFS), which follows the master/slave architecture. HDFS is fault-tolerant and performs parallel processing. It is designed to run on low-cost hardware. It stores the metadata and the application data separately. The metadata is stored on a dedicated server called the NameNode, which holds the file system namespace. The application data is stored on other servers called DataNodes, which contain the actual data. All these servers communicate with each other using TCP-based protocols.

Hbase

HBase is a column-oriented distributed database management system in which the data is stored in the form of columns in tables, whereas a traditional RDBMS stores the data in the form of rows. Compared to HDFS, it provides quick random access to huge volumes of data, as well as real-time read/write access to big data. Internally, it stores data in a form similar to hash tables.
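The column-oriented, hash-table-like storage idea can be illustrated with a small Python sketch (a conceptual model only, not the HBase API; the class and column names are invented for illustration): each cell is addressed by a (row key, column) pair, and a column can be read across all rows.

```python
class ColumnStore:
    """Conceptual sketch of column-oriented storage (not the HBase API)."""

    def __init__(self):
        self.cells = {}  # (row_key, column) -> value, like a hash table

    def put(self, row_key, column, value):
        self.cells[(row_key, column)] = value

    def get(self, row_key, column):
        return self.cells.get((row_key, column))

    def get_column(self, column):
        # Column-oriented read: scan one column across all rows
        return {row: v for (row, col), v in self.cells.items() if col == column}

store = ColumnStore()
store.put("user1", "info:name", "Alice")
store.put("user2", "info:name", "Bob")
store.put("user1", "info:age", "30")
print(store.get_column("info:name"))  # {'user1': 'Alice', 'user2': 'Bob'}
```

A row-oriented RDBMS would instead store each row's values together, which makes whole-row reads cheap but column scans expensive.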
  
Tools and Technologies

JDK 1.8.0
Netbeans IDE 8.0.1
Hadoop Distributed File System
Hbase-0.94.16
Mahout
MapReduce
Hadoop-1.2.1

 

For Further Details : http://slogix.in/projects-in-big-data/index.html

 

Best ns2 Projects – SLOGIX

The network simulator (ns-2) is a software package that assists the research and development process in the networking domain, where new protocols require large-scale experiments that are tedious to validate in real time.
The ns-2 package is built with both C++ and OTcl: the core functionalities are written in C++ and the configurations are given in OTcl. The rebuild feature of the existing ns-2 package accommodates new protocols along with enhanced validation of the behavior of existing protocols.

The infrastructure of the simulator facilitates the development of new protocols and the verification of their performance in the TCP/IP model for different wired and wireless networks such as IP networks, Mobile Ad Hoc Networks (MANET), Wireless Sensor Networks (WSN), and Vehicular Ad Hoc Networks (VANET).

Startup support for installation is available under various OS platforms to initiate simulation in ns-2.

Experimenting a new protocol in ns-2 involves the following steps

  1. Modifying the .cc and .h files according to the protocol specification and setting default values in the files located in the lib folder
  2. Rebuilding ns-2 with the modified protocol
  3. Creating the scenario for the evaluation of the protocol in terms of node configurations, network topology, communication events, and mobility models in a .tcl file
  4. Invoking and simulating the new protocol in the created scenario
  5. Observing the performance from the resulting files and the animation using an awk script and nam respectively
  6. Visualizing the plotted graph results through Xgraph

It is sufficient to follow steps 3 to 6 above when experimenting with existing protocols in ns-2.

Tools and Technologies used in ns-2

  1. C++
  2. AWK
  3. OTCL
  4. NAM
  5. XGRAPH

 


What is Cloud Sim?

     CloudSim is a simulation toolkit and framework for implementing the cloud computing environment. The CloudSim toolkit enables the simulation of and experimentation with cloud computing systems. The CloudSim library, written in Java, contains classes for creating components such as datacenters, hosts, virtual machines, applications, users, etc.

     These components are used to simulate new strategies in the cloud computing domain. They can be used to implement various scheduling algorithms, allocation policies and load balancing techniques. With the simulation results, we can evaluate the efficiency of newly implemented policies or strategies in a cloud environment. The CloudSim base classes can be extended to add new scenarios. CloudSim requires one to write a Java program that composes the desired scenario from its components.

The basic components in CloudSim that create the cloud computing environment are:

1. Datacenter :    The Datacenter is the first component to be created, together with a VM allocation policy. The Hosts and VMs are created inside the Datacenter. Resource provisioning is performed based on the allocation policies.

2. DatacenterBroker :    A broker communicates between the user and the datacenter. The VM and Cloudlet requests given by the user are submitted to the broker. The broker sends the requests to the datacenter, collects the results from the datacenter and returns them to the user.

3. Host:    The Host class is used to simulate a physical machine. It manages the VMs allocated to it.

4. Vm:    The Vm class is used to simulate a virtual machine, which runs inside a Host and executes the applications or tasks.

5. Cloudlet:    The applications or tasks to be executed in a Vm are simulated using the Cloudlet class. The class contains the basic application characteristics, and a Cloudlet runs inside a Vm.

6. VmAllocationPolicySimple:    This is the policy defined for allocating a Host to each Vm in the datacenter.

7. VmScheduler and CloudletScheduler :   These are the scheduling policies that define the scheduling order of Vms and Cloudlets respectively.
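The interplay of these components can be sketched in Python (a conceptual model only, not the actual CloudSim Java API; host capacities and VM requirements are invented example values): in the spirit of VmAllocationPolicySimple, each VM is placed on the host with the most free capacity.

```python
def allocate(hosts, vms):
    """Place each VM on the host with the most free capacity.

    hosts: {host_id: free_mips}, vms: [(vm_id, required_mips), ...]
    Returns {vm_id: host_id}. Conceptual sketch only, not the CloudSim API.
    """
    placement = {}
    free = dict(hosts)  # remaining capacity per host
    for vm_id, required in vms:
        host_id = max(free, key=free.get)  # host with the largest free capacity
        if free[host_id] < required:
            raise RuntimeError(f"no host can fit VM {vm_id}")
        free[host_id] -= required
        placement[vm_id] = host_id
    return placement

hosts = {"host0": 1000, "host1": 800}
vms = [("vm0", 600), ("vm1", 500), ("vm2", 300)]
print(allocate(hosts, vms))  # {'vm0': 'host0', 'vm1': 'host1', 'vm2': 'host0'}
```

In a CloudSim program the same decision is made inside the Datacenter when the broker submits the VM list, with Cloudlets then scheduled onto the placed VMs.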

Tools and Technology

  • Cloudsim 3.0.3
  • Java
  • Netbeans or Eclipse

For Further Details: http://slogix.in/cloud-computing-source-code/index.html


 

Hbase

HBase is an open-source, column-oriented distributed database management system. It is fault-tolerant and provides quick recovery from individual server failures. It is built on top of Hadoop/HDFS, and the data stored in it is processed using MapReduce capabilities.

HBase consists of three components: HMaster, HRegionServer and HRegions. An HBase cluster consists of a master node, called the HMaster, and multiple region servers, called HRegionServers. Each region server serves multiple regions, referred to as HRegions.
  
HMaster

The HMaster acts as the master server. It is responsible for monitoring each region server across the cluster and acts as the interface for any changes to the metadata. The master runs on the NameNode in a distributed setup. The cluster may contain a number of masters, but only one master is active at a time. Once the active master loses its lease in ZooKeeper, another server in the cluster takes over as the master and takes care of the region servers.

HRegionserver

The HRegionServer is the region server implementation. Each region server is responsible for managing and serving a set of regions. The HRegionServer runs on a DataNode in a distributed cluster. A given region is served by only one region server at a time.

HRegions

Regions are subsets of a table’s data. The region is the basic unit of availability and distribution of the rows and columns in a table. Hence, the multiple regions in HBase are called HRegions.
  
Tools and Technologies

1. JDK 1.8.0
2. Netbeans IDE 8.0.1
3. Hadoop Distributed File System
4. Hbase-0.94.16
5. Mahout
6. MapReduce
7. Hadoop-1.2.1


 


Android Projects

Best Android Research Projects


Android is a mobile operating system based on the Linux kernel, developed by Google, that delivers a complete software package for mobile devices. It makes use of a custom virtual machine to optimize memory and hardware resources in a mobile environment. Its open nature has motivated a wide range of developers to use the open-source code as a base for innovative community-driven projects. Moreover, the open-source code can be generously extended to include new technologies as they emerge. Android does not create a gap between the mobile’s core applications and third-party applications, and it breaks the barrier to developing new and innovative applications. Android ensures fast and easy application development for touch-screen mobile devices such as smartphones and tablets. Though it is primarily developed for touch-screen mobile devices, it has also been used in games consoles, digital cameras, and other electronics. Several technology companies that demand a ready-made, low-cost and customizable operating system for high-tech devices have replaced their operating system with Android.

Android Platform

  • Android Software development Kit (SDK)
  • Android Development Tools (ADT) plugin
  • Android Debug Bridge

Android Tools

  • Android SDK Tools
  • Android Platform Tools
  • Eclipse
  • Android
  • Android Emulator

SLogix Research Projects Centre provides JAVA / J2EE / NS2 / PHP / Android / Hadoop / Cloud projects for M.E. / M.Tech. / MCA students. SLogix Project Centre provides complete project listings with form design, source code, project report and database structure for live projects and mini projects, along with project guidance, short-term courses and inplant training.

 

INPLANT TRAINING AND WORKSHOPS

We offer inplant training and workshops to students, conducted at S-Logix at timings convenient to you.

INPLANT TRAININGS WILL BE CONDUCTED IN THE FOLLOWING DOMAINS

  • Java/J2EE 
  • Mobile Apps 
  • Cloud Computing 
  • Big Data 
  • Web Mining 
  • Web Services 
  • Ns2 Simulator
  • Android
  • Hadoop

Training ends with a Participation Certificate and a soft copy of the training materials.

 


Visit : http://slogix.in/


 
