Sunday, January 17, 2010

Closeness Centrality and Epidemic Spreading in Networks

Abstract

This thesis is about the relation between the closeness centrality of the first infected node in a network and each of the following: the total infection time needed to infect all nodes in the network; the infection rate of the spreading epidemic, which measures the fraction of nodes infected per unit time; and the infection spreading power of that node, which measures its power to spread the epidemic to the other uninfected nodes in the network.

In this thesis, I deal with four types of networks: small and large unweighted networks, and small and large weighted networks, and I study this relation in each of the four types.

The importance of this work is that finding the closeness centrality and the infection spreading power of any node helps us understand the weaknesses or advantages that node has, so that maintenance or the blocking of dangers can be carried out at the right time.

In this work, I develop the SI model for epidemics on networks, in which most authors assume a constant infection rate. I found that this infection rate is not constant but depends on the closeness centrality of the first infected node in the network; hence, I suggest replacing the infection rate in the SI model with the closeness centrality of the first infected node.

The results obtained from this work show that the total infection time, the infection rate, and the infection spreading power, when any node is infected first in the network, all depend on the closeness centrality of that node.
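The two central quantities can be illustrated with a small sketch (not the thesis code; the example network and the deterministic SI sweep with infection probability 1 are illustrative assumptions). On a path graph, the central node has the highest closeness centrality and, when seeded first, the shortest total infection time:

```python
from collections import deque

def bfs_distances(adj, s):
    """Shortest-path distances from s via breadth-first search."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def closeness(adj, s):
    """Closeness centrality: (n - 1) / sum of distances from s."""
    dist = bfs_distances(adj, s)
    return (len(adj) - 1) / sum(dist.values())

def si_total_infection_time(adj, s):
    """Deterministic SI sweep: every neighbour of an infected node is
    infected in the next step, so the total infection time equals the
    eccentricity of the seed node s."""
    return max(bfs_distances(adj, s).values())

# A small path network 0-1-2-3-4: node 2 has the highest closeness.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
for s in adj:
    print(s, round(closeness(adj, s), 3), si_total_infection_time(adj, s))
```

The printout shows the inverse trend the thesis investigates: the higher the seed's closeness, the fewer steps until the whole network is infected.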

Full Article

A Novel Visual Basic Software for Projection Operator Calculations in Chemistry

Abstract In this work, we constructed a new Visual Basic 2005 program to solve mathematical equations (wave functions). These equations can be found by using the so-called "Projection Operator Method". This method is used to construct σ- and π-SALCs for molecules belonging to well-known point groups. For each point group a manual solution is discussed and shown. We then developed software that gives a computerized solution for each point group in far less time than the manual solution takes. The outputs of the software completely matched the manual solutions, which demonstrates the credibility of our software. The methodology followed in constructing the software is also shown here.
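As an illustration of the projection operator the software automates, here is a minimal sketch for the σ-SALCs built from the two H 1s orbitals of a C2v molecule such as H2O. The axis convention (molecule in the xz plane, so σv(xz) keeps both H atoms fixed while C2 and σv'(yz) swap them) and the restriction to the two relevant irreps are assumptions for the example, not taken from the thesis:

```python
# Each C2v operation acts on the basis (H1, H2) as a permutation of indices.
ops = {"E": (0, 1), "C2": (1, 0), "sv_xz": (0, 1), "sv_yz": (1, 0)}
# Characters of the two irreps that actually appear in this basis.
chars = {"A1": {"E": 1, "C2": 1, "sv_xz": 1, "sv_yz": 1},
         "B1": {"E": 1, "C2": -1, "sv_xz": 1, "sv_yz": -1}}

def project(irrep, basis_fn=0):
    """Apply P = (1/h) * sum_R chi(R) * R to orbital `basis_fn`;
    returns the coefficients of the resulting SALC over (H1, H2)."""
    h = len(ops)
    coeff = [0.0, 0.0]
    for R, perm in ops.items():
        coeff[perm[basis_fn]] += chars[irrep][R] / h
    return coeff

print("A1 SALC:", project("A1"))   # symmetric combination H1 + H2
print("B1 SALC:", project("B1"))   # antisymmetric combination H1 - H2
```

Normalizing the two coefficient vectors gives the familiar (H1 + H2)/√2 and (H1 - H2)/√2 SALCs.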

Full Article

A New Visual Basic Software for Solving the Reduction Formula in Chemical Applications of Group Theory

Abstract A need for computer software to solve the "Reduction Formula" for different point groups is beyond doubt. It would save time and effort for the many chemists involved in different aspects of chemical applications of group theory, and may give a good approach for researchers dealing with molecular chemistry. This thesis presents computer software developed using Visual Basic 6.0 as the programming language. The input and output data are handled through software forms under the Windows Vista environment. The software is able to perform the following functions: 1. Reducing reducible representations for 47 point groups. 2. Finding the reducible representations Γσ and Γπ for the infinite point groups C∞v and D∞h and reducing them by the S-L method. 3. Finding the reducible representation and reducing it for six chosen point groups. Solutions derived from the constructed software were tested by comparison with manual standard methods, and showed complete consistency.
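The reduction formula the software solves, n_i = (1/h) Σ_c g(c) χ_i(c) χ(c), can be sketched for the C2v point group. The reducible representation below (the two H 1s orbitals of H2O) is an illustrative example, not data from the thesis:

```python
# C2v character table; the four classes (E, C2, sigma_v, sigma_v') each
# contain a single operation, so every class size g(c) is 1.
char_table = {"A1": [1, 1, 1, 1], "A2": [1, 1, -1, -1],
              "B1": [1, -1, 1, -1], "B2": [1, -1, -1, 1]}
class_sizes = [1, 1, 1, 1]
h = sum(class_sizes)  # group order

def reduce_rep(gamma):
    """n_i = (1/h) * sum over classes of g(c) * chi_i(c) * chi(c)."""
    return {irrep: sum(g * ci * c for g, ci, c in zip(class_sizes, chi, gamma)) // h
            for irrep, chi in char_table.items()}

# Reducible representation of the two H 1s orbitals in H2O.
print(reduce_rep([2, 0, 2, 0]))
```

The output gives one copy each of A1 and B1, matching the standard manual reduction for this basis.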

Full Article

Cost Value Function of Water Distribution Networks: A Reliability-Based Approach Using MATLAB

Abstract

Every community on Earth ought to find appropriate means to distribute water from different sources to consumption centers. In general, water distribution networks (WDN) attain this. Each network is composed of arcs (pipes that deliver water) and nodes (points where the delivered water is consumed). The ability of any WDN to satisfy the requirements at each node under normal and abnormal conditions is one of the dimensions of its reliability. A method is introduced in this work to evaluate the reliability of a WDN for several combinations of pipe diameters.

A network solver was used to solve the network for each diameter combination in succession. The results obtained from the network solver were saved in a text file, which was then read by MATLAB in order to carry out the necessary calculations. The system reliability for each diameter combination was computed, and these values were recorded as a vector in the MATLAB environment. The main objective, finding the maximum system reliability over all these combinations, was achieved using MATLAB.

For this maximum reliability value, we determined the corresponding cost values, where each one represents a diameter combination. The minimum of these values was then determined using a computer code developed within the method.

MATLAB was used to develop the computer program that converts the information into matrices, which makes the required outcome easy to obtain and process.

A hypothetical case study was developed to demonstrate the implementation of the methodology. The results consist of two important parts: the computed reliability of any WDN, and a way to find the least-cost design whose reliability exceeds a minimum boundary value. Another important outcome is the set of all reliability values that can be achieved with a given budget.
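The search described above, solving each diameter combination, recording its reliability, then picking the maximum reliability and the least-cost feasible design, can be sketched as follows. This is written in Python rather than MATLAB, and the diameter catalogue, pipe lengths, unit costs, and the reliability function are all hypothetical stand-ins for the hydraulic solver's output:

```python
from itertools import product

# Hypothetical catalogue: pipe diameter (mm) -> unit cost per metre.
diameters = [100, 150, 200]
unit_cost = {100: 30.0, 150: 45.0, 200: 70.0}
pipe_lengths = [500.0, 300.0]   # a toy two-pipe network

def reliability(combo):
    """Stand-in for the network-solver result: a toy score that grows
    with total capacity. In the thesis this value is read from the
    solver's text file into MATLAB."""
    return min(1.0, sum(combo) / 400.0)

results = []
for combo in product(diameters, repeat=len(pipe_lengths)):
    cost = sum(unit_cost[d] * length for d, length in zip(combo, pipe_lengths))
    results.append((combo, reliability(combo), cost))

r_max = max(r for _, r, _ in results)

# Least-cost design among combinations above a reliability floor.
floor = 0.75
feasible = [(c, r, cost) for c, r, cost in results if r >= floor]
best = min(feasible, key=lambda t: t[2])
print("max reliability:", r_max)
print("least-cost feasible design (combo, reliability, cost):", best)
```

The same enumerate-then-filter pattern also yields the third outcome mentioned above: sorting `results` by cost lists every reliability value attainable within a given budget.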

Full Article

Data Compression with Wavelets

Abstract There are two types of data compression: lossless (exact) and lossy (approximate). In lossless compression, all details are preserved, but high compression ratios cannot be achieved; this type is not considered in this thesis. The other type is lossy compression, where some details are lost in the process of compression. The amount of lost detail is proportional to the desired compression ratio, which is controlled by the user. Using this type, high compression ratios can be achieved with acceptable resolution in the reconstructed data. In this thesis, a computational study of the classical Fourier transform and the relatively new wavelet transform is carried out. In addition, a computational comparison between the two transforms shows that the wavelet transform is more efficient than the classical Fourier transform. The high compression ratios achievable with the wavelet transform have led to several wavelet-based lossy compression programs, such as the image compressor JPEG2000 and the document compressor DjVu.
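A minimal illustration of wavelet-based lossy compression, using the Haar wavelet and hard thresholding of the detail coefficients (the signal and threshold are arbitrary examples, not the thesis code). Small detail coefficients are zeroed, which is what makes the representation compressible, and the signal is then reconstructed with only a small error:

```python
import numpy as np

def haar_fwd(x):
    """Multilevel Haar wavelet transform of a length-2^k signal."""
    x = np.asarray(x, dtype=float).copy()
    details, n = [], len(x)
    while n > 1:
        even, odd = x[0:n:2].copy(), x[1:n:2].copy()
        x[:n // 2] = (even + odd) / np.sqrt(2)          # coarse approximation
        details.insert(0, (even - odd) / np.sqrt(2))    # detail coefficients
        n //= 2
    return x[:1].copy(), details

def haar_inv(approx, details):
    """Inverse of haar_fwd (coarsest level first)."""
    x = approx.copy()
    for d in details:
        y = np.empty(2 * len(x))
        y[0::2] = (x + d) / np.sqrt(2)
        y[1::2] = (x - d) / np.sqrt(2)
        x = y
    return x

# Lossy step: zero out small detail coefficients, then reconstruct.
sig = np.array([4.0, 4.1, 4.0, 3.9, 8.0, 8.1, 8.0, 7.9])
approx, details = haar_fwd(sig)
thresh = 0.2
kept = [np.where(np.abs(d) > thresh, d, 0.0) for d in details]
recon = haar_inv(approx, kept)
print("max reconstruction error:", np.max(np.abs(recon - sig)))
```

For this signal, most detail coefficients fall below the threshold and are discarded, yet the reconstruction stays within the threshold of the original, which is the lossy-compression trade-off the thesis studies.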

Full Article

Simulation in Queuing Models: Using Simulation at Beit-eba crossing check-point

Abstract

This thesis considers some queuing models in order to determine the measures of performance of a model. The most important measures are the waiting time in the queue and the size of the queue. The queue we study is the Beit-eba crossing check-point, for both people arriving at and people departing from Nablus City.

A comparison is made in order to determine the best-fit model among two assumed models and the real one under study, using a suitable simulation tool, "SimQuick", which performs process simulation within the Excel spreadsheet environment.

To validate our results, we started with an assumed queue model and solved it analytically using the known formulas of queuing theory.

Next, we used simulation in "SimQuick" to compare results, which showed good agreement between the analytical solution and the simulation.

The study showed that the single-channel queue is more efficient than the multiple-channel queues.
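The single-channel versus multiple-channel comparison can be illustrated with the standard queuing formulas: one fast M/M/1 server is compared with an M/M/s system of s slower servers of the same total capacity, using the mean time in system W (the arrival and service rates below are hypothetical, not the Beit-eba data):

```python
from math import factorial

def mm1_W(lam, mu):
    """Mean time in system for an M/M/1 queue (requires lam < mu)."""
    return 1.0 / (mu - lam)

def mms_W(lam, mu, s):
    """Mean time in system for an M/M/s queue via the Erlang-C formula."""
    a, rho = lam / mu, lam / (s * mu)        # offered load and utilization
    p0 = 1.0 / (sum(a**n / factorial(n) for n in range(s))
                + a**s / (factorial(s) * (1 - rho)))
    Lq = p0 * a**s * rho / (factorial(s) * (1 - rho) ** 2)
    return Lq / lam + 1.0 / mu               # Wq + mean service time

lam = 3.0   # hypothetical arrival rate (people per minute)
mu = 2.0    # hypothetical service rate of each of two channels
print("one channel at rate 2*mu:", mm1_W(lam, 2 * mu))
print("two channels at rate mu :", mms_W(lam, mu, 2))
```

The single fast channel yields the smaller total time in system, the sense in which a single-channel queue can outperform multiple channels of the same combined capacity.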

Full Article

Analysis of the Logistic Distribution Use in the Suppression Technique for Scalability in Multicast Routing

Abstract

The immense growth of computer-supported communication systems, especially the Internet, has made it imperative to design efficient and scalable protocols to support the work of the networks' infrastructure. By scalable is meant the ability of a protocol to cope with the requirements of groups of communicating processes as they grow very large in size.

The ever-increasing demand for communication and the high capability of modern networks continuously call for efficient solutions to communication problems. Among these solutions were the introduction of multicast routing and the use of periodic unacknowledged messaging.

In connection with these two solutions, certain techniques, including the suppression technique, were used to overcome the scalability problem.

This study deals with utilizing probability distribution functions (pdfs) in the suppression technique, with the aim of improving the scalability of multicast routing in communication networks.

The two distributions most commonly employed in suppression techniques are the uniform and the exponential distributions. The uniform outperforms the exponential in the time performance metric, while the exponential excels in the extra-messages metric.

This study introduces a modified form of the logistic distribution as a candidate for use in the suppression technique and compares it with the two other above-mentioned distributions. MATLAB was used to calculate the values of the performance metrics and to draw the corresponding figures for comparing the results.

The logistic distribution proved to excel over, or compete with, the other two pdfs in the time performance metric, and to have comparable performance in the overhead metric.
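A toy simulation can sketch the trade-off behind these metrics: each node draws a suppression timer in [0, 1], the earliest node answers, and every node whose timer falls within a small window of it sends a duplicate before the answer suppresses it. All parameters, the exponential with density increasing toward the deadline, and the truncated logistic used as a stand-in for the thesis's modified form are assumptions for illustration, not the thesis's model:

```python
import math
import random

def suppression_metrics(draw, n=50, delta=0.05, trials=2000, seed=1):
    """Mean delay (earliest timer) and mean duplicate count over many
    independent suppression rounds."""
    rng = random.Random(seed)
    delay = dups = 0.0
    for _ in range(trials):
        timers = sorted(draw(rng) for _ in range(n))
        delay += timers[0]
        dups += sum(1 for t in timers[1:] if t - timers[0] < delta)
    return delay / trials, dups / trials

def uniform(rng):
    return rng.random()

def exponential(rng, lam=10.0):
    # Density increasing as e^(lam*t): most timers near 1, few fire early.
    return math.log(1.0 + rng.random() * (math.exp(lam) - 1.0)) / lam

def logistic(rng, mu=0.5, s=0.1):
    # Truncated logistic timer, a stand-in for the modified distribution.
    u = rng.random()
    return min(max(mu + s * math.log(u / (1.0 - u)), 0.0), 1.0)

for name, d in [("uniform", uniform), ("exponential", exponential),
                ("logistic", logistic)]:
    print(name, suppression_metrics(d))  # (mean delay, mean duplicates)
```

In this simplified model the uniform timers give the lower delay while the exponential timers produce fewer duplicate messages, the same qualitative trade-off described above for the two classical distributions.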

Full Article