Search results for: Modified SPIHT Algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4225


445 A Comprehensive Review on Different Mixed Data Clustering Ensemble Methods

Authors: S. Sarumathi, N. Shanthi, S. Vidhya, M. Sharmila

Abstract:

An extensive amount of work has been done in data clustering research under the unsupervised learning technique in Data Mining during the past two decades. Moreover, several approaches and methods have emerged that focus on clustering diverse data types, features of cluster models, and similarity rates of clusters. However, no single clustering algorithm performs best across all of these settings. Consequently, to address this issue, a challenging new technique called the Cluster Ensemble method has emerged as an alternative approach to the cluster analysis problem. The main objective of a Cluster Ensemble is to aggregate diverse clustering solutions in such a way as to attain accuracy and to improve on the quality of the individual clustering algorithms. Given the massive and rapid development of new methods in the field of data mining, it is essential to carry out a critical analysis of existing techniques and future directions. This paper presents a comparative analysis of different cluster ensemble methods along with their methodologies and salient features. This analysis will be useful for the community of clustering experts and helps in deciding the most appropriate method for the problem at hand.

Keywords: Clustering, Cluster Ensemble Methods, Coassociation matrix, Consensus Function, Median Partition.
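
As a rough illustration of the evidence-accumulation idea behind co-association matrices and consensus functions, the sketch below runs several base k-means clusterings, averages their co-membership indicators, and extracts a consensus partition. The use of k-means, the random range of k, and the average-linkage extraction are assumptions made for the example, not a specific method from the surveyed papers.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def coassociation_consensus(X, n_runs=10, k_range=(2, 6), final_k=3, seed=0):
    """Evidence-accumulation consensus: average co-association over base clusterings."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for _ in range(n_runs):
        k = int(rng.integers(k_range[0], k_range[1] + 1))
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(0, 1_000_000))).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= n_runs
    # Consensus partition: average-linkage clustering on 1 - co-association.
    dist = 1.0 - coassoc
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=final_k, criterion="maxclust")

X = np.vstack([np.random.randn(30, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
print(coassociation_consensus(X))
```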

444 Soft-Sensor for Estimation of Gasoline Octane Number in Platforming Processes with Adaptive Neuro-Fuzzy Inference Systems (ANFIS)

Authors: Hamed Vezvaei, Sepideh Ordibeheshti, Mehdi Ardjmand

Abstract:

The Gasoline Octane Number is the standard measure of a fuel's anti-knock properties in platforming processes, one of the important unit operations in oil refineries; it can be determined by online measurement or by using CFR (Cooperative Fuel Research) engines. Online measurement of the octane number can be done using direct octane number analyzers, but these are very expensive, so a feasible alternative is needed, such as an ANFIS estimator. ANFIS is a system in which a neural network is incorporated into a fuzzy system, so that the model is tuned automatically from data by the learning algorithms of neural networks. ANFIS constructs an input-output mapping based both on human knowledge and on generated input-output data pairs. In this research, 31 industrial data sets are used (21 for training and the rest for generalization). The results show that, according to this simulation, the hybrid training algorithm in ANFIS gives good agreement between the industrial data and the simulated results.

Keywords: Adaptive Neuro-Fuzzy Inference Systems, Gasoline Octane Number, Soft-sensor, Catalytic Naphtha Reforming

443 Cyber Security Enhancement via Software-Defined Pseudo-Random Private IP Address Hopping

Authors: Andre Slonopas, Warren Thompson, Zona Kostic

Abstract:

Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size with a broad standard deviation to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition systems.

Keywords: Moving Target Defense, cybersecurity, network security, hopping randomization, software defined network, network security theory.
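
A minimal sketch of software-defined pseudo-random IP hopping, assuming both edge devices share a secret and synchronized clocks: each derives the current private address from a hash of the secret and the time slot, so the hop schedule is never transmitted. The subnet, epoch length, and hash choice are illustrative, not taken from the paper.

```python
import hashlib
import ipaddress
import time

def current_hop_ip(shared_secret: bytes, epoch_s: int = 5,
                   surface: str = "10.77.0.0/16") -> ipaddress.IPv4Address:
    """Both edge devices derive the same private IP from a shared secret
    and the current time slot, so no hop schedule is ever exchanged."""
    net = ipaddress.ip_network(surface)
    slot = int(time.time()) // epoch_s            # hop every epoch_s seconds
    digest = hashlib.sha256(shared_secret + slot.to_bytes(8, "big")).digest()
    offset = int.from_bytes(digest[:8], "big") % net.num_addresses
    return net[offset]

print(current_hop_ip(b"example-shared-secret"))
```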

442 Client Importance and Audit Quality under Civil Law versus Common Law Societies

Authors: Kelly Grani Yuen

Abstract:

Accounting scandals and auditing frauds are perceived to be driven by aggressive companies and the misrepresentation of audit reports. However, local legal systems and law enforcement may affect the services auditors provide to their ‘important’ clients. Under civil law and common law jurisdictions, the standard setters, the government, and the regulatory bodies treat cases differently. As such, whether different forms of legal systems and the extent of law enforcement play an important role in auditors' audit quality is the question this paper attempts to explore. The paper focuses on Asia, where Hong Kong represents a common-law jurisdiction, while Taiwan and China represent civil law jurisdictions. Only the ten reputable accounting firms are used in this study, due to the differences in rankings and establishments of some of the small local audit firms; this also shapes the data, which are collected for the years 2007-2013. By using multiple regression based on the dependent variable (Audit Quality) and independent variables (Client Importance, Law Enforcement, and Press Freedom), six different models are established. Results demonstrate that since different jurisdictions have different legal systems and market regulations, auditors' treatment of ‘important’ clients varies. However, with the moderators in place (law enforcement and press freedom), the relationship between client importance and audit quality may be smoothed out. With that in mind, this study contributes to local governments' and standard setters' consideration of legal reform and proper law enforcement in the market. Perhaps, with such modifications to the economic systems, collusion between companies and auditors can finally be brought to a halt.

Keywords: Audit quality, client importance, jurisdiction, modified audit opinions.
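
The sketch below shows one way the moderated regression described in the abstract could be set up, with interaction terms standing in for the moderating effects of law enforcement and press freedom. The variable names, coefficients, and synthetic data are hypothetical and only illustrate the model structure, not the paper's six estimated models.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: the column names only illustrate the model structure.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "client_importance": rng.uniform(0, 1, n),
    "law_enforcement": rng.uniform(0, 1, n),
    "press_freedom": rng.uniform(0, 1, n),
})
df["audit_quality"] = (0.5 - 0.8 * df.client_importance
                       + 0.6 * df.client_importance * df.law_enforcement
                       + 0.4 * df.client_importance * df.press_freedom
                       + rng.normal(0, 0.1, n))

# Interaction terms act as the moderators described in the abstract.
model = smf.ols("audit_quality ~ client_importance * law_enforcement"
                " + client_importance * press_freedom", data=df).fit()
print(model.summary().tables[1])
```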

441 Perceptual JPEG Compliant Coding by Using DCT-Based Visibility Thresholds of Color Images

Authors: Kuo-Cheng Liu

Abstract:

Effective estimation of just noticeable distortion (JND) for images is helpful to increase the efficiency of a compression algorithm in which both the statistical redundancy and the perceptual redundancy should be accurately removed. In this paper, we design a DCT-based model for estimating the JND profiles of color images. Based on a mathematical model for measuring the base detection threshold of each DCT coefficient in each color component, the luminance masking adjustment, the contrast masking adjustment, and the cross masking adjustment are utilized for the luminance component, and a variance-based masking adjustment based on the coefficient variation in the block is proposed for the chrominance components. In order to verify the proposed model, the JND estimator is incorporated into the conventional JPEG coder to improve the compression performance. A subjective and fair viewing test is designed to evaluate the visual quality of the coded image under the specified viewing condition. The simulation results show that the JPEG coder integrated with the proposed DCT-based JND model gives better coding bit rates at visually lossless quality for a variety of color images.

Keywords: Just-noticeable distortion (JND), discrete cosine transform (DCT), JPEG.
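
A minimal sketch of the coding idea, assuming a single flat JND threshold per 8x8 DCT block: coefficients whose magnitude falls below the threshold are treated as perceptually invisible and zeroed before the inverse transform. The paper's model instead derives per-coefficient, per-channel thresholds with luminance, contrast, cross and variance-based masking adjustments.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jnd_suppress(image: np.ndarray, jnd: np.ndarray, block: int = 8) -> np.ndarray:
    """Zero out 8x8 DCT coefficients whose magnitude falls below a
    per-frequency JND threshold (a stand-in for the paper's full model)."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            coef = dctn(image[y:y+block, x:x+block].astype(float), norm="ortho")
            coef[np.abs(coef) < jnd] = 0.0          # perceptually invisible detail
            out[y:y+block, x:x+block] = idctn(coef, norm="ortho")
    return out

# Hypothetical flat threshold; the paper derives jnd per coefficient and channel.
img = np.random.randint(0, 256, (64, 64))
print(np.mean(np.abs(jnd_suppress(img, jnd=np.full((8, 8), 12.0)) - img)))
```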

440 Detecting HCC Tumor in Three Phasic CT Liver Images with Optimization of Neural Network

Authors: Mahdieh Khalilinezhad, Silvana Dellepiane, Gianni Vernazza

Abstract:

The aim of this work is to build a model based on tissue characterization that is able to discriminate pathological from non-pathological regions in three-phasic CT images. Based on feature selection in the different phases, we design a neural network system with an optimal number of neurons in the hidden layer. Our approach consists of three steps: feature selection, feature reduction, and classification. For each region of interest (ROI), 6 distinct sets of texture features are extracted: first order histogram parameters, absolute gradient, run-length matrix, co-occurrence matrix, autoregressive model, and wavelet features, for a total of 270 texture features. When analyzing the successive phases, we show that the injection of liquid causes changes in the highly relevant features of each region. Our results demonstrate that, for detecting HCC tumors, phase 3 is the most informative one for most of the features fed to the classification algorithm. The detection accuracy between the pathology and healthy classes, according to our method, using first order histogram parameters is 85% in phase 1, 95% in phase 2, and 95% in phase 3.

Keywords: Feature selection, Multi-phasic liver images, Neural network, Texture analysis.
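
As a small illustration of one of the six texture-feature families, the sketch below computes first order histogram parameters for a single ROI; the bin count and the example ROI are arbitrary, and the remaining families (gradient, run-length, co-occurrence, autoregressive, wavelet) are not shown.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def first_order_features(roi: np.ndarray, bins: int = 32) -> np.ndarray:
    """First-order histogram features for one ROI (one of the six
    texture-feature families listed in the abstract)."""
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / (hist.sum() + 1e-12)
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return np.array([roi.mean(), roi.var(), skew(roi.ravel()),
                     kurtosis(roi.ravel()), np.sum(p ** 2), entropy])

roi = np.random.rand(16, 16)            # hypothetical ROI from one CT phase
print(first_order_features(roi))
```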

439 End-to-End Pyramid Based Method for MRI Reconstruction

Authors: Omer Cahana, Maya Herman, Ofer Levi

Abstract:

Magnetic Resonance Imaging (MRI) is a lengthy medical scan whose duration stems from a long acquisition time. Its length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have led to tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is the reconstruction network, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction.

Keywords: Accelerate MRI scans, image reconstruction, pyramid network, deep learning.
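
The two CS conditions in the abstract can be illustrated with a toy experiment: incoherent (random) undersampling of k-space followed by a naive zero-filled inverse FFT, which is the kind of baseline a learned reconstruction improves on. The phantom, sampling fraction, and single-coil setting are assumptions of the sketch, not the paper's setup.

```python
import numpy as np

def undersample_and_zero_fill(image: np.ndarray, keep_frac: float = 0.3, seed: int = 0):
    """Incoherent (random) k-space sampling followed by a naive zero-filled
    reconstruction; the paper replaces the naive step with the pyramid network."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)
    mask = rng.random(image.shape) < keep_frac       # incoherent sampling pattern
    recon = np.abs(np.fft.ifft2(kspace * mask))
    return mask, recon

phantom = np.zeros((64, 64))
phantom[20:44, 20:44] = 1.0                           # hypothetical square phantom
mask, recon = undersample_and_zero_fill(phantom)
print(mask.mean(), np.abs(recon - phantom).mean())
```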

438 Capturing an Unknown Moving Target in Unknown Territory using Vision and Coordination

Authors: Kiran Ijaz, Umar Manzoor, Arshad Ali Shahid

Abstract:

In this paper we present an extension to Vision Based LRTA* (VLRTA*), known as Vision Based Moving Target Search (VMTS), for capturing an unknown moving target in unknown territory with randomly generated obstacles. The target position is unknown to the agents, and they cannot predict it using any probabilistic method. Agents have omnidirectional vision but can see in only one direction at any point in time. An agent's vision is blocked by the obstacles in the search space, so agents cannot see through obstacles. The proposed algorithm is evaluated on a large number of scenarios. Scenarios include grids of sizes from 10x10 to 100x100. Grids had obstacles randomly placed, occupying 0% to 50% of the search space in increments of 10%. Experiments used 2 to 9 agents for each randomly generated maze with the same obstacle ratio. The observed results suggest that VMTS is effective in terms of target-location time, solution quality and the virtual target. In addition, VMTS becomes more efficient if the number of agents is increased in proportion to the obstacle ratio.

Keywords: Vision, MTS, Unknown Target, Coordination, VMTS, Multi-Agent.

437 A Model of Market Segmentation for the Customers of Mellat Bank in Iran

Authors: Nader Gharibnavaz, Hossein Yazdi

Abstract:

If an organization like Mellat Bank wants to identify its customer market completely in order to reach its specified goals, it can segment the market and offer the right product package to the right segment. Our objective is to offer a segmentation model for the Iranian banking market from Mellat Bank's point of view. The methodology of this project combines "segmentation on the basis of four part-quality variables" and "segmentation on the basis of differences in means". The required data are gathered from E-Systems and the researcher's personal observation. Finally, the research proposes that the organization, in a first step, form a four-dimensional matrix with 756 segments using four variables named value-based, behavioral, activity style, and activity level, and, in a second step, calculate the mean profit for every cell of the matrix at two distinct work levels (α1: normal condition and α2: high-pressure condition) and compare the segments by checking two conditions: 1) homogeneity of every segment with its sub-segments and 2) heterogeneity with other segments, so that the necessary segmentation process can be performed. The final recommendation (further explained by an operational example and a feedback algorithm) is to test and update the model because of the dynamic environment, technology, and banking system.

Keywords: market segmentation model, banking system, Mellat bank
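
A sketch of the first step described above: building the four-dimensional segmentation matrix and computing mean profit per cell at the two work levels. The number of levels per variable below is an assumption chosen only so that the matrix has 756 cells as in the abstract, and all records are synthetic.

```python
import numpy as np
import pandas as pd

# Hypothetical customer records; level counts (6 x 6 x 3 x 7 = 756 cells) are
# illustrative, not the paper's exact design.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "value_based": rng.integers(1, 7, n),
    "behavioral": rng.integers(1, 7, n),
    "activity_style": rng.integers(1, 4, n),
    "activity_level": rng.integers(1, 8, n),
    "profit_normal": rng.normal(100, 30, n),      # work level alpha1
    "profit_pressure": rng.normal(80, 40, n),     # work level alpha2
})

# Mean profit per cell of the 4-D segmentation matrix, under both work levels.
cells = (df.groupby(["value_based", "behavioral", "activity_style", "activity_level"])
           [["profit_normal", "profit_pressure"]].mean())
print(cells.head())
```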

436 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems

Authors: Bassam Istanbouli

Abstract:

With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good quality products at competitive prices, when and how the customer wants them. In order to achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise changes will be extremely expensive, slow and accompanied by many combinatorial effects. Those combinatorial effects impact the whole organizational structure, from a management, financial, documentation and logistics perspective, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization's global software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP software for its business needs, and if its business processes are normalized and modular, then this will most probably yield a normalized and modular software system that can be easily modified when the business evolves. Another important goal of this paper is to increase awareness regarding the design of the business processes in a software implementation project. If the blueprints created are normalized, then the software developers and configurators can use those modular blueprints and map them into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring and/or implementing a software system for an organization by using two methods: the Software Development Lifecycle method (SDLC) and the Accelerated SAP implementation method (ASAP). Both methods start with the customer requirements, then blueprinting of the business processes, and finally mapping those processes into a software system. Since those requirements and processes are the starting point of the implementation process, normalizing those processes will end up in normalized software.

Keywords: Blueprint, ERP, SDLC, Modular.

435 Detection of Actuator Faults for an Attitude Control System using Neural Network

Authors: S. Montenegro, W. Hu

Abstract:

The objective of this paper is to develop a neural network-based residual generator to detect faults in the actuators of the attitude control system (ACS) of a specific communication satellite. First, a dynamic multilayer perceptron network with dynamic neurons is used; each of these neurons combines a second-order linear Infinite Impulse Response (IIR) filter with a nonlinear activation function with adjustable parameters. Second, the parameters of the network are adjusted to minimize a performance index specified by the output estimation error, using input-output data collected from the specific ACS. The proposed dynamic neural network is then trained and applied to detect faults injected into the wheel, which is the main actuator in the normal mode of the communication satellite. The performance and capabilities of the proposed network were tested and compared with a conventional model-based observer residual, showing the differences between these two methods and indicating the benefit of the proposed algorithm for knowing the real status of the momentum wheel. Finally, the application of the methods in a satellite ground station is discussed.

Keywords: Satellite, Attitude Control, Momentum Wheel, Neural Network, Fault Detection.
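
A minimal sketch of one dynamic neuron as described in the abstract: a static weighting of the inputs, a second-order IIR filter, and a nonlinear activation with an adjustable gain. The filter coefficients, weights, and input signals are placeholders rather than values identified from the ACS data.

```python
import numpy as np
from scipy.signal import lfilter

def dynamic_neuron(u: np.ndarray, w: np.ndarray,
                   b: list, a: list, gain: float = 1.0) -> np.ndarray:
    """One dynamic neuron: weighted inputs, a second-order linear IIR filter,
    then a nonlinear activation with an adjustable gain."""
    s = u @ w                          # static weighting of the input channels
    x = lfilter(b, a, s)               # second-order linear IIR dynamics
    return np.tanh(gain * x)           # adjustable nonlinear activation

t = np.linspace(0, 10, 500)
u = np.column_stack([np.sin(t), np.cos(2 * t)])      # hypothetical ACS signals
y = dynamic_neuron(u, w=np.array([0.7, 0.3]),
                   b=[0.2, 0.1, 0.05], a=[1.0, -0.6, 0.2])
print(y[:5])
```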

434 Improved Automated Classification of Alcoholics and Non-alcoholics

Authors: Ramaswamy Palaniappan

Abstract:

In this paper, several improvements are proposed to previous work on the automated classification of alcoholics and non-alcoholics. In the previous paper, a multilayer-perceptron neural network classifying the energy of gamma band Visual Evoked Potential (VEP) signals gave the best classification performance, using 800 VEP signals from 10 alcoholics and 10 non-alcoholics. Here, the dataset is extended to include 3560 VEP signals from 102 subjects: 62 alcoholics and 40 non-alcoholics. Three modifications are introduced to improve the classification performance: i) increasing the gamma band spectral range by increasing the pass-band width of the filter used, ii) the use of the Multiple Signal Classification (MUSIC) algorithm to obtain the power of the dominant frequency in gamma band VEP signals as features, and iii) the use of the simple but effective k-nearest neighbour classifier. To validate that these modifications do give improved performance, a 10-fold cross validation classification (CVC) scheme is used. Repeat experiments of the previously used methodology for the extended dataset are performed here, and an improvement from 94.49% to 98.71% in maximum averaged CVC accuracy is obtained using the modifications. These latest results show that VEP-based classification of alcoholics is worth exploring further for system development.

Keywords: Alcoholic, Multilayer-perceptron, Nearest neighbour, Gamma band, MUSIC, Visual evoked potential.
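
The validation scheme in the abstract can be reproduced in outline with a k-nearest neighbour classifier under 10-fold cross-validation. The synthetic feature matrix and the assumption of one MUSIC-derived gamma-band power per channel (61 channels assumed here) are placeholders for the real VEP features.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one dominant gamma-band power value per VEP
# channel (the MUSIC-derived feature of the abstract), 61 channels assumed.
rng = np.random.default_rng(0)
X = rng.normal(size=(3560, 61))
y = rng.integers(0, 2, size=3560)                 # alcoholic vs. non-alcoholic

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, X, y, cv=10)        # 10-fold cross-validation
print(scores.mean(), scores.std())
```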

433 Data Envelopment Analysis with Partially Perfect Objects

Authors: Alexander Y. Vaninsky

Abstract:

This paper presents a simplified version of Data Envelopment Analysis (DEA) - a conventional approach to evaluating the performance and ranking of competitive objects characterized by two groups of factors acting in opposite directions: inputs and outputs. DEA with a Perfect Object (DEA PO) augments the group of actual objects with a virtual Perfect Object - the one having the greatest outputs and smallest inputs. It allows for obtaining an explicit analytical solution and makes a step towards absolute efficiency. This paper develops this approach further and introduces a DEA model with Partially Perfect Objects. DEA PPO consecutively eliminates the smallest relative inputs or greatest relative outputs, and applies DEA PO to the reduced collections of indicators. The partial efficiency scores are combined to get the weighted efficiency score. The computational scheme remains simple, like that of DEA PO, but the advantage of DEA PPO is that it takes into account all of the inputs and outputs of each actual object. Firm evaluation is considered as an example.

Keywords: Data Envelopment Analysis, Perfect object, Partially perfect object, Partial efficiency, Explicit solution, Simplified algorithm.
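
A hedged sketch of the Perfect Object idea on which DEA PPO builds: augment the data with a virtual object having the smallest inputs and greatest outputs, and score each actual object against it. The simple ratio used below is an illustrative reading only, not the paper's explicit analytical solution or its weighting scheme.

```python
import numpy as np

def perfect_object_scores(inputs: np.ndarray, outputs: np.ndarray) -> np.ndarray:
    """Score each object by how close it comes to the virtual Perfect Object
    (smallest inputs, greatest outputs). Simple ratio form for illustration;
    the paper's explicit solution may combine the indicators differently."""
    x_po = inputs.min(axis=0)          # perfect object: smallest of every input
    y_po = outputs.max(axis=0)         # ... and greatest of every output
    out_ratio = (outputs / y_po).mean(axis=1)
    in_ratio = (x_po / inputs).mean(axis=1)
    return out_ratio * in_ratio        # 1.0 only if an object matches the ideal

inputs = np.array([[2.0, 4.0], [3.0, 3.0], [5.0, 2.0]])
outputs = np.array([[10.0, 1.0], [8.0, 2.0], [6.0, 3.0]])
print(perfect_object_scores(inputs, outputs))
```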

432 An Artificial Neural Network Based Model for Predicting H2 Production Rates in a Sucrose-Based Bioreactor System

Authors: Nikhil, Bestamin Özkaya, Ari Visa, Chiu-Yue Lin, Jaakko A. Puhakka, Olli Yli-Harja

Abstract:

The performance of sucrose-based H2 production in a completely stirred tank reactor (CSTR) was modeled by a neural network trained with the back-propagation (BP) algorithm. The H2 production was monitored over a period of 450 days at 35±1 ºC. The proposed model predicts H2 production rates based on hydraulic retention time (HRT), recycle ratio, sucrose concentration and degradation, biomass concentrations, pH, alkalinity, oxidation-reduction potential (ORP), and acid and alcohol concentrations. Artificial neural networks (ANNs) have the ability to capture non-linear information very efficiently. In this study, a predictive controller is proposed for the management and operation of large-scale H2-fermenting systems. The relevant control strategies can be activated by this method. The BP-based ANN modeling results were very successful, and an excellent match was obtained between the measured and the predicted rates. Efficient H2 production and system control can be provided by the predictive control method combined with the robust BP-based ANN modeling tool.

Keywords: Back-propagation, biohydrogen, bioprocess modeling, neural networks.

431 Influence of Security on Fan Attendance during Nigeria Professional Football League Matches

Authors: B. O. Diyaolu

Abstract:

The stadium transcends a field of play to become the cultural heritage of a club, especially when there is security of life and property and a conducive environment with exciting media facilities, CCTV and an adequate field of play. Football fans love watching their clubs' matches, especially when nothing discourages their presence in the stadium. This study investigated the influence of security on fans' attendance during Nigeria Professional Football League matches. A descriptive survey research design was used, and the population consists of all Nigeria Professional Football League fans. A simple random sampling technique was used to pick a state from each of the six geo-political zones. 600 respondents comprising male and female fans were sampled from the ten selected vendors' stands in each selected state. A structured questionnaire, the Security and Fan Attendance Scale (SFAS), was used. The instrument consists of two sections. Section A seeks information on the demographic data of the respondents, while section B was used to elicit information on security and fans' attendance. The modified instrument, which consists of 20 items, has a reliability coefficient of 0.73. The hypothesis was tested at the 0.05 significance level. The completed questionnaires were collated, coded and analyzed using descriptive statistics of frequency counts and percentages and the inferential statistic of chi-square (χ²). The findings of this study revealed that adequate security significantly influences fan attendance during Nigeria Professional Football League matches. No sport can develop if the facilities in use are inadequate. Improving the condition of stadiums in Nigeria is paramount to the development of the Nigeria Professional Football League. All stakeholders in the organization of the League must take into consideration the need to improve the standard of the stadiums, as this will help to increase the attendance of fans during matches. Only the standard ones should be used during matches.

Keywords: Adequate Security, fans attendance, football fans, football stadium, Nigeria Professional Football League.

430 Binarization of Text Region based on Fuzzy Clustering and Histogram Distribution in Signboards

Authors: Jonghyun Park, Toan Nguyen Dinh, Gueesang Lee

Abstract:

In this paper, we present a novel approach to accurately detect text regions, including the shop name, in signboard images with complex backgrounds for mobile system applications. The proposed method is based on the combination of text detection using edge profiles and region segmentation using the fuzzy c-means method. In the first step, we apply the Canny edge operator to extract all possible object edges. Then, edge profile analysis in the vertical and horizontal directions is performed on these edge pixels to detect potential text regions containing the shop name in a signboard. The edge profile and geometrical characteristics of each object contour are carefully examined to construct candidate text regions and to separate the main text region from the background. Finally, the fuzzy c-means algorithm is applied to segment and binarize the detected text region. Experimental results show that our proposed method is robust in text detection with respect to different character sizes and colors and can provide reliable text binarization results.

Keywords: Text detection, edge profile, signboard image, fuzzy clustering.
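
A simplified stand-in for the final binarization step: two-cluster fuzzy c-means on pixel intensities, with the darker cluster taken as text. The edge-profile detection stages are omitted, and the membership exponent, iteration count, and synthetic image below are assumptions of the sketch.

```python
import numpy as np

def fcm_binarize(gray: np.ndarray, m: float = 2.0, iters: int = 50) -> np.ndarray:
    """Two-cluster fuzzy c-means on pixel intensities; returns a binary mask
    where 1 marks the darker (assumed text) cluster."""
    x = gray.reshape(-1, 1).astype(float)
    centers = np.array([[x.min()], [x.max()]])              # text vs. background seeds
    for _ in range(iters):
        d = np.abs(x - centers.T) + 1e-9                     # (n_pixels, 2) distances
        u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        centers = (u.T ** m @ x) / np.sum(u.T ** m, axis=1, keepdims=True)
    labels = np.argmax(u, axis=1)
    dark = int(np.argmin(centers.ravel()))                   # darker cluster = text
    return (labels == dark).reshape(gray.shape).astype(np.uint8)

img = np.clip(np.random.normal(200, 20, (32, 32)), 0, 255)
img[10:22, 8:24] = np.random.normal(40, 15, (12, 16))        # hypothetical dark text
print(fcm_binarize(img).sum())
```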

429 A Framework for Scalable Autonomous P2P Resource Discovery for the Grid Implementation

Authors: Hesham A. Ali, Mofreh M. Salem, Ahmed A. Hamza

Abstract:

Recently, there have been considerable efforts towards the convergence between P2P and Grid computing in order to reach a solution that takes the best of both worlds by exploiting the advantages that each offers. Augmenting the peer-to-peer model with the services of the Grid promises to eliminate bottlenecks and ensure greater scalability, availability, and fault-tolerance. The Grid Information Service (GIS) directly influences quality of service for grid platforms. Most of the proposed solutions for decentralizing the GIS are based on completely flat overlays. The main contributions of this paper are: the investigation of a novel resource discovery framework for Grid implementations based on a hierarchy of structured peer-to-peer overlay networks, and the introduction of a discovery algorithm utilizing the proposed framework. Validation of the framework's performance is done via simulation. Experimental results show that the proposed organization has the advantage of being scalable while providing fault-isolation, effective bandwidth utilization, and hierarchical access control. In addition, it will lead to a reliable, guaranteed sub-linear search which returns results within a bounded interval of time and with a smaller amount of generated traffic within each domain.

Keywords: Grid computing, grid information service, P2P, resource discovery.

428 Computer-Assisted Piston-Driven Ventilator for Total Liquid Breathing

Authors: Miguel A. Gómez, Enrique Hilario, Francisco J. Alvarez, Elena Gastiasoro, Antonia Alvarez, Jose A. Casla, Jorge Arguinchona, Juan L. Larrabe

Abstract:

Total liquid ventilation can support gas exchange in animal models of lung injury. Clinical application awaits further technical improvements and performance verification. Our aim was to develop a liquid ventilator, able to deliver accurate tidal volumes, and a computerized system for measuring lung mechanics. The computer-assisted, piston-driven respirator controlled ventilatory parameters that were displayed and modified on a real-time basis. Pressure and temperature transducers along with a lineal displacement controller provided the necessary signals to calculate lung mechanics. Ten newborn lambs (<6 days old) with respiratory failure induced by lung lavage, were monitored using the system. Electromechanical, hydraulic and data acquisition/analysis components of the ventilator were developed and tested in animals with respiratory failure. All pulmonary signals were collected synchronized in time, displayed in real-time, and archived on digital media. The total mean error (due to transducers, A/D conversion, amplifiers, etc.) was less than 5% compared to calibrated signals. Improvements in gas exchange and lung mechanics were observed during liquid ventilation, without impairment of cardiovascular profiles. The total liquid ventilator maintained accurate control of tidal volumes and the sequencing of inspiration/expiration. The computerized system demonstrated its ability to monitor in vivo lung mechanics, providing valuable data for early decision-making.

Keywords: Immature lamb, perfluorocarbon, pressure-limited, total liquid ventilation, ventilator, volume-controlled.

427 A Comparative Analysis of Fuzzy, Neuro-Fuzzy and Fuzzy-GA Based Approaches for Software Reusability Evaluation

Authors: Parvinder Singh Sandhu, Dalwinder Singh Salaria, Hardeep Singh

Abstract:

Software reusability is a primary attribute of software quality. There are metrics for identifying the quality of reusable components, but the function that makes use of these metrics to find the reusability of software components is still not clear. If these metrics are identified in the design phase, or even in the coding phase, they can help us to reduce rework by improving the quality of reuse of the component and hence improve productivity due to a probabilistic increase in the reuse level. In this paper, we have devised a framework of metrics that uses McCabe's Cyclomatic Complexity Measure for complexity measurement, the Regularity Metric, Halstead's Software Science Indicator for volume indication, the Reuse Frequency metric and the Coupling Metric values of the software component as input attributes, and calculates the reusability of the software component. A comparative analysis of the fuzzy, neuro-fuzzy and fuzzy-GA approaches is performed to evaluate the reusability of software components, and the fuzzy-GA results outperform the other approaches. The developed reusability model has produced high-precision results, as expected by the human experts.

Keywords: Software Reusability, Software Metrics, Neural Networks, Genetic Algorithm, Fuzzy Logic.

426 A Particle Swarm Optimal Control Method for DC Motor by Considering Energy Consumption

Authors: Yingjie Zhang, Ming Li, Ying Zhang, Jing Zhang, Zuolei Hu

Abstract:

In the actual start-up process of DC motors, the DC drive system often faces a conflict between energy consumption and acceleration performance. To resolve the conflict, this paper proposes a comprehensive performance index in which an energy consumption index is added to the classical control performance index for the DC motor starting process. Taking the comprehensive performance index as the cost function, a particle swarm optimization algorithm is designed to optimize the comprehensive performance. Simulations of the optimization of the comprehensive performance of the DC motor are then conducted, on the condition that the weight coefficient of the energy consumption index is properly designed. The simulation results show that as the weight of energy consumption increased, the energy efficiency was significantly improved at the expense of a slight sacrifice in the speed-of-response indicators with the comprehensive performance index method. Compared with a traditional proportional-integral-derivative controller, the energy efficiency was increased from 63.18% to 68.48% while the response time was simultaneously reduced from 0.2875 s to 0.1736 s.

Keywords: Comprehensive performance index, energy consumption, acceleration performance, particle swarm optimal control.
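
A compact sketch of the optimization loop described above: a plain particle swarm searches controller parameters that minimize a weighted sum of a response term and an energy term. The two-parameter controller, the surrogate cost terms, and the weight value are hypothetical; the paper evaluates its index on a simulated DC motor start-up.

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization over controller parameters."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical comprehensive index: weighted sum of a response-time surrogate and
# an energy surrogate of the start-up profile; the paper simulates the real motor.
def comprehensive_index(params, w_energy=0.4):
    kp, ki = params
    response = (kp - 8.0) ** 2 + (ki - 2.0) ** 2          # stand-in for fastness terms
    energy = 0.05 * (kp ** 2 + ki ** 2)                    # stand-in for energy term
    return (1 - w_energy) * response + w_energy * energy

best, val = pso(comprehensive_index, bounds=[(0, 20), (0, 10)])
print(best, val)
```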

425 New Approach for Minimizing Wavelength Fragmentation in Wavelength-Routed WDM Networks

Authors: Sami Baraketi, Jean-Marie Garcia, Olivier Brun

Abstract:

Wavelength Division Multiplexing (WDM) is the dominant transport technology used in numerous high capacity backbone networks based on optical infrastructures. Given the importance of the costs (CapEx and OpEx) associated with these networks, resource management is becoming increasingly important, especially how the optical circuits, called “lightpaths”, are routed throughout the network. This requires the use of efficient algorithms which provide routing strategies with the lowest cost. We focus on the lightpath routing and wavelength assignment problem, known as the RWA problem, while optimizing wavelength fragmentation over the network. Wavelength fragmentation poses a serious challenge for network operators since it leads to the misuse of the wavelength spectrum and then to the refusal of new lightpath requests. In this paper, we first establish a new Integer Linear Program (ILP) for the problem based on a node-link formulation. This formulation is based on a multilayer approach where the original network is decomposed into several network layers, each corresponding to a wavelength. Furthermore, we propose an efficient heuristic for the problem based on a greedy algorithm followed by a post-treatment procedure. The obtained results show that the optimal solution is often reached. We also compare our results with those of other RWA heuristic methods.

Keywords: WDM, lightpath, RWA, wavelength fragmentation, optimization, linear programming, heuristic
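
As a baseline for the greedy idea, the sketch below applies first-fit wavelength assignment to pre-routed lightpaths, reusing low wavelength indices so the spectrum stays packed. The routing step, the ILP, and the post-treatment procedure from the paper are not modelled, and the toy requests are illustrative.

```python
from itertools import count

def first_fit_rwa(requests, n_wavelengths=8):
    """Greedy first-fit wavelength assignment for pre-routed lightpaths: each
    request is a list of links, and the lowest wavelength free on every link of
    the path is chosen, which tends to limit spectrum fragmentation."""
    used = set()                                # occupied (link, wavelength) pairs
    assignment = {}
    for req_id, path in zip(count(), requests):
        for wl in range(n_wavelengths):
            if all((link, wl) not in used for link in path):
                used.update((link, wl) for link in path)
                assignment[req_id] = wl
                break
        else:
            assignment[req_id] = None           # blocked request
    return assignment

requests = [[("A", "B"), ("B", "C")], [("B", "C"), ("C", "D")], [("A", "B")]]
print(first_fit_rwa(requests, n_wavelengths=2))
```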

424 Analysis of Three-Dimensional Longitudinal Rolls Induced by Double Diffusive Poiseuille-Rayleigh-Benard Flows in Rectangular Channels

Authors: O. Rahli, N. Mimouni, R. Bennacer, K. Bouhadef

Abstract:

This numerical study investigates the appearance of travelling waves and the behavior of the Poiseuille-Rayleigh-Benard (PRB) flow induced in 3D thermosolutal mixed convection (TSMC) in horizontal rectangular channels. The governing equations are discretized by using a control volume method with the third-order QUICK scheme for approximating the advection terms. The SIMPLER algorithm is used to handle the coupling between the momentum and continuity equations. To avoid excessively high computation times, the full approximation storage (FAS) full multigrid (FMG) method is used to solve the problem. For a broad range of dimensionless controlling parameters, the contribution of this work is to analyze the flow regimes of the steady longitudinal thermoconvective rolls (denoted R//) for both heat and mass transfer (TSMC). The transition from opposing volume forces to cooperating ones considerably affects the birth and development of the longitudinal rolls. The heat and mass transfer distributions are also examined.

Keywords: Heat and mass transfer, mixed convection, Poiseuille-Rayleigh-Benard flow, rectangular duct.

423 Error Rate Performance Comparisons of Precoding Schemes over Fading Channels for Multiuser MIMO

Authors: M. Arulvizhi

Abstract:

In multiuser MIMO communication systems, interuser interference has a strong impact on the transmitted signals. Precoding schemes are employed for multiuser broadcast channels to suppress interuser interference. Both linear and nonlinear precoding schemes exist. For massive system dimensions, it is difficult to design an appropriate precoding algorithm with low computational complexity and good error rate performance at the same time over fading channels. This paper describes the error rate performance of precoding schemes over fading channels under the assumption of perfect channel state information at the transmitter. To estimate the bit error rate performance, different propagation environments, namely Rayleigh, Rician and Nakagami fading channels, are considered. The paper presents an error rate performance comparison over these fading channels based on precoding methods such as channel inversion and dirty paper coding for a multiuser broadcasting system. MATLAB simulation has been used. It is observed that the multiuser system achieves better error rate performance with dirty paper coding over the Rayleigh fading channel.

Keywords: Multiuser MIMO, channel inversion precoding, dirty paper coding, fading channels, BER.
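
A minimal sketch of channel inversion (zero-forcing) precoding under the paper's assumption of perfect channel state information at the transmitter: the transmit vector is formed with the pseudo-inverse of the channel so each user receives only its own symbol. The QPSK alphabet, antenna counts, and power normalization are illustrative choices.

```python
import numpy as np

def channel_inversion_precode(H: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Zero-forcing channel inversion for a multiuser MIMO broadcast channel:
    precode with the pseudo-inverse of H so each user sees only its own symbol
    (perfect CSI at the transmitter assumed)."""
    x = np.linalg.pinv(H) @ s                              # unnormalized transmit vector
    return x / np.linalg.norm(x) * np.linalg.norm(s)       # simple power normalization

rng = np.random.default_rng(0)
users, tx_antennas = 4, 4
H = (rng.normal(size=(users, tx_antennas))                 # one Rayleigh channel draw
     + 1j * rng.normal(size=(users, tx_antennas))) / np.sqrt(2)
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=users)   # QPSK symbols
x = channel_inversion_precode(H, s)
print(np.round(H @ x, 3))     # received vector is a scaled copy of s, no interference
```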

422 Artificial Intelligent in Optimization of Steel Moment Frame Structures: A Review

Authors: Mohsen Soori, Fooad Karimi Ghaleh Jough

Abstract:

The integration of Artificial Intelligence (AI) techniques in the optimization of steel moment frame structures represents a transformative approach to enhancing the design, analysis, and performance of these critical engineering systems. The review encompasses a wide spectrum of AI methods, including machine learning algorithms, evolutionary algorithms, neural networks, and optimization techniques, applied to address various challenges in the field. The synthesis of research findings highlights the interdisciplinary nature of AI applications in structural engineering, emphasizing the synergy between domain expertise and advanced computational methodologies. This synthesis aims to serve as a valuable resource for researchers, practitioners, and policymakers seeking a comprehensive understanding of the state of the art in AI-driven optimization for steel moment frame structures. The paper commences with an overview of the fundamental principles governing steel moment frame structures and identifies the key optimization objectives, such as structural efficiency. Subsequently, it delves into the application of AI in the conceptual design phase, where algorithms aid in generating innovative structural configurations and optimizing material utilization. The review also explores the use of AI for real-time structural health monitoring and predictive maintenance, contributing to the long-term sustainability and reliability of steel moment frame structures. Furthermore, the paper investigates how AI-driven algorithms facilitate the calibration of structural models, enabling accurate prediction of dynamic responses and seismic performance. Thus, by reviewing and analyzing recent achievements in the application of artificial intelligence to the optimization of steel moment frame structures, the processes of designing, analyzing, and assessing the performance of these structures can be examined and improved.

Keywords: Artificial Intelligent, optimization process, steel moment frame, structural engineering.

421 A Noble Flow Rate Control based on Leaky Bucket Method for Multi-Media OBS Networks

Authors: Kentaro Miyoko, Yoshihiko Mori, Yugo Ikeda, Yoshihiro Nishino, Yong-Bok Choi, Hiromi Okada

Abstract:

Optical burst switching (OBS) has been proposed to realize the next generation Internet based on wavelength division multiplexing (WDM) network technologies. In OBS, burst contention is one of the major problems. Deflection routing has been designed to resolve this problem. However, it becomes difficult for deflection routing to prevent burst contentions as the network load becomes high. In this paper, we introduce flow rate control methods to reduce burst contentions. We propose new flow rate control methods based on the leaky bucket algorithm and deflection routing, i.e. a separate leaky bucket deflection method and a dynamic leaky bucket deflection method. In the proposed methods, the edge nodes which generate data bursts carry out the flow rate control protocols. In order to verify the effectiveness of flow rate control in OBS networks, we show through computer simulations that the proposed methods improve network utilization and reduce the burst loss probability.

Keywords: Optical burst switching, OBS, flow rate control.
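
A generic leaky-bucket shaper of the kind the edge-node control builds on: bursts are admitted only while the bucket has room, which caps the average injection rate. The rate, capacity, and burst sizes below are placeholders, and the separate and dynamic deflection variants from the paper are not modelled.

```python
import time

class LeakyBucket:
    """Simple leaky-bucket shaper: a burst is admitted only when the bucket has
    drained enough, capping the average injection rate of an edge node."""
    def __init__(self, rate_bytes_per_s: float, capacity_bytes: float):
        self.rate = rate_bytes_per_s
        self.capacity = capacity_bytes
        self.level = 0.0
        self.last = time.monotonic()

    def admit(self, burst_bytes: float) -> bool:
        now = time.monotonic()
        self.level = max(0.0, self.level - (now - self.last) * self.rate)  # leak
        self.last = now
        if self.level + burst_bytes <= self.capacity:
            self.level += burst_bytes
            return True
        return False                       # burst delayed or deflected instead

bucket = LeakyBucket(rate_bytes_per_s=1e6, capacity_bytes=5e5)
print([bucket.admit(2e5) for _ in range(4)])   # later bursts are throttled
```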

420 An Inverse Heat Transfer Algorithm for Predicting the Thermal Properties of Tumors during Cryosurgery

Authors: Mohamed Hafid, Marcel Lacroix

Abstract:

This study aimed at developing an inverse heat transfer approach for predicting the time-varying freezing front and the temperature distribution of tumors during cryosurgery. Using a temperature probe pressed against the layer of tumor, the inverse approach is able to predict simultaneously the metabolic heat generation and the blood perfusion rate of the tumor. Once these parameters are predicted, the temperature field and time-varying freezing fronts are determined with the direct model. The direct model rests on the one-dimensional Pennes bioheat equation. The phase change problem is handled with the enthalpy method. The Levenberg-Marquardt Method (LMM) combined with the Broyden Method (BM) is used to solve the inverse model. The effects (a) of the thermal properties of the diseased tissues; (b) of the initial guesses for the unknown thermal properties; (c) of the data capture frequency; and (d) of the noise on the recorded temperatures are examined. It is shown that the proposed inverse approach remains accurate for all the cases investigated.

Keywords: Cryosurgery, inverse heat transfer, Levenberg-Marquardt method, thermal properties, Pennes model, enthalpy method.
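
A generic Levenberg-Marquardt loop of the kind used by the inverse model, fitting unknown parameters so a forward model reproduces the probe temperatures. The exponential forward model, the finite-difference Jacobian (instead of Broyden updates), and the noise level are assumptions of the sketch, not the Pennes-equation solver of the paper.

```python
import numpy as np

def levenberg_marquardt(forward, p0, t_measured, iters=50, mu=1e-2):
    """Fit parameters p so forward(p) matches measured temperatures using a
    damped Gauss-Newton (Levenberg-Marquardt) update."""
    p = np.array(p0, dtype=float)
    for _ in range(iters):
        r = forward(p) - t_measured                       # residual vector
        J = np.empty((r.size, p.size))
        for j in range(p.size):                           # finite-difference Jacobian
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (forward(p + dp) - forward(p)) / dp[j]
        step = np.linalg.solve(J.T @ J + mu * np.eye(p.size), -J.T @ r)
        if np.linalg.norm(forward(p + step) - t_measured) < np.linalg.norm(r):
            p, mu = p + step, mu * 0.5                    # accept, trust model more
        else:
            mu *= 2.0                                     # reject, damp harder
    return p

# Hypothetical forward model: exponential cooling whose rate and amplitude play
# the role of the unknown thermal parameters.
times = np.linspace(0, 10, 40)
true_p = np.array([0.8, 3.0])
def forward(p): return p[1] * np.exp(-p[0] * times)
t_measured = forward(true_p) + np.random.default_rng(0).normal(0, 0.01, times.size)
print(levenberg_marquardt(forward, p0=[0.3, 1.0], t_measured=t_measured))
```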

419 Optimized Energy Scheduling Algorithm for Energy Efficient Wireless Sensor Networks

Authors: S. Arun Rajan, S. Bhavani

Abstract:

Wireless sensor networks (WSNs) consist of tiny, low-cost, intelligent sensors connected by advanced communication systems. WSNs have attracted significant attention because industrial as well as medical applications employ them for target monitoring, environmental observation, obstacle detection, motion control, etc. In these applications, sensor nodes are densely deployed in unattended environments with small non-rechargeable batteries. This constraint requires energy-efficient systems to prolong the network lifetime. There are redundancies in the data sent over the network. To overcome this, multiple virtual backbone scheduling has been presented. Such network problems are called Maximum Lifetime Backbone Scheduling (MLBS) problems. Though this sleep-wake cycle reduces radio usage, improvement can be made in the way the cluster heads are selected. Cluster head selection with emphasis on the geometrical relations of the network will enhance load sharing among the nodes. The data are also analyzed to reduce redundant transmission. Multi-hop communication will facilitate lighter loads on the network.

Keywords: WSN, wireless sensor networks, MLBS, maximum lifetime backbone scheduling.

418 Influence of Fiber Packing on Transverse Plastic Properties of Metal Matrix Composites

Authors: Mohammad Tahaye Abadi

Abstract:

The present paper is concerned with the influence of fiber packing on the transverse plastic properties of metal matrix composites. A micromechanical modeling procedure is used to predict the effective mechanical properties of composite materials at large tensile and compressive deformations. The microstructure is represented by a repeating unit cell (RUC). Two fiber arrays are considered, including an ideal square fiber packing and a random fiber packing defined by a random sequential algorithm. The micromechanical modeling procedure is implemented for a graphite/aluminum metal matrix composite in which the reinforcement behaves as an elastic, isotropic solid and the matrix is modeled as an isotropic elastic-plastic solid following the von Mises criterion with isotropic hardening and the Ramberg-Osgood relationship between equivalent true stress and logarithmic strain. The deformation is increased to a considerable value to evaluate both the elastic and plastic behaviors of metal matrix composites. The yield strength and true elastic-plastic stress are determined for graphite/aluminum composites.

Keywords: Fiber packing, metal matrix composites, micromechanics, plastic deformation, random
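
For reference, one commonly used form of the Ramberg-Osgood relationship is sketched below; the elastic modulus, yield stress, offset, and hardening exponent are illustrative values, not the constants used for the aluminum matrix in the paper.

```python
import numpy as np

def ramberg_osgood_strain(sigma, E=70e3, sigma_y=250.0, alpha=0.002, n=10.0):
    """One common Ramberg-Osgood form: strain = sigma/E + alpha*(sigma/sigma_y)**n.
    All constants here are illustrative placeholders (stress in MPa)."""
    sigma = np.asarray(sigma, dtype=float)
    return sigma / E + alpha * (sigma / sigma_y) ** n

print(ramberg_osgood_strain([100.0, 250.0, 300.0]))
```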

417 Massively-Parallel Bit-Serial Neural Networks for Fast Epilepsy Diagnosis: A Feasibility Study

Authors: Si Mon Kueh, Tom J. Kazmierski

Abstract:

About 1% of the world population suffers from the hidden disability known as epilepsy, and major developing countries are not fully equipped to counter this problem. In order to reduce the inconvenience and danger of epilepsy, different methods have been researched that use artificial neural network (ANN) classification to distinguish epileptic waveforms from normal brain waveforms. This paper outlines the aim of achieving massive ANN parallelization through dedicated hardware using bit-serial processing. The design of this bit-serial Neural Processing Element (NPE) is presented, which implements the functionality of a complete neuron using variable accuracy. The proposed design has been tested taking into consideration the non-idealities of a hardware ANN. The NPE consists of a bit-serial multiplier which uses only 16 logic elements on an Altera Cyclone IV FPGA, a bit-serial ALU, and a look-up table. Arrays of NPEs can be driven by a single controller which executes the neural processing algorithm. In conclusion, the proposed compact NPE design allows the construction of complex hardware ANNs that can be implemented in portable equipment suited to the needs of a single epileptic patient in his or her daily activities, to predict the occurrence of impending tonic-clonic seizures.

Keywords: Artificial Neural Networks, bit-serial neural processor, FPGA, Neural Processing Element.
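
A software sketch of the bit-serial multiply-accumulate principle behind the NPE: operand bits are consumed one per cycle and shifted partial products are accumulated, so only adder-sized logic is needed per cycle. The word length and operand values are arbitrary, and FPGA-level details (LUTs, ALU, controller) are not represented.

```python
def bit_serial_mac(weights, inputs, n_bits=8):
    """Bit-serial multiply-accumulate: each input is consumed one bit per cycle
    (LSB first) and shifted partial products of the weight are accumulated."""
    acc = 0
    for w, x in zip(weights, inputs):
        for cycle in range(n_bits):
            if (x >> cycle) & 1:           # current serial bit of the input
                acc += w << cycle          # shifted partial product
    return acc

print(bit_serial_mac([3, -2, 5], [7, 4, 1]))      # equals 3*7 - 2*4 + 5*1 = 18
```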

416 Modelling a Hospital as a Queueing Network: Analysis for Improving Performance

Authors: Emad Alenany, M. Adel El-Baz

Abstract:

In this paper, the flow of different classes of patients into a hospital is modelled and analyzed by using the queueing network analyzer (QNA) algorithm and discrete event simulation. The input data for QNA are the rate and variability parameters of the arrival and service times, in addition to the number of servers in each facility. Patient flows mostly match the real flows of a hospital in Egypt. Based on the analysis of the waiting times, two approaches are suggested for improving performance: separating patients into service groups, and adopting different service policies for sequencing patients through hospital units. Separating a specific group of patients, with a higher performance target, to be served apart from the rest of the patients requiring a lower performance target, requires the same capacity while improving performance for the selected group of patients with the higher target. Besides, it is shown that adopting the shortest processing time and shortest remaining processing time service policies, among other tested policies, would result in, respectively, 11.47% and 13.75% reductions in average waiting time relative to the first come first served policy.

Keywords: Queueing network, discrete-event simulation, health applications, SPT.
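
A toy single-server comparison of first-come-first-served against shortest-processing-time sequencing, echoing the policy comparison in the abstract; the arrival and service distributions are arbitrary, and the multi-unit hospital network and QNA parameters are not modelled.

```python
import heapq
import random

def mean_wait(jobs, policy="FCFS"):
    """Single-server queue: serve jobs in FCFS or shortest-processing-time order.
    `jobs` is a list of (arrival_time, service_time) pairs."""
    jobs = sorted(jobs)                              # by arrival time
    queue, waits, t, i = [], [], 0.0, 0
    while i < len(jobs) or queue:
        if not queue:                                # server idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < len(jobs) and jobs[i][0] <= t:     # enqueue everything that has arrived
            arr, svc = jobs[i]
            i += 1
            key = svc if policy == "SPT" else arr
            heapq.heappush(queue, (key, arr, svc))
        _, arr, svc = heapq.heappop(queue)           # pick next patient per policy
        waits.append(t - arr)
        t += svc
    return sum(waits) / len(waits)

random.seed(0)
jobs = [(random.uniform(0, 100), random.uniform(0.1, 0.8)) for _ in range(200)]
print("FCFS:", mean_wait(jobs, "FCFS"), " SPT:", mean_wait(jobs, "SPT"))
```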
