Search results for: Bootstrap approach

676 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates that the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks) induce, the security evaluation for a QSC protocol is proposed as the minimum function taken over the error rates of these attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
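
As a rough illustration of this kind of scoring, a minimal sketch is given below, assuming an equal-weight average and taking security as the minimum attack-induced error rate; all numbers are invented placeholders, not values from the paper.

```python
# Illustrative sketch of an optimality score built as the average of
# efficiency, security, and practicality, with security taken as the
# minimum error rate induced by the considered attacks.
# All numbers below are made-up placeholders, not values from the paper.

def security_score(attack_error_rates):
    """Security is taken as the minimum error rate over the quantum attacks."""
    return min(attack_error_rates.values())

def optimality(efficiency, security, practicality):
    """Equal-weight average of the three main protocol parameters."""
    return (efficiency + security + practicality) / 3.0

if __name__ == "__main__":
    attacks = {
        "intercept-and-resend": 0.25,   # hypothetical induced error rates
        "measure-and-resend": 0.25,
        "probe": 0.19,
        "entanglement-swapping": 0.50,
    }
    sec = security_score(attacks)
    print(f"security = {sec:.2f}")
    print(f"optimality = {optimality(efficiency=0.4, security=sec, practicality=0.8):.2f}")
```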

Keywords: Quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality.

675 Assessing the Effect of Grid Connection of Large-Scale Wind Farms on Power System Small-Signal Angular Stability

Authors: Wenjuan Du, Jingtian Bi, Tong Wang, Haifeng Wang

Abstract:

Grid connection of a large-scale wind farm affects power system small-signal angular stability in two ways. Firstly, connection of the wind farm changes the load flow and configuration of the power system. Secondly, the wind farm introduces dynamic interaction with the synchronous generators (SGs) in the power system. This paper proposes a method to assess these two aspects of the effect of the wind farm on power system small-signal angular stability. The effect of the change of load flow/system configuration brought about by the wind farm can be examined separately by replacing the wind farm with a constant power source; the effect of the dynamic interaction of the wind farm with the SGs can then be computed individually. Thus, a clearer picture and better understanding of power system small-signal angular stability as affected by grid connection of a large-scale wind farm are provided. In the paper, an example power system with grid connection of a wind farm is presented to demonstrate the proposed approach.

Keywords: power system small-signal angular stability, power system low-frequency oscillations, electromechanical oscillation modes, wind farms, double fed induction generator (DFIG)

674 Enhanced Particle Swarm Optimization Approach for Solving the Non-Convex Optimal Power Flow

Authors: M. R. AlRashidi, M. F. AlHajri, M. E. El-Hawary

Abstract:

An enhanced particle swarm optimization (PSO) algorithm is presented in this work to solve the non-convex OPF problem, which has both discrete and continuous optimization variables. The objective functions considered are the conventional quadratic function and the augmented quadratic function; the latter model presents non-differentiable and non-convex regions that challenge most gradient-based optimization algorithms. The optimization variables are the generator real power outputs and voltage magnitudes, discrete transformer tap settings, and discrete reactive power injections due to capacitor banks. The equality constraints are the power flow equations, while the inequality constraints are the limits on the real and reactive power of the generators, the voltage magnitude at each bus, the transformer tap settings, and the capacitor bank reactive power injections. The proposed algorithm combines PSO with the Newton-Raphson algorithm to minimize the fuel cost function. The IEEE 30-bus system with six generating units is used to test the proposed algorithm. Several cases were investigated to test and validate its consistency in detecting an optimal or near-optimal solution for each objective. Results are compared to solutions obtained using sequential quadratic programming and genetic algorithms.
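
To make the PSO component concrete, a minimal sketch of the swarm update on a purely continuous quadratic fuel cost is given below; it omits the Newton-Raphson power flow, the discrete tap and capacitor variables, and all network constraints, and the cost coefficients and limits are illustrative assumptions.

```python
import numpy as np

# Minimal particle swarm optimization of a quadratic fuel-cost function
# f(P) = sum(a + b*P + c*P^2), subject only to simple box limits.
# The constraint handling (power flow, taps, capacitor banks) used in the
# paper is omitted; coefficients and limits below are illustrative.

rng = np.random.default_rng(0)
a, b, c = np.array([10.0, 12.0]), np.array([2.0, 1.8]), np.array([0.02, 0.03])
p_min, p_max = np.array([10.0, 10.0]), np.array([100.0, 120.0])

def fuel_cost(P):
    return np.sum(a + b * P + c * P**2, axis=-1)

n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5

pos = rng.uniform(p_min, p_max, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), fuel_cost(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, p_min, p_max)        # enforce generator limits
    val = fuel_cost(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best dispatch:", gbest, "cost:", fuel_cost(gbest))
```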

Keywords: Particle Swarm Optimization, Optimal Power Flow, Economic Dispatch.

673 Supply Chain Resilience Triangle: The Study and Development of a Framework

Authors: M. Bevilacqua, F. E. Ciarapica, G. Marcucci

Abstract:

Supply Chain Resilience has been broadly studied during the last decade, with research focusing on many aspects of Supply Chain performance. Consequently, different definitions of Supply Chain Resilience have been developed by the research community, drawing inspiration also from other fields of study such as ecology, sociology, psychology, and economics. The definitions developed so far in the extant literature are therefore very heterogeneous, and many authors have pointed out a lack of consensus in this field of analysis. The aim of this research is to find common points between these definitions through the development of a framework of study: the Resilience Triangle. The Resilience Triangle is a tool developed in the field of civil engineering with the objective of modeling the loss of resilience of a given structure during and after the occurrence of a disruption such as an earthquake. The Resilience Triangle is a simple yet powerful tool: in our opinion, it can summarize all the features that authors have captured in the Supply Chain Resilience definitions over the years. This research intends to recapitulate these heterogeneities in Supply Chain Resilience research within this framework. After collecting the Supply Chain Resilience definitions present in the extant literature, the methodology provides a taxonomy step aimed at organizing and analyzing all the data gathered. The next step compares the data obtained with a plotted disruption profile, in order to contextualize the Resilience Triangle in the Supply Chain context. The tool and the results developed in this research will lay the foundation for future Supply Chain Resilience modeling and measurement work.
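
For context, the civil-engineering Resilience Triangle is commonly quantified as the area between full performance and the actual performance curve Q(t) during disruption and recovery; the sketch below computes that loss for a hypothetical, invented performance profile.

```python
import numpy as np

# Resilience-triangle style loss: area between 100% performance and the
# actual performance curve Q(t) over the disruption/recovery window.
# The piecewise-linear Q(t) below is a hypothetical disruption profile.

t = np.linspace(0, 10, 101)                       # time (e.g., weeks)
Q = np.piecewise(
    t,
    [t < 2, (t >= 2) & (t < 3), t >= 3],
    [100.0,                                       # pre-disruption performance
     lambda t: 100.0 - 60.0 * (t - 2),            # sudden drop
     lambda t: np.minimum(100.0, 40.0 + 10.0 * (t - 3))],  # gradual recovery
)

resilience_loss = np.trapz(100.0 - Q, t)          # area of the "triangle"
print(f"resilience loss = {resilience_loss:.1f} performance-time units")
```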

Keywords: Supply chain resilience, resilience definition, supply chain resilience triangle.

672 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection

Authors: Mondher Yahyaoui

Abstract:

A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized-theory assumption and accounts for all standard geometric lifting surface parameters: camber, taper, sweep, washout, and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. It is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized-theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it was found that the choice of the wake shape had very little effect on the results.

Keywords: Aileron deflection, camber-surface-bound vortices, classical VLM, Generalized VLM, flap deflection.

671 A Case Study to Observe How Students’ Perception of the Possibility of Success Impacts Their Performance in Summative Exams

Authors: Rochelle Elva

Abstract:

Faculty in Higher Education today are faced with the challenge of convincing their students of the importance of the mastery of skills through learning. This is because most students have a single motivation: to get high grades. If it appears that this goal will not be met, they lose their motivation and their academic efforts wane. This is true even for students in the competitive fields of STEM, including Computer Science majors. As educators, we have to understand our students and leverage what motivates them to achieve our learning outcomes. This paper presents a case study that utilizes cognitive psychology's Expectancy-Value Theory and Motivation Theory to investigate the effect of sustained expectancy for success on students' learning outcomes. In our case study, we explore how students' motivation and persistence in their academic efforts are affected by providing them with an unexpected path to success that continues to the end of the semester. The approach was tested in an undergraduate computer science course with n = 56. The results of the study indicate that when presented with a real possibility of success, despite existing low grades, both low- and high-scoring students persisted in their efforts to improve their performance. Their final grades were on average one step higher on the +/- letter grade scale, with some students scoring as high as three steps above their predicted grade.

Keywords: Expectancy for success and persistence, motivation and performance, computer science education, motivation and performance in computer science.

670 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model

Authors: Bikramaditya Das, Susmita Das

Abstract:

The suppression of interference using time-domain equalizers is studied for a high-data-rate impulse radio (IR) ultra-wideband (UWB) communication system. Narrowband systems may interfere with UWB devices, since UWB has very low transmission power and large bandwidth. The SRake receiver improves system performance by equalizing signals from different paths, which enables the use of SRake receiver techniques in IR-UWB systems. However, the Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time-domain equalizer is proposed to overcome this by taking into account the effects of both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. The study of non-line-of-sight indoor channel models (both CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that, for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays the more significant role in suppressing interference.

Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.

669 The Experiences of Coronary Heart Disease Patients: Biopsychosocial Perspective

Authors: Christopher C. Anyadubalu

Abstract:

Biological, psychological and social experiences and perceptions of healthcare services in patients medically diagnosed with coronary heart disease were investigated using a sample of 10 participants, whose responses to the in-depth interview questions were analyzed based on inter- and intra-case analyses. The results obtained revealed that advancing age, single status, divorce and/or death of a spouse, and the issue of single parenting negatively impacted patients' biopsychosocial experiences. The patients' experiences of physical signs and symptoms, anxiety and depression, past serious medical conditions, use of self-prescribed medications, family history of poor mental or physical health, nutritional problems, and insufficient physical activity heightened their risk of a coronary attack. Collectivist culture served as a big source of relief to the patients. Patients' temperament, experience of different chronic life stresses and challenges, mood alteration, regular drinking, smoking or gambling, and family and social impairments compounded their health situation. Patients were satisfied with the biomedical services rendered by the healthcare personnel, whereas their psychological and social needs were not attended to. An effective procedural treatment model, a holistic and multidimensional approach to the treatment of heart disease patients, was proposed.

Keywords: Biopsychosocial, Coronary Heart Disease, Experience, Patients, Perception, Perspective.

668 The Optimal Placement of Capacitors to Reduce Losses and Improve the Voltage Profile of the Distribution Network Using GA and SA

Authors: Limouzade E., Joorabian M.

Abstract:

Most of the losses in a power system occur in the distribution sector, which has therefore always received attention. One of the important factors contributing to increased losses in the distribution system is the existence of reactive power flows. The most common way to compensate reactive power in the system is to use parallel (shunt) capacitors. In addition to reducing losses, the advantages of capacitor placement include releasing network capacity at peak load and improving the voltage profile. The point to be considered in capacitor placement is the optimal location and sizing of the capacitors in order to maximize these benefits. In this paper, a new technique is offered for the placement and sizing of fixed capacitors in a radial distribution network on the basis of the Genetic Algorithm (GA). Existing optimal methods for capacitor placement mostly reduce losses and improve the voltage profile simultaneously, but the capacitor installation cost and load changes have not been considered as influential terms in the objective function. In this article, a holistic approach to this problem is taken which includes all the relevant parameters of the distribution network: cost, bus voltages, and load changes. A vast search over all possible solutions is therefore required, so the Genetic Algorithm (GA) is used as a powerful method for this optimal search.
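
A toy genetic algorithm of the kind described above is sketched below, with each gene encoding the capacitor size installed at one candidate bus; the loss term is a crude surrogate rather than a load-flow-based objective, and the bus loads, capacitor sizes, and cost figure are invented for illustration.

```python
import numpy as np

# Toy GA for capacitor sizing at candidate buses of a radial feeder.
# Each gene holds the capacitor size (kvar) installed at one candidate bus.
# The "losses" objective below is a crude surrogate for a load-flow-based
# evaluation; reactive loads, sizes, and cost figures are all invented.

rng = np.random.default_rng(1)
q_load = np.array([300.0, 450.0, 200.0, 600.0])      # kvar demand at candidate buses
sizes = np.array([0.0, 150.0, 300.0, 450.0, 600.0])  # available capacitor sizes
cost_per_kvar = 0.5

def fitness(chromosome):
    q_cap = sizes[chromosome]
    residual = q_load - q_cap                     # uncompensated reactive power
    losses = np.sum(residual**2) * 1e-3           # crude surrogate for kW losses
    capex = cost_per_kvar * np.sum(q_cap)
    return losses + capex                         # minimize total cost

pop = rng.integers(0, len(sizes), size=(40, len(q_load)))
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]        # truncation selection
    cut = rng.integers(1, q_load.size, size=20)
    children = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % 20][c:]))
                         for i, c in enumerate(cut)])       # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] = rng.integers(0, len(sizes), size=mutate.sum())
    pop = np.vstack((parents, children))

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("capacitor sizes (kvar):", sizes[best])
```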

Keywords: Genetic Algorithm (GA), capacitor placement, voltage profile, network losses, Simulated Annealing (SA), distribution network.

667 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer state-of-the-art performance in image classification and detection, the availability of training datasets is a bottleneck in this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detection of intersections in satellite images is evaluated.

Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.

666 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations. The first segments the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. The proposed methods worked well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods rated more than 96% of the curve segmentations as adequate or better. In addition, a comparison with similar approaches from the state of the art has been given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

665 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite in lung CT image analysis. Most of the conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on the patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
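
One way such a patch-based boundary weight could look is sketched below: the patches centred on two neighbouring pixels are compared with a mean squared difference and mapped to a weight through a Gaussian kernel; the window size and sigma are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Patch-based boundary weight between two neighboring pixels p and q:
# instead of comparing single intensities I(p) and I(q), compare the
# patches centred on p and q with a mean squared difference, then map
# the distance to a weight with a Gaussian kernel. Window size and sigma
# are illustrative choices, not the paper's exact formulation.

def patch(img, y, x, half=2):
    """Extract a (2*half+1)^2 patch, clipped at the image border."""
    y0, y1 = max(0, y - half), min(img.shape[0], y + half + 1)
    x0, x1 = max(0, x - half), min(img.shape[1], x + half + 1)
    return img[y0:y1, x0:x1]

def boundary_weight(img, p, q, half=2, sigma=30.0):
    pa, pb = patch(img, *p, half), patch(img, *q, half)
    h, w = min(pa.shape[0], pb.shape[0]), min(pa.shape[1], pb.shape[1])
    msd = np.mean((pa[:h, :w].astype(float) - pb[:h, :w].astype(float)) ** 2)
    return np.exp(-msd / (2.0 * sigma**2))        # high weight = similar patches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(64, 64))
    print("weight between (10,10) and (10,11):",
          boundary_weight(img, (10, 10), (10, 11)))
```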

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

664 An Exploration of Sense of Place as Informative for Spatial Planning Guidelines: A Case Study of the Vredefort Dome World Heritage Site, South Africa

Authors: Karen Puren, Ernst Drewes, Vera Roos

Abstract:

This paper explores the sense of place in the Vredefort Dome World Heritage Site, South Africa, as an essential input for the formulation of spatial planning proposals for the area. Intangible aspects such as personal and symbolic meanings of sites are currently not integrated in spatial planning in South Africa. This may have a detrimental effect on local inhabitants who have a long history with the site and have built up a strong place identity. Involving local inhabitants at an early stage of the planning process and incorporating their attitudes and opinions in future interventions in the area may also contribute to the acceptance of the legitimacy of future policy. An interdisciplinary and mixed-method research approach was followed in this study in order to identify possible ways to anchor spatial planning proposals in the identity of the place. In essence, the qualitative study revealed that inhabitants have a deep and personal relationship with and within the area, which contributes significantly to their sense of emotional security and self-identity. Results include a strong conservation-orientated attitude with regard to the natural rural character of the site, especially in the inner core.

Keywords: Place identity, Sense of Place, Spatial Planning, Vredefort Dome World Heritage Site.

663 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to developing a customized electrocardiogram beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We employ a three-stage technique for the detection of premature ventricular contraction (PVC) beats among normal beats and other arrhythmias, consisting of denoising, feature extraction and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing-interval feature. Next, a number of multilayer perceptron (MLP) neural networks with different topologies are designed. The performance of the different combination methods, as well as the efficiency of the whole system, is presented. Among them, stacked generalization, the proposed trainable combined neural network model, possesses the highest recognition rate of around 95%. This network therefore proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples of the different ECG beat types were extracted from the MIT-BIH arrhythmia database for the study.
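
A minimal stacked-generalization setup in the same spirit is sketched below, using scikit-learn with several MLP base learners of different topologies combined by a logistic-regression meta-learner; the synthetic 11-feature, two-class data merely stands in for the wavelet-denoised morphological and timing features and the multi-class beat labels described above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stacked generalization with MLP base learners of different topologies and
# a logistic-regression combiner. The synthetic 11-feature data stands in
# for the 10 morphological + 1 timing ECG features described above.

X, y = make_classification(n_samples=1000, n_features=11, n_informative=8,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

base_learners = [
    ("mlp_small", MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)),
    ("mlp_medium", MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=1000, random_state=1)),
    ("mlp_large", MLPClassifier(hidden_layer_sizes=(30, 15), max_iter=1000, random_state=2)),
]
ensemble = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(),
                              cv=5)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```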

Keywords: ECG beat classification, combining classifiers, premature ventricular contraction (PVC), multilayer perceptrons, wavelet transform.

662 Developing Leadership and Teamwork Skills of Pre-Service Teachers through Learning Camp

Authors: Sirimanee Banjong

Abstract:

This study aimed to 1) develop pre-service teachers' leadership skills through camp-based learning, and 2) develop pre-service teachers' teamwork skills through camp-based learning. An applied research methodology was used. The target group was derived from a purposive selection and involved 32 fourth-year students in the Early Childhood Education Program enrolled in a course entitled Seminar in Early Childhood Education provided during the second semester of the academic year 2013. The treatment was camp-based learning activities applying a PDCA process with four stages: 1) plan, 2) do, 3) check, and 4) act. Research instruments were a learning camp program, a camp-based learning management plan, a 5-level assessment form for leadership skills and a 5-level assessment form for teamwork skills. Data were analyzed using descriptive statistics. Results were: 1) pre-service teachers' leadership skills yielded a before-treatment average score of x̄ = 3.40 (S.D. = 0.62) and an after-treatment average score of x̄ = 4.29 (S.D. = 0.66); 2) pre-service teachers' teamwork skills yielded a before-treatment average score of x̄ = 3.31 (S.D. = 0.60) and an after-treatment average score of x̄ = 4.42 (S.D. = 0.66). Both differences were statistically significant at the .05 level. Thus, the pre-service teachers' leadership and teamwork skills were significantly improved through the camp-based learning approach.

Keywords: Learning camp, leadership skills, teamwork skills.

661 MPPT Operation for PV Grid-connected System using RBFNN and Fuzzy Classification

Authors: A. Chaouachi, R. M. Kamel, K. Nagasaka

Abstract:

This paper presents a novel methodology for Maximum Power Point Tracking (MPPT) of a grid-connected 20 kW photovoltaic (PV) system using a neuro-fuzzy network. The proposed method predicts the reference PV voltage guaranteeing optimal power transfer between the PV generator and the main utility grid. The neuro-fuzzy network is composed of a fuzzy rule-based classifier and three Radial Basis Function Neural Networks (RBFNN). The inputs of the network (irradiance and temperature) are classified before they are fed into the appropriate RBFNN for either the training or the estimation process, while the output is the reference voltage. The main advantage of the proposed methodology, compared to a conventional single neural-network-based approach, is its distinct generalization ability with regard to the nonlinear and dynamic behavior of a PV generator. In fact, the neuro-fuzzy network is a neural-network-based multi-model machine learning approach that defines a set of local models emulating the complex and non-linear behavior of a PV generator under a wide range of operating conditions. Simulation results under several rapid irradiance variations proved that the proposed MPPT method achieves the highest efficiency compared to a conventional single neural network.

Keywords: MPPT, neuro-fuzzy, RBFN, grid-connected, photovoltaic.

660 A Comprehensive Survey on RAT Selection Algorithms for Heterogeneous Networks

Authors: Abdallah AL Sabbagh, Robin Braun, Mehran Abolhasan

Abstract:

Due to the coexistence of different Radio Access Technologies (RATs), Next Generation Wireless Networks (NGWN) are predicted to be heterogeneous in nature. The coexistence of different RATs requires Common Radio Resource Management (CRRM) to support the provision of Quality of Service (QoS) and the efficient utilization of radio resources. RAT selection algorithms are part of the CRRM algorithms. Simply put, their role is to verify whether an incoming call can be accommodated in the heterogeneous wireless network and to decide which of the available RATs is most suitable for the needs of the incoming call before admitting it. The goal of a RAT selection algorithm is to guarantee the QoS requirements of all accepted calls while providing the most efficient utilization of the available radio resources. Conventional call admission control algorithms are designed for homogeneous wireless networks and do not provide a solution suited to the heterogeneous wireless networks that will constitute the NGWN. Therefore, there is a need to develop RAT selection algorithms for heterogeneous wireless networks. In this paper, we propose an approach for RAT selection that includes receiving different criteria, assessing them and making decisions, and then selecting the most suitable RAT for incoming calls. A comprehensive survey of different RAT selection algorithms for heterogeneous wireless networks is also presented.

Keywords: Heterogeneous Wireless Network, RAT selection algorithms, Next Generation Wireless Network (NGWN), Beyond 3G Network, Common Radio Resource Management (CRRM).

659 Transformations of Spatial Distributions of Bio-Polymers and Nanoparticles in Water Suspensions Induced by Resonance-Like Low Frequency Electrical Fields

Authors: A. A. Vasin, N. V. Klassen, A. M. Likhter

Abstract:

Water suspensions of inorganic (metal and oxide) and organic nano-objects (chitosan and collagen) were subjected to treatment with direct and alternating electrical fields. In addition to quasi-periodical spatial patterning, resonance-like behavior of the spatial distributions of these suspensions has been found at low frequencies of the alternating electrical field. These resonances are explained as the result of the creation of equilibrium states of groups of charged nano-objects with opposite signs of charge at interparticle distances where the forces of Coulomb attraction are compensated by the repulsion forces induced by the relatively negative polarization of the hydrated regions surrounding the nanoparticles with respect to pure water. The low frequencies of these resonances are explained by the comparatively large distances between the particles and their large masses with respect to the masses of atoms constituting molecules with high resonance frequencies. These new resonances open a new approach to detailed modeling and understanding of the mechanisms of the influence of electrical fields on the functioning of the internal organs of living organisms at the level of cells and neurons.

Keywords: Bio-polymers, chitosan, collagen, nanoparticles, coulomb attraction, polarization repulsion, periodical patterning, electrical low frequency resonances, transformations.

658 Face Recognition Using Double Dimension Reduction

Authors: M. A Anjum, M. Y. Javed, A. Basit

Abstract:

In this paper a new approach to face recognition is presented that achieves double dimension reduction, making the system computationally efficient with better recognition results. In pattern recognition techniques, the discriminative information of an image increases with resolution to a certain extent; consequently, face recognition results improve with increasing face image resolution and level off at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies, which represents the face adequately and provides the best recognition results, is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate at minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, including the ORL database, the Yale database and a color database. The proposed technique has performed much better compared to other techniques. The significance of the model is twofold: (1) dimension reduction up to an effective and suitable face image resolution, and (2) retention of the appropriate DCT coefficients to achieve the best recognition results under varying image poses, intensity and illumination levels.
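
A compact sketch of the two reduction steps is given below: decimate the face image, take a 2-D DCT, and keep a block of low-to-mid-frequency coefficients as the feature vector; the decimation factor and the number of retained coefficients are illustrative assumptions, and a real pipeline would add the preprocessing and a classifier.

```python
import numpy as np
from scipy.fft import dctn

# Double dimension reduction sketch: (1) decimate the face image, then
# (2) apply a 2-D DCT and keep a small block of low-to-mid-frequency
# coefficients as the feature vector. The decimation factor and the number
# of retained coefficients are illustrative choices.

def face_features(img, decimation=2, keep=12):
    small = img[::decimation, ::decimation].astype(float)   # step 1: decimation
    coeffs = dctn(small, norm="ortho")                       # step 2: 2-D DCT
    block = coeffs[:keep, :keep]                             # low/mid frequencies
    return block.flatten()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    face = rng.integers(0, 256, size=(112, 92))              # ORL-sized image
    feats = face_features(face)
    print("feature vector length:", feats.size)              # 12*12 = 144
```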

Keywords: Biometrics, DCT, Face Recognition, Feature extraction.

657 Applying Element Free Galerkin Method on Beam and Plate

Authors: Mahdad M’hamed, Belaidi Idir

Abstract:

This paper develops a meshless approach, called the Element Free Galerkin (EFG) method, which is based on the weak form of the governing partial differential equations and employs Moving Least Squares (MLS) interpolation to construct the meshless shape functions. The variational weak form is used in the EFG method, where the trial and test functions are approximated by the MLS approximation. Since the shape functions constructed by this discretization have the weight-function property based on the randomly distributed points, the essential boundary conditions can be implemented easily. The local weak form of the governing partial differential equations is obtained by the weighted residual method within a simple local quadrature domain. A spline function with high continuity is used as the weight function. The presently developed EFG method is a truly meshless method, as it does not require a mesh, either for the construction of the shape functions or for the integration of the local weak form. Several numerical examples of two-dimensional static structural analysis are presented to illustrate the performance of the present EFG method. They show that the EFG method is efficient to implement and highly accurate. The present method is used to analyze the static deflection of beams and of a plate with a hole.

Keywords: Numerical computation, element-free Galerkin, moving least squares, meshless methods.

656 MATLAB-based System for Centralized Monitoring and Self Restoration against Fiber Fault in FTTH

Authors: Mohammad Syuhaimi Ab-Rahman, Boonchuan Ng, Kasmiran Jumari

Abstract:

This paper presents a MATLAB-based system named Smart Access Network Testing, Analyzing and Database (SANTAD), designed for in-service transmission surveillance and self-restoration against fiber faults in a fiber-to-the-home (FTTH) access network. The developed program is installed with the optical line terminal (OLT) at the central office (CO) to monitor the status of the network and detect any fiber fault that occurs in the FTTH network downstream from the CO towards residential customer locations. SANTAD is interfaced with an optical time domain reflectometer (OTDR) to accumulate every network testing result and display it on a single computer screen for further analysis. The program identifies and presents the parameters of each optical fiber line, such as the line's status (working or non-working), the magnitude of attenuation at each point, the failure location, and other details as shown on the OTDR's screen. The failure status is delivered to field engineers for prompt action, while the failed line is diverted to a protection line to ensure that traffic flows continuously. This approach has a bright prospect of improving survivability and reliability as well as increasing the efficiency and monitoring capabilities of FTTH.

Keywords: MATLAB, SANTAD, in-service transmission surveillance, self restoration, fiber fault, FTTH

655 Application of Systems Engineering Tools and Methods to Improve Healthcare Delivery Inside the Emergency Department of a Mid-Size Hospital

Authors: Mohamed Elshal, Hazim El-Mounayri, Omar El-Mounayri

Abstract:

The emergency department (ED) can be considered a complex system of interacting entities: patients, human resources, software and hardware systems, interfaces, and other systems. This paper presents research on implementing a detailed Systems Engineering (SE) approach in a mid-size hospital in central Indiana. This methodology will be applied by "The Initiative for Product Lifecycle Innovation (IPLI)" institution at Indiana University to study and solve the crowding problem with the aim of increasing patient throughput and enhancing the treatment experience; therefore, the nature of the crowding problem needs to be investigated together with all the other problems that lead to it. The SE methods presented are workflow analysis and systems modeling, where SE tools such as Microsoft Visio are used to construct a group of system-level diagrams that describe the patient workflow, documentation and communication flow, data systems, human resources workflow and requirements, the leadership involved, and the integration between the different ED systems. Finally, the ultimate goal will be to manage the process through the implementation of an executable model using commercialized software tools, which will identify bottlenecks, improve documentation flow, and help make the process faster.

Keywords: Systems modeling, ED operation, workflow modeling, systems analysis.

654 Modeling and Analysis for Effective Capacity of a Cross-Layer Optimized Wireless Networks

Authors: Reham A. El-mayet, Hesham M. El-Badawy, Salwa H. Elramly

Abstract:

New generation mobile communication networks have the ability to support triple play. To this end, Orthogonal Frequency Division Multiplexing (OFDM) access techniques have been chosen to enlarge the system capability for high-data-rate networks. Many cross-layer modeling and optimization schemes for the Quality of Service (QoS) and capacity of downlink multiuser OFDM systems have been proposed. In this paper, Maximum Weighted Capacity (MWC) based resource allocation at the physical (PHY) layer is used. This resource allocation scheme provides much better QoS than previous resource allocation schemes, while maintaining the highest or nearly the highest capacity at similar complexity. In addition, Delay Satisfaction (DS) scheduling at the Medium Access Control (MAC) layer, which allows more than one connection to be served in each slot, is used. This scheduling technique is more efficient than conventional scheduling and is used to investigate both the number of users and the number of subcarriers against system capacity. The system is optimized for different operational environments: both outdoor and indoor deployment scenarios are investigated, as well as different channel models. In addition, the effective capacity approach [1] is used not only to provide QoS for different mobile users, but also to increase the total throughput of the wireless network.

Keywords: Cross-layer, effective capacity, LTE, OFDM, QoS, resource allocation, wireless networks.

653 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, based on improvements made to earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version by random initialization of solutions using an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk to improve local search, Greedy Selection to accelerate convergence to the optimized solution, and a "Study Nearby Strategy" to improve global search performance by avoiding trapping in local optima. It is further proposed to generate better solutions by a crossover operation. The proposed strategy shows superiority in terms of convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the superiority of the proposed algorithm over the other established algorithms. The algorithm is also capable of handling larger unit systems.
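
A stripped-down loop combining two of the ingredients named above, a Gaussian random walk and greedy selection, is sketched below on a convex six-generator dispatch cost with a quadratic demand penalty; the lambda-iteration initialization, neighbour strategy, crossover, prohibited operating zones and ramp limits of the full algorithm are omitted, and all cost data are invented.

```python
import numpy as np

# Stripped-down cuckoo-search-style loop: Gaussian random walks around the
# current nests plus greedy selection of improved solutions, applied to a
# convex 6-generator dispatch cost with a demand equality handled by a
# quadratic penalty. Coefficients and demand are invented placeholders.

rng = np.random.default_rng(0)
a = np.array([240., 200., 220., 200., 220., 190.])
b = np.array([7.0, 10.0, 8.5, 11.0, 10.5, 12.0])
c = np.array([0.0070, 0.0095, 0.0090, 0.0090, 0.0080, 0.0075])
p_min = np.array([100., 50., 80., 50., 50., 50.])
p_max = np.array([500., 200., 300., 150., 200., 120.])
demand = 1000.0

def cost(P):
    fuel = np.sum(a + b * P + c * P**2, axis=-1)
    penalty = 1e3 * (np.sum(P, axis=-1) - demand) ** 2
    return fuel + penalty

nests = rng.uniform(p_min, p_max, size=(25, 6))

for _ in range(500):
    step = 0.02 * (p_max - p_min)                 # Gaussian random walk
    trial = np.clip(nests + rng.normal(0.0, step, nests.shape), p_min, p_max)
    better = cost(trial) < cost(nests)            # greedy selection
    nests[better] = trial[better]

best = nests[np.argmin(cost(nests))]
print("dispatch (MW):", np.round(best, 1), "cost:", round(float(cost(best)), 1))
```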

Keywords: Economic dispatch, Gaussian selection operator, prohibited operating zones, ramp rate limits, upgraded cuckoo search.

652 Material Analysis for Temple Painting Conservation in Taiwan

Authors: Chen-Fu Wang, Lin-Ya Kung

Abstract:

For traditional painting materials, artisans used to combine pigments with different binders to create colors. As time went by, the materials used for painting evolved from natural to chemical materials. The vast variety of ingredients used in chemical materials has complicated restoration work and makes conservation more difficult. Conservation work also becomes harder when the materials cannot be easily identified; therefore, it is essential to take a more scientific approach to assist in conservation work. Painting materials are high-molecular-weight polymers, and their analysis is very complicated; contamination such as smoke and dirt can also interfere with the analysis of the material. Current methods for the compositional analysis of painting materials include Fourier transform infrared spectroscopy (FT-IR), mass spectrometry, Raman spectroscopy, and X-ray diffraction spectroscopy (XRD), each of which has its own limitations. In this study, FT-IR was used to analyze the components of the paint coating. We took the most commonly seen materials as samples and artificially aged them. The ageing information was then used as a database to examine the temple painting materials. By observing the FT-IR changes over time, we can tell that all of the painting materials are deteriorated by UV light, differing only in the speed of degradation. In the deterioration experiment, the acrylic resin resisted better than the others. After collecting the ageing information of the painting materials by FT-IR, we performed tests on the paintings of the temples. It was found that most artisans used tung oil as the painting material, while some other paintings used chemical materials. This method now works successfully for identifying the painting materials; however, it is destructive and costly. In the future, we will work on how to identify the painting materials more efficiently.

Keywords: Temple painting, painting material, conservation, FT-IR.

651 Evaluation of Deformable Boundary Condition Using Finite Element Method and Impact Test for Steel Tubes

Authors: Abed Ahmed, Mehrdad Asadi, Jennifer Martay

Abstract:

Stainless steel pipelines are crucial components for transportation and storage in the oil and gas industry. However, the rise of random attacks and vandalism on these pipes for their valuable contents has led to a need for more security and protection against incoming surface impacts. These surface impacts can lead to large global deformations of the pipe and place the pipe under strain, causing the eventual failure of the pipeline. Therefore, understanding how these surface impact loads affect the pipes is vital to improving the pipes' security and protection. In this study, experimental tests and finite element analysis (FEA) have been carried out on EN3B stainless steel specimens to study the impact behaviour. Low-velocity impact tests at 9 m/s with a 16 kg dome impactor were used to simulate high-momentum impact causing localised failure. FEA models with clamped and deformable boundaries were built to study the effect of the boundary conditions on the pipe's impact behaviour and impact resistance, using a combined experimental and FEA approach. Comparison of the experiments and FE simulations shows good correlation for the deformable boundaries, validating the robustness of the FE model for implementation in pipe models with complex anisotropic structure.

Keywords: Dynamic impact, deformable boundary conditions, finite element modeling (FEM), LS-DYNA, stainless steel pipe.

650 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector

Authors: H. Aldousari, T. Buchacher, N. M. Spyrou

Abstract:

Cerium-doped lanthanum bromide LaBr3:Ce (5%) crystals are considered to be one of the most advanced scintillator materials used in PET scanning, combining a high light yield, fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least in terms of the source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCe 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were acquired separately at distances of 5, 10, 15, and 20 cm. As a result of the change in the solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up, when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should be chosen to balance efficiency against pulse pile-up suppression, as otherwise pile-up corrections would be necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
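
The distance dependence discussed above is dominated by the solid angle subtended by the detector face; for an on-axis point source at distance d from a disc of radius r, a simple estimate is Ω = 2π(1 - d/√(d² + r²)) with geometric efficiency Ω/4π. The sketch below evaluates this for the 25 mm crystal at the four distances used, as an idealised estimate that ignores intrinsic efficiency and pile-up.

```python
import math

# Geometric efficiency of a disc detector for an on-axis point source:
# solid angle Omega = 2*pi*(1 - d / sqrt(d^2 + r^2)), efficiency = Omega/(4*pi).
# This idealised estimate ignores intrinsic efficiency and pile-up effects.

r = 2.5 / 2                          # detector face radius in cm (25 mm crystal)
for d in (5.0, 10.0, 15.0, 20.0):    # source-to-detector distances in cm
    omega = 2.0 * math.pi * (1.0 - d / math.sqrt(d**2 + r**2))
    print(f"d = {d:4.1f} cm  geometric efficiency = {omega / (4.0 * math.pi):.4f}")
```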

Keywords: BrilLanCe 380 LaBr3:Ce (5%), coincidence summing, GATE simulation, geometric efficiency.

649 Maximum Common Substructure Extraction in RNA Secondary Structures Using Clique Detection Approach

Authors: Shih-Yi Chao

Abstract:

The similarity comparison of RNA secondary structures is important in studying the functions of RNAs. In recent years, most existing tools have represented secondary structures by tree-based presentations and calculated similarity by tree alignment distance. In contrast to previous approaches, we propose a new method based on a maximum clique detection algorithm to extract the maximum common structural elements in the compared RNA secondary structures. A new graph-based similarity measurement and maximum common subgraph detection procedure for comparing pure RNA secondary structures is introduced. Given two RNA secondary structures, the proposed algorithm consists of a process to determine the score of the structural similarity, followed by a comparison of the vertex labels, the labelled edges, and the exact degree of each vertex. The proposed algorithm also consists of a process to extract the common structural elements between the compared secondary structures based on a maximum clique formulation of the problem. This graph-based model can also work with the NC-IUB code to perform pattern-based searching. Therefore, it can be used to identify functional RNA motifs in databases or to extract common substructures between complex RNA secondary structures. We have demonstrated the performance of the proposed algorithm with experimental results. It provides a new way of comparing RNA secondary structures. This tool is helpful to those who are interested in structural bioinformatics.
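
The reduction from maximum common subgraph to maximum clique mentioned above can be illustrated with a modular (compatibility) product graph: label-compatible vertex pairs become nodes, pairs whose edge relations agree in both graphs are connected, and the largest clique yields a maximum common subgraph. The sketch below does this with networkx's maximal-clique enumeration on two tiny labelled graphs that merely stand in for RNA structure graphs.

```python
import networkx as nx

# Maximum common (induced) substructure via clique detection on the modular
# product graph: nodes are label-compatible vertex pairs (u, v), and two
# pairs are connected when their edge relations agree in both graphs.
# The largest clique then gives a maximum common subgraph. The two tiny
# labelled graphs below merely stand in for RNA structure graphs.

def modular_product(g1, g2):
    prod = nx.Graph()
    prod.add_nodes_from((u, v) for u in g1 for v in g2
                        if g1.nodes[u]["label"] == g2.nodes[v]["label"])
    nodes = list(prod.nodes)
    for i, (u1, v1) in enumerate(nodes):
        for (u2, v2) in nodes[i + 1:]:
            if u1 != u2 and v1 != v2 and \
               g1.has_edge(u1, u2) == g2.has_edge(v1, v2):
                prod.add_edge((u1, v1), (u2, v2))
    return prod

g1 = nx.Graph()
g1.add_nodes_from([(1, {"label": "stem"}), (2, {"label": "loop"}), (3, {"label": "stem"})])
g1.add_edges_from([(1, 2), (2, 3)])

g2 = nx.Graph()
g2.add_nodes_from([("a", {"label": "stem"}), ("b", {"label": "loop"}), ("c", {"label": "bulge"})])
g2.add_edges_from([("a", "b"), ("b", "c")])

prod = modular_product(g1, g2)
best = max(nx.find_cliques(prod), key=len)    # largest maximal clique
print("matched vertex pairs:", best)
```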

Keywords: Clique detection, labeled vertices, RNA secondary structures, subgraph, similarity.

648 Seamless Multicast Handover in FMIPv6-Based Networks

Authors: Moneeb Gohar, Seok Joo Koh, Tae-Won Um, Hyun-Woo Lee

Abstract:

This paper proposes a fast tree join scheme to provide seamless multicast handover in mobile networks based on Fast Mobile IPv6 (FMIPv6). In the existing FMIPv6-based multicast handover scheme, either bi-directional tunnelling or remote subscription is employed with packet forwarding from the previous access router (AR) to the new AR. In general, the remote subscription approach is preferred to bi-directional tunnelling, since with remote subscription we can exploit an optimized multicast path from a multicast source to many mobile receivers. However, in the remote subscription scheme, if the tree joining operation takes a long time, the amount of data packets to be forwarded and buffered for multicast handover will increase, and the corresponding buffer may overflow, which results in severe packet losses. In order to reduce these costs associated with packet forwarding and buffering, this paper proposes a fast join to the multicast tree, in which the new AR joins the multicast tree as fast as possible, so that new multicast data packets can also arrive at the new AR directly, reducing the packet forwarding and buffering costs. Numerical analysis shows that the proposed scheme gives better performance than the existing FMIPv6-based multicast handover schemes in terms of multicast packet delivery costs.

Keywords: Mobile Multicast, FMIPv6, Seamless Handover, Fast Tree Join.

647 Recycled Plastic Fibers for Minimizing Plastic Shrinkage Cracking of Cement Based Mortar

Authors: B.S. Al-Tulaian, M. J. Al-Shannag, A.M. Al-Hozaimy

Abstract:

The development of new construction materials using recycled plastic is important to both the construction and the plastic recycling industries. Manufacturing fibers from industrial or post-consumer plastic waste is an attractive approach, with such benefits as enhanced concrete performance and a reduced need for landfilling. The main objective of this study is to investigate the effect of plastic fibers, obtained locally from recycled waste, on plastic shrinkage cracking of ordinary cement-based mortar. Parameters investigated include fiber length, ranging from 20 to 50 mm, and fiber volume fraction, ranging from 0% to 1.5% by volume. The test results showed significant improvement in the crack-arresting mechanism and a substantial reduction in the surface area of cracks for the mortar reinforced with recycled plastic fibers compared to plain mortar. Furthermore, the test results indicated a slight decrease in the compressive strength of mortar reinforced with different lengths and contents of recycled fibers compared to plain mortar. This study suggests that adding more than 1% of recycled plastic (RP) fibers to mortar can be used effectively for controlling plastic shrinkage cracking of cement-based mortar, and thus results in waste reduction and resource conservation.

 

Keywords: Mortar, plastic, shrinkage cracking, compressive strength, recycled plastic (RP) fibers.
