Search results for: transfer matrix approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6872

662 Effect of Progressive Type-I Right Censoring on Bayesian Statistical Inference of Simple Step–Stress Acceleration Life Testing Plan under Weibull Life Distribution

Authors: Saleem Z. Ramadan

Abstract:

This paper discusses the effects of using progressive Type-I right censoring on the design of simple step-stress accelerated life testing, using a Bayesian approach for Weibull life products under the assumption of a cumulative exposure model. The optimization criterion is to minimize the expected pre-posterior variance of the Pth percentile time to failure. The model variables are the stress changing time and the stress value for the first step. A comparison between conventional and progressive Type-I right censoring is provided. The results show that progressive Type-I right censoring reduces the cost of testing at the expense of test precision when the sample size is small. Moreover, using strong priors or a large sample size reduces the sensitivity of the test precision to the censoring proportion. Hence, progressive Type-I right censoring is recommended in these cases, as it reduces the cost of the test without substantially affecting its precision. The results also show that the choice of direct or indirect priors affects the precision of the test.
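As an aside for readers, the Pth percentile of a Weibull life distribution, the quantity whose pre-posterior variance is minimized above, follows directly from the shape and scale parameters. The sketch below is a generic illustration; the parameter values are hypothetical and not taken from the paper.

```python
import math

def weibull_percentile(p, shape, scale):
    """Time by which a fraction p of units fail for a Weibull(shape, scale) life distribution."""
    # Invert F(t) = 1 - exp(-(t/scale)**shape) at F(t) = p
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

# Hypothetical example: 10th percentile life for shape = 1.5, scale = 1000 hours
print(weibull_percentile(0.10, shape=1.5, scale=1000.0))
```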

Keywords: Reliability, Accelerated life testing, Cumulative exposure model, Bayesian estimation, Progressive Type-I censoring, Weibull distribution.

PDF Downloads: 2160
661 Indicators as Early Warning Signal Performance to Solve Underlying Safety Problem before They Emerge as Accident Risks

Authors: Benson Chizubem

Abstract:

Because of the severe hazards that substantially impact workers' lives and assets, the oil and gas industry has set a goal of zero occurrences or accidents in operations. Using leading indicators to measure and assess an organization's safety performance is a proactive approach to safety management; it also provides early warning signals for solving inherent safety issues before they lead to an accident in the study industry. The analysis of these indicators' performance was based on a questionnaire methodology. A total of 1000 questionnaires were distributed to workers, of which 327 were returned to the research team. The data collected were analysed to evaluate workers' perceptions of indicator performance. The analysis identified safety training, safety systems, safety supervision, safety rules and procedures, safety auditing, strategies and policies, management commitment, safety meetings and safety behaviour as potential leading indicators that are capable of measuring organizational safety performance and of providing early warning signals of weak safety areas in an operational environment. The findings provide safety researchers and industrial safety practitioners with helpful information for improving the existing safety monitoring process in the oil and gas industry, both locally and globally, as a proactive measure.

Keywords: Early warning, safety, accident risks, oil and gas industry.

PDF Downloads: 372
660 Real-Time Recognition of Dynamic Hand Postures on a Neuromorphic System

Authors: Qian Liu, Steve Furber

Abstract:

To explore how the brain may recognise objects in its general, accurate and energy-efficient manner, this paper proposes the use of a neuromorphic hardware system formed from a Dynamic Vision Sensor (DVS) silicon retina in concert with the SpiNNaker real-time Spiking Neural Network (SNN) simulator. As a first step in the exploration on this platform, a recognition system for dynamic hand postures is developed, enabling the study of the methods used in the visual pathways of the brain. Inspired by the behaviour of the primary visual cortex, Convolutional Neural Networks (CNNs) are modelled using both linear perceptrons and spiking Leaky Integrate-and-Fire (LIF) neurons. In this study's largest configuration using these approaches, a network of 74,210 neurons and 15,216,512 synapses is created and operated in real-time using 290 SpiNNaker processor cores in parallel, with 93.0% accuracy. A smaller network using only 1/10th of the resources is also created, again operating in real-time, and it is able to recognise the postures with an accuracy of around 86.4% - only 6.6% lower than the much larger system. The recognition rate of the smaller network developed on this neuromorphic system is sufficient for a successful hand posture recognition system, and demonstrates a much improved cost-to-performance trade-off in its approach.
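For readers unfamiliar with the neuron model named above, a leaky integrate-and-fire (LIF) update can be sketched as follows. This is a generic Euler-step illustration; the time constants, threshold and input currents are placeholder assumptions, not the parameters used on SpiNNaker.

```python
import numpy as np

def lif_step(v, i_syn, dt=1.0, tau_m=20.0, v_rest=-65.0, v_reset=-70.0, v_thresh=-50.0, r_m=10.0):
    """One Euler step of a leaky integrate-and-fire neuron population.
    Returns the updated membrane potentials and a spike flag per neuron."""
    dv = (-(v - v_rest) + r_m * i_syn) * (dt / tau_m)
    v = v + dv
    spiked = v >= v_thresh
    v = np.where(spiked, v_reset, v)  # reset the neurons that fired
    return v, spiked

v = np.full(4, -65.0)  # four example neurons starting at rest
for _ in range(100):
    v, spikes = lif_step(v, i_syn=np.array([0.0, 1.2, 1.6, 2.0]))
```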

Keywords: Spiking neural network (SNN), convolutional neural network (CNN), posture recognition, neuromorphic system.

PDF Downloads: 2053
659 Optimization of Kinematics for Birds and UAVs Using Evolutionary Algorithms

Authors: Mohamed Hamdaoui, Jean-Baptiste Mouret, Stephane Doncieux, Pierre Sagaut

Abstract:

The aim of this work is to present a multi-objective optimization method to find maximum-efficiency kinematics for a flapping-wing unmanned aerial vehicle. We restricted our study to rectangular wings with the same profile along the span and to harmonic dihedral motion. It is assumed that the bird-like aerial vehicle (whose span and surface area were fixed to 1 m and 0.15 m², respectively) is in horizontal, mechanically balanced motion at fixed speed. We used two flight physics models to describe the vehicle's aerodynamic performance, namely DeLaurier's model, which has been used in many studies dealing with flapping wings, and the model proposed by Dae-Kwan et al. Then, a constrained multi-objective optimization of the propulsive efficiency is performed using a recent evolutionary multi-objective algorithm called ε-MOEA. Firstly, we show that feasible solutions (i.e. solutions that fulfil the imposed constraints) can be obtained using Dae-Kwan et al.'s model. Secondly, we highlight that a single-objective optimization approach (the weighted sum method, for example) can also give optimal solutions as good as the multi-objective one, which nevertheless offers the advantage of directly generating the set of best trade-offs. Finally, we show that DeLaurier's model does not yield feasible solutions.
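The building block of Pareto-based evolutionary algorithms such as the ε-MOEA mentioned above is the dominance test between objective vectors. The sketch below is a generic illustration of that test and of extracting a non-dominated set; the trade-off points are invented, not results from the paper.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization of every objective)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

# Hypothetical trade-off points (e.g., negative propulsive efficiency vs. required power)
print(pareto_front([(0.6, 120.0), (0.5, 150.0), (0.7, 110.0), (0.8, 130.0)]))
```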

Keywords: Flight physics, evolutionary algorithm, optimization, Pareto surface.

PDF Downloads: 1646
658 Optimum Locations for Intercity Bus Terminals with the AHP Approach – Case Study of the City of Esfahan

Authors: Mehrdad Arabi, Ehsan Beheshtitabar, Bahador Ghadirifaraz, Behrooz Forjanizadeh

Abstract:

Interaction between humans, location and activity defines space. In the framework of these relations, space is a container for the current specifications of the relations between the three elements mentioned. Changes of land use, considered together with average performance range, urban regulations, societal requirements, etc., will provide welfare and comfort for citizens. From an engineering point of view, choosing a proper location for a specific civil activity requires evaluation of candidate locations from different perspectives. The desirable placement of municipal service elements in urban regions is one of the most important issues in urban planning. This research is applied in terms of its goal and descriptive-analytical in nature. Initially, existing terminals in Esfahan are surveyed, and then new locations are presented based on the evaluated criteria. In order to evaluate terminals against the considered factors, an AHP model is first used to estimate the weights of the different factors, and then existing and suggested locations are evaluated using Arc GIS software and the AHP model results. The results show that the existing bus terminals are located in fairly proper locations. Further results suggest new locations for terminals based on urban criteria.
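As background to the AHP weighting step described above, criterion weights are commonly derived from a pairwise comparison matrix via its principal eigenvector. The sketch below is a generic illustration; the comparison matrix and the three siting criteria are hypothetical, not those used in the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority vector of an AHP pairwise comparison matrix (principal eigenvector, normalized)."""
    vals, vecs = np.linalg.eig(pairwise)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    return principal / principal.sum()

# Hypothetical comparison of three siting criteria: access, land cost, traffic impact
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_weights(A))  # weights summing to 1
```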

Keywords: Arc GIS, Esfahan city, Optimum locations, Terminals.

PDF Downloads: 2504
657 Addressing Scalability Issues of Named Entity Recognition Using Multi-Class Support Vector Machines

Authors: Mona Soliman Habib

Abstract:

This paper explores the scalability issues associated with solving the Named Entity Recognition (NER) problem using Support Vector Machines (SVM) and high-dimensional features. The performance results of a set of experiments conducted using binary and multi-class SVM with increasing training data sizes are examined. The NER domain chosen for these experiments is biomedical publications, selected for its importance and inherent challenges. A simple machine learning approach is used that eliminates prior language knowledge such as part-of-speech or noun phrase tagging, thereby allowing applicability across languages. No domain-specific knowledge is included. The accuracy measures achieved are comparable to those obtained using more complex approaches, which motivates investigating ways to improve the scalability of multi-class SVM in order to make the solution more practical and usable. Improving the training time of multi-class SVM would make support vector machines a more viable and practical machine learning solution for real-world problems with large datasets. An initial prototype greatly improves the training time at the expense of memory requirements.
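A minimal sketch of the kind of setup discussed above, a linear multi-class SVM over high-dimensional sparse features with no linguistic preprocessing, is shown below using scikit-learn. The toy tokens and BIO-style labels are invented for illustration and do not reflect the paper's corpus or feature design.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy token-in-context examples and entity labels (invented)
tokens = ["interleukin - 2 gene", "expression of p53", "the patient was", "NF - kappa B activation"]
labels = ["B-DNA", "B-protein", "O", "B-protein"]

# Hashing keeps the feature space high-dimensional while bounding memory use
model = make_pipeline(HashingVectorizer(n_features=2**18), LinearSVC())
model.fit(tokens, labels)
print(model.predict(["expression of p53"]))
```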

Keywords: Named entity recognition, support vector machines, language independence, bioinformatics.

PDF Downloads: 1690
656 Turbulent Mixing and its Effects on Thermal Fatigue in Nuclear Reactors

Authors: Eggertson, E. C., Kapulla, R., Fokken, J., Prasser, H. M.

Abstract:

The turbulent mixing of coolant streams of different temperature and density can cause severe temperature fluctuations in piping systems in nuclear reactors. In certain periodic contraction cycles these conditions lead to thermal fatigue. The resulting aging effect prompts investigation into how the mixing of flows over a sharp temperature/density interface evolves. To study the fundamental turbulent mixing phenomena in the presence of density gradients, isokinetic (shear-free) mixing experiments are performed in a square channel with Reynolds numbers ranging from 2,500 to 60,000. Sucrose is used to create the density difference. A wire mesh sensor (WMS) is used to determine the concentration map of the flow in the cross section. The mean interface width as a function of velocity, density difference and distance from the mixing point is analyzed using traditional methods chosen for the purposes of atmospheric/oceanic stratification analyses. A definition of the mixing layer thickness more appropriate to thermal fatigue and based on mixedness is devised. This definition shows that thermal fatigue risk assessed using simple mixing layer growth can be misleading, and why an approach that separates the effects of large-scale (turbulent) and small-scale (molecular) mixing is necessary.

Keywords: Concentration measurements, mixedness, stably-stratified turbulent isokinetic mixing layer, wire mesh sensor.

PDF Downloads: 2245
655 A Simulated Environment Approach to Investigate the Effect of Adversarial Perturbations on Traffic Sign for Automotive Software-in-Loop Testing

Authors: Sunil Patel, Pallab Maji

Abstract:

To study the effect of adversarial attacks, the environment must be controlled. Autonomous driving mainly includes five phases: sense, perceive, map, plan, and drive. Autonomous vehicles sense their surroundings with the help of different sensors such as cameras, radars, and lidars. Deep learning techniques are considered black boxes and have been found to be vulnerable to adversarial attacks. In this research, we study the effect of various known adversarial attacks with the help of an Unreal Engine-based, high-fidelity, real-time ray-traced simulated environment. The goal of this experiment is to find out whether adversarial attacks work on moving vehicles and whether an unknown network can be targeted. We discovered that existing black-box and white-box attacks have varying effects on different traffic signs. We observed that attacks that impair detection in static scenarios do not have the same effect on moving vehicles. Some adversarial attacks with hardly noticeable perturbations entirely blocked the recognition of certain traffic signs. We also observed that daylight conditions have a substantial impact on the model's performance when simulating the interplay of light on traffic signs. Our findings closely resemble outcomes encountered in the real world.
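One of the simplest white-box attacks of the kind examined above is the fast gradient sign method (FGSM), which perturbs the input along the sign of the loss gradient. The PyTorch sketch below is a generic illustration; the classifier, label index and epsilon are placeholders, not the study's setup.

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.01):
    """Fast Gradient Sign Method: nudge the input in the direction that raises the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Hypothetical usage with a traffic-sign classifier and one RGB image tensor:
# adv_image = fgsm(sign_classifier, image.unsqueeze(0), torch.tensor([stop_sign_idx]))
```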

Keywords: Adversarial attack simulation, computer simulation, ray-traced environment, realistic simulation, unreal engine.

PDF Downloads: 433
654 Application of Acinetobacter sp. KKU44 for Cellulase Production from Agricultural Waste

Authors: Surasak Siripornadulsil, Nutt Poomai, Wilailak Siripornadulsil

Abstract:

Due to high ethanol demand, approaches for effective ethanol production are important and have been developed rapidly worldwide. Several agricultural wastes are highly abundant in cellulose, and effective cellulase enzymes exist widely among microorganisms. Accordingly, cellulose degradation using microbial cellulase to produce a low-cost substrate for ethanol production has attracted increasing attention. In this study, a cellulase-producing bacterial strain was isolated from rice straw and identified by 16S rDNA sequence analysis as Acinetobacter sp. KKU44. This strain is able to grow and exhibit cellulase activity. The optimal temperature for its growth and cellulase production is 37°C. The optimal temperature for the bacterial cellulase activity is 60°C. The cellulase enzyme from Acinetobacter sp. KKU44 is a heat-tolerant enzyme. A 36 h bacterial culture showed the highest cellulase activity of 120 U/mL when grown in LB medium containing 2% (w/v). The capability of Acinetobacter sp. KKU44 to grow on cellulosic agricultural wastes as a sole carbon source while exhibiting high cellulase activity at high temperature suggests that this strain could be developed further as a cellulose-degrading strain for the production of low-cost substrates for ethanol production.

Keywords: Acinetobacter sp. KKU44, bagasse, cellulase enzyme, rice husk.

PDF Downloads: 2684
653 Teaching Math to Preschool Children with Autism

Authors: Hui Fang Huang Su, Jia Borror

Abstract:

This study compared two different interventions for math instruction among preschoolers with autism spectrum disorder (ASD). The first intervention, a combination of discrete trial teaching and Strategies for Teaching Based on Autism Research (STAR), was the regular math curriculum utilized at the preschool. The second, an activity-based, naturalistic intervention, was Project MIND, also known as Math Is Not Difficult. The curricular interventions were randomly assigned to four preschool classrooms with ASD students; Project MIND was implemented over three months, and measurements gathered during the same three months were used for the STAR intervention. A quasi-experimental, pre-test/post-test design was selected to compare which intervention was the most effective in increasing mathematical knowledge and skills among preschoolers with ASD. Standardized pre- and post-test instruments included the Bracken Basic Concept Scale-3 Receptive, the Applied Problems and Calculation subtests of the Woodcock-Johnson IV Tests of Achievement, and the TEMA-3: Test of Early Mathematics Ability – Third Edition. The STAR assessment is typically administered to all preschoolers at the study site three times per year, and those results were used in this study. We anticipated that the implementation of these two approaches would lead to improvement in the mathematical knowledge and skills of children with ASD, but it remains essential to see whether a behavioral or a naturalistic teaching approach leads to more significant results.

Keywords: Autism, mathematics, preschool, special education.

PDF Downloads: 888
652 Applying the Extreme-Based Teaching Model in Post-Secondary Online Classroom Setting: A Field Experiment

Authors: Leon Pan

Abstract:

The first programming course within post-secondary education has long been recognized as a challenging endeavor for educators and students alike. Historically, these courses have exhibited high failure rates and a notable number of dropouts. Instructors often lament students' lack of effort on their coursework, and students often express frustration that the teaching methods employed are not effective. Drawing inspiration from the successful principles of Extreme Programming, this study introduces an approach, the Extreme-based teaching model, aimed at enhancing the teaching of introductory programming courses. To empirically determine the effectiveness of the model, a comparison was made between a section taught using the Extreme-based model and another utilizing traditional teaching methods. Notably, the Extreme-based teaching class required students to work collaboratively on projects, while also demanding continuous assessment and performance enhancement within groups. This paper details the application of the Extreme-based model within the post-secondary online classroom context and presents results that emphasize its effectiveness in advancing the teaching and learning experience. The Extreme-based model led to a significant increase of 13.46 points in the weighted total average and a commendable 10% reduction in the failure rate.

Keywords: Extreme-based teaching model, innovative pedagogical methods, project-based learning, team-based learning.

PDF Downloads: 130
651 Evolution of the Hydrogen Atom: An Alternative to the Big Bang Theory

Authors: Ghassan H. Halasa

Abstract:

Elementary particles are created in pairs of equal and opposite momenta in a reference frame moving at the speed of light. The speed-of-light reference frame is viewed as a point in space by an observer at rest. This point in space is the bang location of the big bang theory. The bang in the big bang theory is no more than a sustained flow of pairs of positive and negative elementary particles. Electrons and negatively charged elementary particles are ejected from this point in space at velocities faster than light, while protons and positively charged particles obtain velocities lower than light. Subsonic masses are found to have real and positive charge, while supersonic masses are found to be negative and imaginary, indicating that the two masses are of different entities. The electron's supersonic speed, as viewed by the observer at rest, was calculated and found to be less than the speed of light and slightly higher than the electron speed in Bohr's orbit. The temperature of the newly formed hydrogen gas was found to be in agreement with temperatures found on newly formed stars. The expansion of the universe was also found to be in agreement. Elementary particles with partial mass and charge, and particles with momentum only, are explained in the context of this theoretical approach.

Keywords: Evolution of matter, multidimensional spaces, relativity, Big Bang theory.

PDF Downloads: 1635
650 Alternative Methods to Rank the Impact of Object Oriented Metrics in Fault Prediction Modeling using Neural Networks

Authors: Kamaldeep Kaur, Arvinder Kaur, Ruchika Malhotra

Abstract:

The aim of this paper is to rank the impact of Object Oriented (OO) metrics in fault prediction modeling using Artificial Neural Networks (ANNs). Past studies on the empirical validation of object oriented metrics as fault predictors using ANNs have focused on the predictive quality of neural networks versus standard statistical techniques. In this empirical study we turn our attention to the capability of ANNs to rank the impact of these explanatory metrics on fault proneness. In the ANN data analysis approach, there is no clear method of ranking the impact of individual metrics. Five ANN-based techniques that rank object oriented metrics in predicting the fault proneness of classes are studied: i) the overall connection weights method, ii) Garson's method, iii) the partial derivatives method, iv) the input perturbation method, and v) the classical stepwise method. We develop and evaluate different prediction models based on the rankings of the metrics produced by the individual techniques. The models based on the overall connection weights and partial derivatives methods were found to be the most accurate.
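A minimal sketch of the connection-weights idea referred to above, for a single-hidden-layer MLP, is shown below: each input's importance is taken from the magnitudes of the products of its input-to-hidden weights with the hidden-to-output weights. The weight matrices here are random placeholders rather than fitted values, so the sketch illustrates the ranking mechanics only.

```python
import numpy as np

def connection_weight_importance(w_input_hidden, w_hidden_output):
    """Rank inputs by summed |input-hidden weight * hidden-output weight| products."""
    # w_input_hidden: (n_inputs, n_hidden); w_hidden_output: (n_hidden,)
    contributions = w_input_hidden * w_hidden_output  # broadcast over hidden units
    return np.abs(contributions).sum(axis=1)

rng = np.random.default_rng(0)
w_ih = rng.normal(size=(6, 4))   # e.g., six OO metrics feeding four hidden units
w_ho = rng.normal(size=4)
scores = connection_weight_importance(w_ih, w_ho)
print(np.argsort(scores)[::-1])  # metric indices, most to least influential
```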

Keywords: Artificial Neural Networks (ANNs), backpropagation, fault prediction modeling.

PDF Downloads: 1757
649 The Investigation of Green Roof and White Roof Cooling Potential on Single Storey Residential Building in the Malaysian Climate

Authors: Asmat Ismail, Muna Hanim Abdul Samad, Abdul Malek Abdul Rahman

Abstract:

The phenomenon of global warming or climate change has led to many environmental issues, including higher atmospheric temperatures, intense precipitation, increased greenhouse gas emissions and increased indoor discomfort. Studies have shown that bringing nature to the roof, such as constructing a green roof, or implementing a highly reflective roof may have a positive impact in mitigating the effects of global warming and in increasing the thermal comfort sensation inside buildings. However, no study has compared both types of passive roof treatment in Malaysia with a view to increasing thermal comfort in buildings. Therefore, this study investigates the effect of a green roof and a white-painted roof as passive roof treatments for improving the indoor comfort of Malaysian homes. This study uses an experimental approach in which temperature measurements are conducted on a case study building. The measurements of the outdoor and indoor environments were conducted on a flat roof with two different types of roof treatment, namely a green roof and a white roof. Measurements on the existing black bare roof were also conducted to act as a control for this study.

Keywords: Global warming, green roof, white painted roof, indoor temperature reduction.

PDF Downloads: 2741
648 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method but is computationally intensive. Filtering earthquake strong ground motions by applying a wavelet transform is an approach towards reducing computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since recorded earthquake strong ground motion is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out the non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although the approximation then induces larger errors. In this paper, the strong ground motion is filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sample shear and moment frames is implemented. The error associated with each wavelet is computed by comparing the dynamic response of the sample structures with the exact responses. Exact responses are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
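A minimal sketch of the filtering step described above, using the PyWavelets package to discard the finest-scale (highest-frequency) detail coefficients of a record and reconstruct the signal, is given below. The record is synthetic and the wavelet choice is an assumption, not the paper's selection.

```python
import numpy as np
import pywt

def filter_ground_motion(accel, wavelet="db4", drop_levels=1):
    """Discrete wavelet filtering: zero the finest detail coefficients, then reconstruct."""
    coeffs = pywt.wavedec(accel, wavelet)
    for i in range(1, drop_levels + 1):
        coeffs[-i] = np.zeros_like(coeffs[-i])  # remove highest-frequency content
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0.0, 40.0, 2000)
record = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.sin(2 * np.pi * 15.0 * t)  # synthetic motion
filtered = filter_ground_motion(record)
```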

Keywords: Wavelet transform, computational error, computational duration, strong ground motion data.

PDF Downloads: 1373
647 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, which is expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression of this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates that the common quantum attacks (measure-and-resend, intercept-and-resend, probe, and entanglement-swapping attacks) induce, the security evaluation for a QSC protocol is proposed as the minimum taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
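A plain-arithmetic sketch of the evaluation structure described above is shown below: security taken as the minimum over attack-induced error rates, and optimality as the average of the three parameters. All numerical scores are invented for illustration and the normalization to [0, 1] is an assumption, not the paper's worked example.

```python
def security(error_rates):
    """Security score taken as the minimum over the error rates induced by the attacks."""
    return min(error_rates.values())

def optimality(efficiency, practicality, error_rates):
    """Average of the three main QSC protocol parameters."""
    return (efficiency + security(error_rates) + practicality) / 3.0

# Hypothetical protocol scores, all assumed normalized to [0, 1]
attacks = {"measure-resend": 0.25, "intercept-resend": 0.25, "probe": 0.17, "entanglement-swap": 0.25}
print(optimality(efficiency=0.5, practicality=0.75, error_rates=attacks))
```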

Keywords: Quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality.

PDF Downloads: 972
646 Assessing the Effect of Grid Connection of Large-Scale Wind Farms on Power System Small-Signal Angular Stability

Authors: Wenjuan Du, Jingtian Bi, Tong Wang, Haifeng Wang

Abstract:

Grid connection of a large-scale wind farm affects power system small-signal angular stability in two ways. Firstly, connection of the wind farm changes the load flow and configuration of the power system. Secondly, the wind farm introduces dynamic interaction with the synchronous generators (SGs) in the power system. This paper proposes a method to assess these two aspects of the effect of a wind farm on power system small-signal angular stability. The effect of the change of load flow/system configuration brought about by the wind farm can be examined separately by replacing the wind farm with a constant power source, and the effect of the dynamic interaction of the wind farm with the SGs can then be computed individually. Thus, a clearer picture of, and better insight into, power system small-signal angular stability as affected by the grid connection of a large-scale wind farm is provided. In the paper, an example power system with a grid-connected wind farm is presented to demonstrate the proposed approach.
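As background to the modal computations implied above, electromechanical oscillation modes and their damping ratios follow from the eigenvalues of the linearized system state matrix. The sketch below is a generic illustration on a toy two-state system, not a model of the example power system in the paper.

```python
import numpy as np

def modal_summary(A):
    """Frequencies (Hz) and damping ratios of the oscillatory modes of state matrix A."""
    modes = []
    for lam in np.linalg.eigvals(A):
        if lam.imag > 0:                       # one entry per complex-conjugate pair
            freq = lam.imag / (2 * np.pi)
            zeta = -lam.real / abs(lam)
            modes.append((freq, zeta))
    return modes

A = np.array([[0.0, 1.0],
              [-50.0, -1.0]])                  # toy swing-equation-like system
print(modal_summary(A))                        # roughly a 1.1 Hz mode and its damping ratio
```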

Keywords: Power system small-signal angular stability, power system low-frequency oscillations, electromechanical oscillation modes, wind farms, doubly fed induction generator (DFIG).

PDF Downloads: 1819
645 Enhanced Particle Swarm Optimization Approach for Solving the Non-Convex Optimal Power Flow

Authors: M. R. AlRashidi, M. F. AlHajri, M. E. El-Hawary

Abstract:

An enhanced particle swarm optimization (PSO) algorithm is presented in this work to solve the non-convex OPF problem, which has both discrete and continuous optimization variables. The objective functions considered are the conventional quadratic function and the augmented quadratic function. The latter model presents non-differentiable and non-convex regions that challenge most gradient-based optimization algorithms. The variables to be optimized are the generator real power outputs and voltage magnitudes, discrete transformer tap settings, and discrete reactive power injections due to capacitor banks. The equality constraints taken into account are the power flow equations, while the inequality constraints are the limits on the real and reactive power of the generators, the voltage magnitude at each bus, the transformer tap settings, and the capacitor bank reactive power injections. The proposed algorithm combines PSO with the Newton-Raphson algorithm to minimize the fuel cost function. The IEEE 30-bus system with six generating units is used to test the proposed algorithm. Several cases were investigated to test and validate the consistency of detecting an optimal or near-optimal solution for each objective. Results are compared to solutions obtained using sequential quadratic programming and genetic algorithms.
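A minimal sketch of the standard PSO velocity and position updates underlying the approach above is given below; it omits the Newton-Raphson power-flow coupling, the discrete-variable handling and the OPF constraints, and the quadratic test function merely stands in for the fuel cost.

```python
import numpy as np

def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Basic particle swarm optimization of a continuous cost function."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                   # position update
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

print(pso(lambda p: float(np.sum(p**2)), dim=3))   # toy quadratic cost in place of fuel cost
```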

Keywords: Particle Swarm Optimization, Optimal Power Flow, Economic Dispatch.

PDF Downloads: 2368
644 Supply Chain Resilience Triangle: The Study and Development of a Framework

Authors: M. Bevilacqua, F. E. Ciarapica, G. Marcucci

Abstract:

Supply Chain Resilience has been studied broadly during the last decade, with research focusing on many aspects of Supply Chain performance. Consequently, different definitions of Supply Chain Resilience have been developed by the research community, drawing inspiration also from other fields of study such as ecology, sociology, psychology and economics. The definitions developed so far in the extant literature are therefore very heterogeneous, and many authors have pointed out a lack of consensus in this field of analysis. The aim of this research is to find common points between these definitions through the development of a framework of study: the Resilience Triangle. The Resilience Triangle is a tool developed in the field of civil engineering with the objective of modeling the loss of resilience of a given structure during and after the occurrence of a disruption such as an earthquake. The Resilience Triangle is a simple yet powerful tool: in our opinion, it can summarize all the features that authors have captured in the Supply Chain Resilience definitions over the years. This research intends to recapitulate all these heterogeneities in Supply Chain Resilience research within this framework. After collecting a large number of Supply Chain Resilience definitions from the extant literature, the methodology provides a taxonomy step for collecting and analyzing the data gathered. The next step compares the data obtained with the plot of a disruption profile, in order to contextualize the Resilience Triangle in the Supply Chain context. The tool and the results developed in this research will lay the foundation for future Supply Chain Resilience modeling and measurement work.
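A minimal numeric sketch of the resilience triangle idea described above is shown below: the loss of resilience is approximated as the area between full performance and the measured performance curve over the disruption and recovery window. The time axis, performance samples and target level are invented for illustration and are not taken from the paper.

```python
import numpy as np

def resilience_loss(time, performance, target=100.0):
    """Area between the target performance level and the performance curve
    (the 'resilience triangle') over the disruption and recovery period."""
    gap = target - performance
    dt = np.diff(time)
    return float(np.sum(dt * (gap[:-1] + gap[1:]) / 2.0))   # trapezoidal rule

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])            # e.g., weeks
q = np.array([100.0, 60.0, 65.0, 75.0, 85.0, 95.0, 100.0])   # % of normal supply chain output
print(resilience_loss(t, q))
```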

Keywords: Supply chain resilience, resilience definition, supply chain resilience triangle.

PDF Downloads: 2708
643 Generalized Vortex Lattice Method for Predicting Characteristics of Wings with Flap and Aileron Deflection

Authors: Mondher Yahyaoui

Abstract:

A generalized vortex lattice method for complex lifting surfaces with flap and aileron deflection is formulated. The method is not restricted by the linearized theory assumption and accounts for all standard geometric lifting surface parameters: camber, taper, sweep, washout and dihedral, in addition to flap and aileron deflection. Thickness is not accounted for, since the physical lifting body is replaced by a lattice of panels located on the mean camber surface. This panel lattice setup and the treatment of different wake geometries are what distinguish the present work from the overwhelming majority of previous solutions based on the vortex lattice method. A MATLAB code implementing the proposed formulation is developed and validated by comparing our results to existing experimental and numerical ones, and good agreement is demonstrated. It is then used to study the accuracy of the widely used classical vortex lattice method. It is shown that the classical approach gives good agreement in the clean configuration but is off by as much as 30% when a flap or aileron deflection of 30° is imposed. This discrepancy is mainly due to the linearized theory assumption associated with the conventional method. A comparison of the effect of four different wake geometries on the values of the aerodynamic coefficients was also carried out, and it is found that the choice of wake shape has very little effect on the results.

Keywords: Aileron deflection, camber-surface-bound vortices, classical VLM, Generalized VLM, flap deflection.

PDF Downloads: 5056
642 A Case Study to Observe How Students’ Perception of the Possibility of Success Impacts Their Performance in Summative Exams

Authors: Rochelle Elva

Abstract:

Faculty in Higher Education today are faced with the challenge of convincing their students of the importance of mastering skills through learning. This is because most students often have a single motivation: to get high grades. If it appears that this goal will not be met, they lose their motivation and their academic efforts wane. This is true even for students in the competitive fields of STEM, including Computer Science majors. As educators, we have to understand our students and leverage what motivates them to achieve our learning outcomes. This paper presents a case study that utilizes cognitive psychology's Expectancy-Value Theory and Motivation Theory to investigate the effect of sustained expectancy for success on students' learning outcomes. In our case study, we explore how students' motivation and persistence in their academic efforts are impacted by providing them with an unexpected path to success that continues to the end of the semester. The approach was tested in an undergraduate computer science course with n = 56. The results of the study indicate that, when presented with the real possibility of success despite existing low grades, both low- and high-scoring students persisted in their efforts to improve their performance. Their final grades were on average one place higher on the +/- letter grade scale, with some students scoring as high as three places above their predicted grade.

Keywords: Expectancy for success and persistence, motivation and performance, computer science education, motivation and performance in computer science.

PDF Downloads: 298
641 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model

Authors: Bikramaditya Das, Susmita Das

Abstract:

The suppression of interference in time-domain equalizers is studied for a high data rate impulse radio (IR) ultra wideband (UWB) communication system. Narrowband systems may cause interference with UWB devices, since UWB has very low transmission power and large bandwidth. The SRake receiver improves system performance by equalizing signals from different paths, which enables the use of SRake receiver techniques in IR-UWB systems. However, the Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time-domain equalizer is proposed to overcome this by taking into account both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. The study of non-line-of-sight indoor channel models (both CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays a more significant role in suppressing interference.
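As background to the hybrid receiver discussed above, the MMSE tap computation can be sketched in its generic training-based form, w = R⁻¹p, where R is the sample autocorrelation matrix of the received vectors and p their cross-correlation with the desired symbols. The synthetic channel and signals below are placeholders, not the IEEE 802.15.3a channel models.

```python
import numpy as np

def mmse_taps(received, desired, n_taps=5):
    """Wiener/MMSE equalizer taps from training data: w = R^-1 p."""
    X = np.array([received[i:i + n_taps] for i in range(len(desired) - n_taps)])
    d = desired[n_taps // 2: n_taps // 2 + len(X)]   # roughly align desired symbols
    R = X.T @ X / len(X)                              # sample autocorrelation matrix
    p = X.T @ d / len(X)                              # cross-correlation vector
    return np.linalg.solve(R, p)

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=400)
channel = np.array([0.8, 0.4, 0.2])                   # toy multipath response
rx = np.convolve(symbols, channel, mode="same") + 0.3 * rng.standard_normal(400)
w = mmse_taps(rx, symbols)
```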

Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.

PDF Downloads: 1691
640 The Experiences of Coronary Heart Disease Patients: Biopsychosocial Perspective

Authors: Christopher C. Anyadubalu

Abstract:

The biological, psychological and social experiences, and the perceptions of healthcare services, of patients medically diagnosed with coronary heart disease were investigated using a sample of 10 participants whose responses to in-depth interview questions were analyzed based on inter- and intra-case analyses. The results revealed that advancing age, single status, divorce and/or death of a spouse, and the issue of single parenting negatively impacted patients' biopsychosocial experiences. The patients' experiences of physical signs and symptoms, anxiety and depression, past serious medical conditions, use of self-prescribed medications, family history of poor mental or physical health, nutritional problems and insufficient physical activity heightened their risk of coronary attack. A collectivist culture served as a significant source of relief for the patients. Patients' temperament, experience of different chronic life stresses and challenges, mood alteration, regular drinking, smoking and gambling, and family/social impairments compounded their health situation. Patients were satisfied with the biomedical services rendered by the healthcare personnel, whereas their psychological and social needs were not attended to. An effective procedural treatment model, a holistic and multidimensional approach to the treatment of heart disease patients, is proposed.

Keywords: Biopsychosocial, Coronary Heart Disease, Experience, Patients, Perception, Perspective.

PDF Downloads: 2619
639 The Optimal Placement of Capacitor in Order to Reduce Losses and the Profile of Distribution Network Voltage with GA, SA

Authors: Limouzade E., Joorabian M.

Abstract:

Most of the losses in a power system arise in the distribution sector, which has therefore always received attention. Among the important factors that contribute to increased losses in the distribution system is the existence of reactive power flows. The most common way to compensate reactive power in the system is the use of parallel (shunt) capacitors. In addition to reducing the losses, the advantages of capacitor placement include the release of network capacity at peak load and improvement of the voltage profile. The points that should be considered in capacitor placement are the optimal locations and sizes of the capacitors, in order to maximize the advantages of capacitor placement. In this paper, a new technique is offered for the placement and sizing of fixed capacitors in a radial distribution network on the basis of the Genetic Algorithm (GA). The existing optimization methods for capacitor placement mostly reduce the losses and improve the voltage profile simultaneously, but cost and load changes have not been considered as influences on the objective function. In this article, a holistic approach is taken to the optimal solution of this problem, which includes all the parameters of the distribution network: price, phase voltage and load changes. A vast search over all the possible solutions is therefore required, so we use the Genetic Algorithm (GA) as a powerful method for this optimal search.

Keywords: Genetic Algorithm (GA), capacitor placement, voltage profile, network losses, Simulated Annealing (SA), distribution network.

PDF Downloads: 1536
638 Using Satellite Images Datasets for Road Intersection Detection in Route Planning

Authors: Fatma El-zahraa El-taher, Ayman Taha, Jane Courtney, Susan Mckeever

Abstract:

Understanding road networks plays an important role in navigation applications such as self-driving vehicles and route planning for individual journeys. Intersections of roads are essential components of road networks. Understanding the features of an intersection, from a simple T-junction to larger multi-road junctions, is critical to decisions such as crossing roads or selecting the safest routes. The identification and profiling of intersections from satellite images is a challenging task. While deep learning approaches offer the state of the art in image classification and detection, the availability of training datasets is a bottleneck for this approach. In this paper, a labelled satellite image dataset for the intersection recognition problem is presented. It consists of 14,692 satellite images of Washington DC, USA. To support other users of the dataset, an automated download and labelling script is provided for dataset replication. The challenges of construction and fine-grained feature labelling of a satellite image dataset are examined, including the issue of how to address features that are spread across multiple images. Finally, the accuracy of detection of intersections in satellite images is evaluated.

Keywords: Satellite images, remote sensing images, data acquisition, autonomous vehicles, robot navigation, route planning, road intersections.

PDF Downloads: 757
637 Pectoral Muscles Suppression in Digital Mammograms Using Hybridization of Soft Computing Methods

Authors: I. Laurence Aroquiaraj, K. Thangavel

Abstract:

Breast region segmentation is an essential prerequisite in the computerized analysis of mammograms. It aims at separating the breast tissue from the background of the mammogram and includes two independent segmentations. The first segments the background region, which usually contains annotations, labels and frames, from the whole breast region, while the second removes the pectoral muscle portion (present in Medio-Lateral Oblique (MLO) views) from the rest of the breast tissue. In this paper we propose a hybridization of Connected Component Labeling (CCL), fuzzy, and straight-line methods. The proposed methods work well for separating the pectoral region. After removal of the pectoral muscle from the mammogram, further processing is confined to the breast region alone. To demonstrate the validity of our segmentation algorithm, it is extensively tested using over 322 mammographic images from the Mammographic Image Analysis Society (MIAS) database. The segmentation results were evaluated using the Mean Absolute Error (MAE), Hausdorff Distance (HD), Probabilistic Rand Index (PRI), Local Consistency Error (LCE) and Tanimoto Coefficient (TC). The hybridization of the fuzzy and straight-line methods yielded more than 96% of the curve segmentations rated adequate or better. In addition, a comparison with similar approaches from the state of the art is given, obtaining slightly improved results. Experimental results demonstrate the effectiveness of the proposed approach.
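One of the building blocks named above, connected component labeling, is available off the shelf; the sketch below uses scipy.ndimage on a tiny binary mask invented for illustration rather than a real mammogram, and simply keeps the largest labeled region.

```python
import numpy as np
from scipy import ndimage

mask = np.array([[1, 1, 0, 0, 0],
                 [1, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1],
                 [0, 1, 0, 0, 0]])

labels, n = ndimage.label(mask)                       # label 4-connected foreground regions
sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
largest = labels == (np.argmax(sizes) + 1)            # keep the largest component (e.g., breast region)
print(n, sizes, largest.astype(int), sep="\n")
```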

Keywords: X-ray Mammography, CCL, Fuzzy, Straight line.

PDF Downloads: 1755
636 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined using a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are derived from this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
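A minimal sketch of a patch-based boundary term of the kind described above is given below: the edge weight between two neighboring pixels compares the patches centered on them rather than the two pixel intensities alone. The patch size, the Gaussian form of the weight and the sigma value are assumptions, not the authors' exact formulation.

```python
import numpy as np

def patch(img, r, c, half=1):
    """Square patch centered on (r, c), clipped at the image border."""
    return img[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]

def boundary_weight(img, p, q, sigma=10.0, half=1):
    """Edge weight between neighboring pixels p and q from patch (not pixel) similarity."""
    a, b = patch(img, *p, half), patch(img, *q, half)
    h, w = min(a.shape[0], b.shape[0]), min(a.shape[1], b.shape[1])
    diff = a[:h, :w].astype(float) - b[:h, :w].astype(float)
    return np.exp(-np.mean(diff ** 2) / (2.0 * sigma ** 2))

img = np.random.default_rng(0).integers(0, 255, size=(8, 8))
print(boundary_weight(img, (3, 3), (3, 4)))  # weight for a horizontal neighbor pair
```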

Keywords: Graph cuts, lung CT scan, lung parenchyma segmentation, patch based similarity metric.

PDF Downloads: 743
635 An Exploration of Sense of Place as Informative for Spatial Planning Guidelines: A Case Study of the Vredefort Dome World Heritage Site, South Africa

Authors: Karen Puren, Ernst Drewes, Vera Roos

Abstract:

This paper explores the sense of place in the Vredefort Dome World Heritage Site, South Africa, as an essential input for the formulation of spatial planning proposals for the area. Intangible aspects such as the personal and symbolic meanings of sites are currently not integrated into spatial planning in South Africa. This may have a detrimental effect on local inhabitants who have a long history with the site and have built up a strong place identity. Involving local inhabitants at an early stage of the planning process and incorporating their attitudes and opinions in future interventions in the area may also contribute to the acceptance of the legitimacy of future policy. An interdisciplinary and mixed-method research approach was followed in this study in order to identify possible ways to anchor spatial planning proposals in the identity of the place. In essence, the qualitative study revealed that inhabitants reflect a deep and personal relationship with and within the area, which contributes significantly to their sense of emotional security and self-identity. Results include a strong conservation-orientated attitude with regard to the natural rural character of the site, especially in the inner core.

Keywords: Place identity, Sense of Place, Spatial Planning, Vredefort Dome World Heritage Site.

PDF Downloads: 2573
634 A Trainable Neural Network Ensemble for ECG Beat Classification

Authors: Atena Sajedin, Shokoufeh Zakernejad, Soheil Faridi, Mehrdad Javadi, Reza Ebrahimpour

Abstract:

This paper illustrates the use of a combined neural network model for the classification of electrocardiogram (ECG) beats. We present a trainable neural network ensemble approach to develop a customized electrocardiogram beat classifier, in an effort to further improve the performance of ECG processing and to offer individualized health care. We present a three-stage technique for the detection of premature ventricular contractions (PVC) among normal beats and other heart diseases, comprising denoising, feature extraction and classification. First, we investigate the application of the stationary wavelet transform (SWT) for noise reduction of the ECG signals. The feature extraction module then extracts 10 ECG morphological features and one timing interval feature. A number of multilayer perceptron (MLP) neural networks with different topologies are then designed. The performance of the different combination methods as well as the efficiency of the whole system is presented. Among them, stacked generalization, as the proposed trainable combined neural network model, possesses the highest recognition rate of around 95%. Therefore, this network proves to be a suitable candidate for ECG signal diagnosis systems. ECG samples belonging to the different ECG beat types were extracted from the MIT-BIH arrhythmia database for the study.
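A minimal sketch of the stacked generalization idea mentioned above, several MLP base classifiers combined by a trainable meta-learner, is given below using scikit-learn. The synthetic 11-dimensional feature vectors merely stand in for the real ECG morphological and timing features, and the network sizes are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for 11-dimensional feature vectors (10 morphological + 1 timing)
X, y = make_classification(n_samples=600, n_features=11, n_informative=8,
                           n_classes=2, random_state=0)

base = [("mlp%d" % i, MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000, random_state=i))
        for i, h in enumerate((8, 16, 32))]
ensemble = StackingClassifier(estimators=base, final_estimator=LogisticRegression())
ensemble.fit(X[:500], y[:500])
print(ensemble.score(X[500:], y[500:]))   # held-out accuracy of the stacked ensemble
```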

Keywords: ECG beat classification, combining classifiers, premature ventricular contraction (PVC), multilayer perceptrons, wavelet transform.

PDF Downloads: 2216
633 Developing Leadership and Teamwork Skills of Pre-Service Teacher through Learning Camp

Authors: Sirimanee Banjong

Abstract:

This study aimed to 1) develop pre-service teachers' leadership skills through camp-based learning, and 2) develop pre-service teachers' teamwork skills through camp-based learning. An applied research methodology was used. The target group was derived from a purposive selection and involved 32 fourth-year students in the Early Childhood Education Program enrolled in a course entitled Seminar in Early Childhood Education offered during the second semester of the academic year 2013. The treatment was camp-based learning activities which applied a PDCA process including four stages: 1) plan, 2) do, 3) check, and 4) act. Research instruments were a learning camp program, a camp-based learning management plan, a 5-level assessment form for leadership skills and a 5-level assessment form for teamwork skills. Data were analyzed using descriptive statistics. Results were: 1) pre-service teachers' leadership skills yielded a before-treatment average score of x̄ = 3.4, S.D. = 0.62 and an after-treatment average score of x̄ = 4.29, S.D. = 0.66; 2) pre-service teachers' teamwork skills yielded a before-treatment average score of x̄ = 3.31, S.D. = 0.60 and an after-treatment average score of x̄ = 4.42, S.D. = 0.66. Both differences were statistically significant at the .05 level. Thus, the pre-service teachers' leadership and teamwork skills were significantly improved through the camp-based learning approach.

Keywords: Learning camp, leadership skills, teamwork skills.

PDF Downloads: 1350