Search results for: asynchronous input
1200 Modelling and Optimization of Laser Cutting Operations
Authors: Hany Mohamed Abdu, Mohamed Hassan Gadallah, El-Giushi Mokhtar, Yehia Mahmoud Ismail
Abstract:
Laser beam cutting is a nontraditional machining process. This paper optimizes the parameters of laser beam cutting of stainless steel (316L) by considering the effect of the input parameters, viz. power, oxygen pressure, frequency and cutting speed. A statistical design of experiments is carried out at three different levels, and process responses such as average kerf taper (Ta) and surface roughness (Ra) are measured accordingly. A quadratic mathematical model (RSM) for each of the responses is developed as a function of the process parameters. Responses predicted by the models (as per Taguchi's L27 OA) are employed to search for an optimal parametric combination to achieve the desired yield of the process. RSM models are developed for the mean responses, the S/N ratio, and the standard deviation of the responses. Optimization models are formulated as single-objective problems subject to process constraints. The models are formulated based on Analysis of Variance (ANOVA) using the MATLAB environment. Optimum solutions are compared with the Taguchi methodology results.
Keywords: optimization, laser cutting, robust design, kerf width, Taguchi method, RSM and DOE
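As an aside on the Taguchi analysis mentioned in this abstract, the smaller-the-better signal-to-noise ratio commonly used for responses such as kerf taper and surface roughness can be computed as in the sketch below; this is illustrative only, and the replicate values are hypothetical rather than taken from the cited work:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical replicate surface roughness measurements (um) for one L27 run
ra_replicates = [1.82, 1.95, 1.88]
print(f"S/N (smaller-the-better): {sn_smaller_the_better(ra_replicates):.2f} dB")
```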
Procedia PDF Downloads 620
1199 ANN Based Simulation of PWM Scheme for Seven Phase Voltage Source Inverter Using MATLAB/Simulink
Authors: Mohammad Arif Khan
Abstract:
This paper analyzes and presents the development of an Artificial Neural Network based controller for space vector modulation (ANN-SVPWM) for a seven-phase voltage source inverter. At first, the conventional method of producing a sinusoidal output voltage, in which six active and one zero space vectors are used to synthesize the input reference, is elaborated, and then a new PWM scheme called Artificial Neural Network Based PWM is presented. The ANN-based controller has the advantage of very fast implementation and analysis of the algorithms and avoids the direct computation of trigonometric and non-linear functions. The ANN controller uses an individual training strategy with fixed-weight and supervised models. A computer simulation program has been developed using MATLAB/Simulink together with the Neural Network Toolbox for training the ANN controller. A comparison of the proposed scheme with the conventional scheme is presented based on various performance indices. Extensive simulation results are provided to validate the findings.
Keywords: space vector PWM, total harmonic distortion, seven-phase, voltage source inverter, multi-phase, artificial neural network
Procedia PDF Downloads 452
1198 Approaches and Strategies Used to Increase Student Engagement in Blended Learning Courses
Authors: Pinar Ozdemir Ayber, Zeina Hojeij
Abstract:
Blended Learning (BL) is a rapidly growing teaching and learning approach, which brings together the best of both face-to-face and online learning to expand learning opportunities for students. However, there is limited research on the practices, opportunities and quality of instruction in Blended Classrooms, and on the role of the teaching faculty as well as the learners in these types of classes. This paper will highlight the researchers’ experiences and reflections on blending their classes. It will focus on the importance of designing effective lesson plans that emphasize learner engagement and motivation in alignment with course learning outcomes. In addition, it will identify the changing roles of the teacher and the learners and suggest appropriate variations to the traditional classroom setting taking into consideration the benefits and the challenges of the Blended Classroom. It is hoped that this paper would provide sufficient input for participants to reflect on ways they can blend their own lessons to promote ubiquitous learning and student autonomy. Practical tips and ideas will be shared with the participants on various strategies and technologies that were used in the researchers’ classes.
Keywords: blended learning, learner autonomy, learner engagement, learner motivation, mobile learning tools
Procedia PDF Downloads 303
1197 Materialized View Effect on Query Performance
Authors: Yusuf Ziya Ayık, Ferhat Kahveci
Abstract:
Currently, database management systems have various tools, such as backup and maintenance, and also provide statistical information such as resource usage and security. In terms of query performance, this paper covers query optimization, views, indexed tables, pre-computed materialized views, and query performance analysis, in which query plan alternatives can be created and the least costly one selected to optimize a query. Indexes and views can be created for related table columns. The literature review of this study showed that, in the course of time, despite the growing capabilities of database management systems, only database administrators are aware of the need to deal with archival and transactional data types differently. These data may be constantly changing data used in everyday life, or may come from a completed questionnaire whose data input has been finished. For both types of data, the database uses its capabilities; but, as shown in the findings section, instead of repeating similar heavy calculations that produce the same results for the same query over the survey results, the stored results of a materialized view can be used in a much simpler way. In this study, this performance difference was observed quantitatively by considering the cost of the query.
Keywords: cost of query, database management systems, materialized view, query performance
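For readers unfamiliar with the idea, the sketch below imitates a materialized view by storing the result of a heavy aggregate once and reading it back instead of recomputing it. It uses SQLite, which has no native materialized views, so an ordinary summary table stands in; the table and column names are hypothetical and not taken from the cited study:

```python
import sqlite3, random, time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE survey (question_id INTEGER, answer INTEGER)")
cur.executemany(
    "INSERT INTO survey VALUES (?, ?)",
    [(random.randint(1, 50), random.randint(1, 5)) for _ in range(100_000)],
)

# Re-running the heavy aggregate every time the report is needed
t0 = time.perf_counter()
cur.execute("SELECT question_id, AVG(answer), COUNT(*) FROM survey GROUP BY question_id")
recomputed = cur.fetchall()
t1 = time.perf_counter()

# 'Materialized view': the aggregate is computed once and stored as a table
cur.execute("""CREATE TABLE survey_summary AS
               SELECT question_id, AVG(answer) AS avg_answer, COUNT(*) AS n
               FROM survey GROUP BY question_id""")
t2 = time.perf_counter()
cur.execute("SELECT question_id, avg_answer, n FROM survey_summary")
stored = cur.fetchall()
t3 = time.perf_counter()

print(f"recompute each time: {t1 - t0:.4f} s, read stored summary: {t3 - t2:.4f} s")
```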
Procedia PDF Downloads 280
1196 Optimized Processing of Neural Sensory Information with Unwanted Artifacts
Authors: John Lachapelle
Abstract:
Introduction: Neural stimulation is increasingly targeted toward the treatment of back pain, PTSD and Parkinson's disease, and toward sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on the noise efficiency factor (NEF). Conversely, neural headstages need to handle artifacts from several sources, including power lines, movement (EMG), and neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60 dB, resulting in recovery of sensory signals in rats and primates that would previously not have been possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. Together, the techniques are like concentric moats protecting a castle; only the wanted neural signal can penetrate. There are two conditions in which the headstage operates: unwanted artifact < 50 mV (linear operation) and artifact > 50 mV (fast-settle gain-reduction signal limiting, covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider: (a) EMG signals are by nature < 10 mV. (b) 60 Hz power line signals may be > 50 mV with poor electrode cable conditions; with careful routing, much of this signal is common to both reference and active electrodes and is rejected in the differential amplifier, with < 50 mV remaining. (c) An unwanted (to the neural recorder) stimulation signal is attenuated from the stimulation electrode to the sensory electrode. The voltage seen at the sensory electrode can be modeled as Φ_m = I_o / (4πσr). For a 1 mA stimulation signal, with 1 cm spacing between electrodes, the signal is < 20 mV at the headstage. Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at 50 mV input, 50 times higher than typical headstage ASICs, with no increase in the noise floor. This requires careful balancing of the amplifier stages in the headstage ASIC, as well as consideration of the electrodes' effect on noise. The ASIC is designed to allow extraction of extremely small signals on low-impedance (< 10 kohm) electrodes, with the headstage ASIC noise floor configurable to < 700 nV/rt-Hz. Smaller, high-impedance electrodes (> 100 kohm) are typically located closer to neural sources and transduce higher-amplitude signals (> 10 uV); the ASIC low-power mode conserves power with 2 uV/rt-Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated saturation of the commercial neural amplifier as a result of large environmental artifacts. The enhanced artifact suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimized headstage ASICs allow observation of neural signals in the presence of the large artifacts that will be present in real-life implanted applications, and are targeted toward human implantation in the DARPA HAPTIX program.
Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors
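A quick numeric check of the point-source relation Φ_m = I_o/(4πσr) quoted above; the tissue conductivity value used here is an assumption for illustration and is not stated in the abstract:

```python
import math

def electrode_potential(i_amps, sigma_s_per_m, r_m):
    """Potential of a point current source in a homogeneous medium: phi = I / (4*pi*sigma*r)."""
    return i_amps / (4.0 * math.pi * sigma_s_per_m * r_m)

I_stim = 1e-3   # 1 mA stimulation current (from the abstract)
sigma  = 0.5    # assumed tissue conductivity in S/m (not given in the abstract)
r      = 0.01   # 1 cm electrode spacing (from the abstract)

phi = electrode_potential(I_stim, sigma, r)
print(f"stimulation artifact at the sensory electrode ~ {phi * 1e3:.1f} mV")  # ~16 mV, i.e. < 20 mV
```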
Procedia PDF Downloads 330
1195 Medical Experience: Usability Testing of Displaying Computed Tomography Scans and Magnetic Resonance Imaging in Virtual and Augmented Reality for Accurate Diagnosis
Authors: Alyona Gencheva
Abstract:
The most common way to study diagnostic results is using specialized programs at a stationary workplace. Magnetic Resonance Imaging is presented in a two-dimensional (2D) format, and Computed Tomography sometimes looks like a three-dimensional (3D) model that can be interacted with. The main idea of the research is to compare ways of displaying diagnostic results in virtual reality that can help a surgeon during or before an operation in augmented reality. During the experiment, the medical staff examined liver vessels in the abdominal area and heart boundaries. The search time and detection accuracy were measured on black-and-white and coloured scans. Usability testing in virtual reality shows convenient ways of interaction like hand input, voice activation, displaying risk to the patient, and the required number of scans. The results of the experiment will be used in the new C# program based on Magic Leap technology.
Keywords: augmented reality, computed tomography, magic leap, magnetic resonance imaging, usability testing, VTE risk
Procedia PDF Downloads 112
1194 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task, since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck for long input sequences. In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers
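As background to the attention mechanism discussed above, a minimal single-head scaled dot-product self-attention in plain NumPy is sketched below; it is not the DNABERT variant used in the cited work, and the toy "gene token" embeddings are random:

```python
import numpy as np

def self_attention(x):
    """Single-head scaled dot-product self-attention over a toy token sequence."""
    rng = np.random.default_rng(0)
    d = x.shape[-1]
    w_q, w_k, w_v = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(d)                    # every token scores every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v, weights

# Toy embeddings: 6 tokens, 8 dimensions
tokens = np.random.default_rng(1).standard_normal((6, 8))
context, attn = self_attention(tokens)
print(attn.round(2))   # each row sums to 1: how strongly each token attends to the others
```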
Procedia PDF Downloads 61
1193 Performance Evaluation of Refinement Method for Wideband Two-Beams Formation
Authors: C. Bunsanit
Abstract:
This paper presents the refinement method for two-beam formation with a wideband smart antenna. The refinement method for the weighting coefficients is based on fully spatial signal processing by taking the Inverse Discrete Fourier Transform (IDFT), and its simulation results are presented using MATLAB. The radiation pattern is created by multiplying the incoming signal with the real weights and then summing them together. These real weighting coefficients are computed by the IDFT method; however, the range of weight values is relatively wide. Therefore, to reduce this range, the refinement method is used. The radiation pattern is controlled by five input parameters: the maximum weighting coefficient, the wideband signal, the direction of the main beam, the beamwidth, and the maximum minor lobe level. Comparison of the simulation results obtained using the refinement method and using the IDFT alone shows that the refinement method works well for wideband two-beam formation.
Keywords: fully spatial signal processing, beam forming, refinement method, smart antenna, weighting coefficient, wideband
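A rough illustration of the IDFT weighting step described above, with a crude magnitude cap standing in for the paper's refinement method; the bin indices and cap value are arbitrary choices and not taken from the cited work:

```python
import numpy as np

n_elements = 16
# Desired pattern samples over the DFT angular bins: two beams = two non-zero bins (hypothetical)
desired = np.zeros(n_elements, dtype=complex)
desired[2] = 1.0    # first beam direction (illustrative bin index)
desired[11] = 1.0   # second beam direction

weights = np.fft.ifft(desired).real   # IDFT gives real element weighting coefficients here
print(f"weight range before capping: {weights.min():.4f} to {weights.max():.4f}")

# Crude stand-in for the refinement step: cap the magnitude of the largest weights
w_max = 0.08
capped = np.clip(weights, -w_max, w_max)
print(f"weight range after capping:  {capped.min():.4f} to {capped.max():.4f}")
```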
Procedia PDF Downloads 226
1192 Artificial Neural Network in Predicting the Soil Response in the Discrete Element Method Simulation
Authors: Zhaofeng Li, Jun Kang Chow, Yu-Hsing Wang
Abstract:
This paper attempts to bridge the soil properties and the mechanical response of soil in the discrete element method (DEM) simulation. The artificial neural network (ANN) was therefore adopted, aiming to reproduce the stress-strain-volumetric response when soil properties are given. 31 biaxial shearing tests with varying soil parameters (e.g., initial void ratio and interparticle friction coefficient) were generated using the DEM simulations. Based on these 45 sets of training data, a three-layer neural network was established which can output the entire stress-strain-volumetric curve during the shearing process from the input soil parameters. Beyond the training data, 2 additional sets of data were generated to examine the validity of the network, and the stress-strain-volumetric curves for both cases were well reproduced using this network. Overall, the ANN was found promising in predicting the soil behavior and reducing repetitive simulation work.
Keywords: artificial neural network, discrete element method, soil properties, stress-strain-volumetric response
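A minimal sketch of the kind of network described above, using scikit-learn and synthetic stand-in data; the fabricated relation below is not the DEM data set used in the study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: inputs = (initial void ratio, friction coefficient),
# output = deviatoric stress sampled at 10 axial-strain levels (a made-up relation)
def fake_dem_curve(e0, mu, strains):
    return (1.0 - e0) * mu * 100.0 * (1.0 - np.exp(-8.0 * strains))

strains = np.linspace(0.0, 0.1, 10)
X = np.column_stack([rng.uniform(0.55, 0.75, 40), rng.uniform(0.3, 0.7, 40)])
Y = np.array([fake_dem_curve(e0, mu, strains) for e0, mu in X])

net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(X[:38], Y[:38])            # train on most samples
pred = net.predict(X[38:])         # predict full curves for two unseen parameter sets
print(f"max abs error on held-out curves: {np.abs(pred - Y[38:]).max():.2f}")
```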
Procedia PDF Downloads 395
1191 Correction Factor to Enhance the Non-Standard Hammer Effect Used in Standard Penetration Test
Authors: Khaled R. Khater
Abstract:
The weight of the SPT hammer is standard (0.623 kN). Locally manufactured drilling rigs use hammers that sometimes deviate from the standard weight. This affects the field-measured blow counts (Nf) and, consequently, most of the correlations previously obtained, as they were derived on the basis of the standard hammer weight. The literature presents an energy correction factor (η2) to be applied to the total SPT input energy. This research investigates the effect of hammer weight variation, as a single parameter, on the field-measured blow counts (Nf). The outcome is a correction factor (ηk), an equation, and a correction chart. They are recommended for adjusting the misleading measured (Nf) back to the standard value, as if the standard hammer had been used. This correction is very important in cases where a non-standard hammer is being used, because the bore logs in any geotechnical report should contain true and representative (Nf) values, not to mention the long record of correlations already in hand. The study herein is carried out using a laboratory physical model to simulate the SPT drop hammer mechanism. It is designed to allow different hammer weights to be used. It is also manufactured to avoid and eliminate sources of energy loss, which produces a transmitted efficiency of up to 100%.
Keywords: correction factors, hammer weight, physical model, standard penetration test
Procedia PDF Downloads 387
1190 Impact of Similarity Ratings on Human Judgement
Authors: Ian A. McCulloh, Madelaine Zinser, Jesse Patsolic, Michael Ramos
Abstract:
Recommender systems are a common artificial intelligence (AI) application. For any given input, a search system will return a rank-ordered list of similar items. As users review returned items, they must decide when to halt the search and either revise search terms or conclude their requirement is novel with no similar items in the database. We present a statistically designed experiment that investigates the impact of similarity ratings on human judgement to conclude a search item is novel and halt the search. 450 participants were recruited from Amazon Mechanical Turk to render judgement across 12 decision tasks. We find the inclusion of ratings increases the human perception that items are novel. Percent similarity increases novelty discernment when compared with star-rated similarity or the absence of a rating. Ratings reduce the time to decide and improve decision confidence. This suggests the inclusion of similarity ratings can aid human decision-makers in knowledge search tasks.
Keywords: ratings, rankings, crowdsourcing, empirical studies, user studies, similarity measures, human-centered computing, novelty in information retrieval
Procedia PDF Downloads 131
1189 Earthquakes and Buildings: Lesson Learnt from Past Earthquakes in Turkey
Authors: Yavuz Yardım
Abstract:
The most important criterion for structural engineering is the structure's ability to carry the intended loads safely. The key element of this ability is the mathematical modeling of the real loading situation into simple load inputs to be used in structural analysis and design. Amongst many different types of loads, the most challenging load is the earthquake load: its possible magnitude is unclear and its timing is unknown. Therefore, the concepts of intended loads and safety have been built on the experience of previous earthquake impacts on structures. Understanding and developing these concepts is achieved by investigating the performance of structures after real earthquakes. Damage after an earthquake provides the results of thousands of full-scale structural tests under a real seismic load. Thus, earthquakes reveal all the weaknesses, mistakes and deficiencies of analysis, design rules and practice. This study deals with lessons learnt from earthquakes recorded in Turkey over the last two decades. The results of investigations after several earthquakes expose many deficiencies in structural detailing, inappropriate design, wrong architectural layout, and, mainly, mistakes in construction practice.
Keywords: earthquake, seismic assessment, RC buildings, building performance
Procedia PDF Downloads 264
1188 The Change of Urban Land Use/Cover Using Object Based Approach for Southern Bali
Authors: I. Gusti A. A. Rai Asmiwyati, Robert J. Corner, Ashraf M. Dewan
Abstract:
Change in land use/cover (LULC) strongly affects spatial structure and function. It can have such impacts by disrupting socio-cultural practices and disturbing physical elements. Thus, it has become essential to understand the dynamics of LULC in time and space, as this understanding can be used as a critical input for developing sustainable LULC. This study was an attempt to map and monitor the LULC change in Bali, Indonesia, from 2003 to 2013. Using object-based classification to improve the accuracy, together with change detection, multi-temporal land use/cover data were extracted from a set of ASTER satellite images. The overall accuracies of the classification maps of 2003 and 2013 were 86.99% and 80.36%, respectively. Built-up area and paddy field were the dominant types of land use/cover in both years. The dominant increase in patches in 2003 illustrated the rapid fragmentation of paddy fields and the large transformation taking place. This approach is new for the diverse urban features of Bali, which has been growing fast, and it increased the classification accuracy compared with manual pixel-based classification.
Keywords: land use/cover, urban, Bali, ASTER
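For reference, the overall accuracy figures quoted above (and the kappa coefficient) can be computed from a classification confusion matrix as sketched below; the matrix entries are hypothetical and not the study's data:

```python
import numpy as np

# Hypothetical confusion matrix (rows = reference, columns = classified)
# classes: built-up, paddy field, other vegetation, water
cm = np.array([
    [120,   6,   4,  0],
    [  8, 150,  10,  2],
    [  5,  12,  90,  3],
    [  0,   1,   2, 40],
])

total = cm.sum()
overall_accuracy = np.trace(cm) / total
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
kappa = (overall_accuracy - expected) / (1.0 - expected)

print(f"overall accuracy = {overall_accuracy:.2%}, kappa = {kappa:.3f}")
```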
Procedia PDF Downloads 540
1187 Object-Centric Process Mining Using Process Cubes
Authors: Anahita Farhang Ghahfarokhi, Alessandro Berti, Wil M.P. van der Aalst
Abstract:
Process mining provides ways to analyze business processes. Common process mining techniques consider the process as a whole. However, in real-life business processes, different behaviors exist that make the overall process too complex to interpret. Process comparison is a branch of process mining that isolates different behaviors of the process from each other by using process cubes. Process cubes organize event data using different dimensions. Each cell contains a set of events that can be used as an input to apply process mining techniques. Existing work on process cubes assumes a single case notion. However, in real processes, several case notions (e.g., order, item, package) are intertwined. Object-centric process mining is a new branch of process mining addressing multiple case notions in a process. To build a bridge between object-centric process mining and process comparison, we propose a process cube framework which supports process cube operations such as slice and dice on object-centric event logs. To facilitate the comparison, the framework is integrated with several object-centric process discovery approaches.
Keywords: multidimensional process mining, multi-perspective business processes, OLAP, process cubes, process discovery, process mining
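A toy illustration of the slice and dice operations mentioned above, using a flattened pandas event table with two intertwined case notions; the column names and events are hypothetical and this is not the authors' object-centric event log framework:

```python
import pandas as pd

# Toy flattened event log with two intertwined case notions (order and item)
log = pd.DataFrame({
    "activity":  ["create order", "pick item", "pick item", "pack order", "create order", "pick item"],
    "order":     ["o1", "o1", "o1", "o1", "o2", "o2"],
    "item":      [None, "i1", "i2", None, None, "i3"],
    "region":    ["EU", "EU", "EU", "EU", "US", "US"],
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02",
                                 "2024-01-03", "2024-01-05", "2024-01-06"]),
})

# Slice: fix one dimension to a single value (keep only EU events)
eu_cell = log[log["region"] == "EU"]

# Dice: restrict several dimensions at once (EU events of the 'pick item' activity)
eu_picks = log[(log["region"] == "EU") & (log["activity"] == "pick item")]

# Each resulting cell can then be handed to a discovery algorithm, grouped per case notion
print(eu_cell.groupby("order")["activity"].apply(list))
print(len(eu_picks), "events in the diced cell")
```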
Procedia PDF Downloads 255
1186 Regulated Output Voltage Double Switch Buck-Boost Converter for Photovoltaic Energy Application
Authors: M. Kaouane, A. Boukhelifa, A. Cheriti
Abstract:
In this paper, a new Buck-Boost DC-DC converter is designed and simulated for a photovoltaic energy system. The presented Buck-Boost converter has a double switch. Moreover, its output voltage is regulated to a constant value regardless of its input. In the presented work, the Buck-Boost converter transfers the produced energy from the photovoltaic generator to an R-L load. The converter is controlled by the pulse width modulation (PWM) technique so as to obtain a suitable output voltage and, at the same time, to carry the generator's power and keep it close to the maximum power that can be generated, by applying the right duty cycle to the PWM signals that control the converter switches. Each component and each parameter of the proposed circuit is calculated using the equations that describe each operating mode of the converter. The proposed Buck-Boost converter configuration has been simulated in the MATLAB/Simulink environment; the simulation results show that it is a good choice for maintaining a constant output voltage while ensuring good energy transfer.
Keywords: Buck-Boost converter, switch, photovoltaic, PWM, power, energy transfer
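For orientation, the ideal (lossless, continuous-conduction) buck-boost relation |Vout| = Vin·D/(1−D) can be inverted to pick the duty cycle for a regulated output; the sketch below uses hypothetical voltage values and ignores losses and the details of the double-switch topology:

```python
def buckboost_duty_cycle(v_in, v_out_mag):
    """Ideal (lossless, CCM) buck-boost relation |Vout| = Vin * D / (1 - D), solved for D."""
    return v_out_mag / (v_in + v_out_mag)

v_target = 48.0                      # regulated output magnitude (V), hypothetical
for v_pv in (25.0, 48.0, 70.0):      # PV voltage swinging below and above the target
    d = buckboost_duty_cycle(v_pv, v_target)
    print(f"Vin = {v_pv:5.1f} V  ->  duty cycle D = {d:.3f}")
```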
Procedia PDF Downloads 905
1185 A Generalized Space-Efficient Algorithm for Quantum Bit String Comparators
Authors: Khuram Shahzad, Omar Usman Khan
Abstract:
Quantum bit string comparators (QBSC) operate on two sequences of n-qubits, enabling the determination of their relationships, such as equality, greater than, or less than. This is analogous to the way conditional statements are used in programming languages. Consequently, QBSCs play a crucial role in various algorithms that can be executed or adapted for quantum computers. The development of efficient and generalized comparators for any n-qubit length has long posed a challenge, as they have a high-cost footprint and lead to quantum delays. Comparators that are efficient are associated with inputs of fixed length. As a result, comparators without a generalized circuit cannot be employed at a higher level, though they are well-suited for problems with limited size requirements. In this paper, we introduce a generalized design for the comparison of two n-qubit logic states using just two ancillary bits. The design is examined on the basis of qubit requirements, ancillary bit usage, quantum cost, quantum delay, gate operations, and circuit complexity and is tested comprehensively on various input lengths. The work allows for sufficient flexibility in the design of quantum algorithms, which can accelerate quantum algorithm development.
Keywords: quantum comparator, quantum algorithm, space-efficient comparator, comparator
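As a simplified illustration of the idea (not the two-ancilla design proposed in the paper), the sketch below builds an equality-only comparator for two n-qubit registers with a single ancilla, assuming Qiskit is available:

```python
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit

def equality_comparator(n):
    """Flag |1> on the ancilla iff the two n-qubit registers a and b hold the same bit string."""
    a, b = QuantumRegister(n, "a"), QuantumRegister(n, "b")
    anc, out = QuantumRegister(1, "anc"), ClassicalRegister(1, "eq")
    qc = QuantumCircuit(a, b, anc, out)
    for i in range(n):
        qc.cx(a[i], b[i])          # b[i] <- a[i] XOR b[i]
        qc.x(b[i])                 # b[i] is now 1 exactly when a[i] == b[i]
    qc.mcx(list(b), anc[0])        # ancilla <- AND over all positions (equality flag)
    for i in range(n):             # uncompute so register b is restored
        qc.x(b[i])
        qc.cx(a[i], b[i])
    qc.measure(anc[0], out[0])
    return qc

print(equality_comparator(3).draw())
```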
Procedia PDF Downloads 15
1184 Image Encryption Using Eureqa to Generate an Automated Mathematical Key
Authors: Halima Adel Halim Shnishah, David Mulvaney
Abstract:
Applying traditional symmetric cryptography algorithms for encryption and decryption provides secret keys with immunity against different attacks. One popular technique for generating automated secret keys is evolutionary computing using the Eureqa API tool, which gained attention in 2013. In this paper, we generate automated secret keys for image encryption and decryption using the Eureqa API (a tool used in the evolutionary computing technique). The Eureqa API models pseudo-random input data obtained from a suitable source to generate secret keys. The validity of the generated secret keys is investigated by performing various statistical tests (histogram, chi-square, correlation of two adjacent pixels, correlation between original and encrypted images, entropy and key sensitivity). Experimental results obtained from methods including histogram analysis, correlation coefficient, entropy and key sensitivity show that the proposed image encryption algorithms are secure and reliable, with the potential to be adapted for secure image communication applications.
Keywords: image encryption algorithms, Eureqa, statistical measurements, automated key generation
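Two of the statistical tests listed above, entropy and adjacent-pixel correlation, are easy to reproduce; the sketch below applies them to a random array standing in for a cipher image (illustrative only, not the cited algorithm):

```python
import numpy as np

def entropy_bits(img):
    """Shannon entropy of the 8-bit grey-level histogram (ideal for a cipher image: ~8 bits)."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=256).astype(float)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def horizontal_correlation(img):
    """Correlation between each pixel and its right neighbour (ideal for a cipher image: ~0)."""
    return float(np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1])

# Stand-in 'cipher image': uniformly random bytes
cipher = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
print(f"entropy = {entropy_bits(cipher):.4f} bits, "
      f"adjacent-pixel correlation = {horizontal_correlation(cipher):+.4f}")
```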
Procedia PDF Downloads 482
1183 Cooperative Diversity Scheme Based on MIMO-OFDM in Small Cell Network
Authors: Dong-Hyun Ha, Young-Min Ko, Chang-Bin Ha, Hyoung-Kyu Song
Abstract:
A heterogeneous network (HetNet) can provide high quality of service in a wireless communication system through the composition of small cell networks. The composition of small cell networks improves cell coverage and capacity for mobile users. Recently, various techniques using small cell networks have been researched for wireless communication systems. In this paper, a cooperative scheme that obtains high reliability is proposed for small cell networks. The proposed scheme suggests a cooperative small cell system and a new signal transmission technique for the proposed system model. The new signal transmission technique applies a cyclic delay diversity (CDD) scheme based on the multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) system to obtain improved performance. The improved performance of the proposed scheme is confirmed by the simulation results.
Keywords: adaptive transmission, cooperative communication, diversity gain, OFDM
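A minimal NumPy sketch of the cyclic delay diversity idea referenced above: each transmit antenna sends a cyclically shifted copy of the same OFDM symbol before the cyclic prefix is added; the FFT size, delays, and modulation below are illustrative, not the paper's settings:

```python
import numpy as np

n_fft, cp_len = 64, 16
rng = np.random.default_rng(0)

# One OFDM symbol: random QPSK on all subcarriers, converted to the time domain
qpsk = (rng.choice([-1, 1], n_fft) + 1j * rng.choice([-1, 1], n_fft)) / np.sqrt(2)
time_symbol = np.fft.ifft(qpsk)

# Cyclic delay diversity: each transmit antenna applies its own cyclic shift
cyclic_delays = [0, 4]                       # samples per antenna (illustrative values)
tx_signals = []
for delay in cyclic_delays:
    shifted = np.roll(time_symbol, delay)    # cyclic shift in the time domain
    tx_signals.append(np.concatenate([shifted[-cp_len:], shifted]))  # then prepend the CP

print("per-antenna transmit length:", [len(s) for s in tx_signals])
```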
Procedia PDF Downloads 502
1182 Model-Free Distributed Control of Dynamical Systems
Authors: Javad Khazaei, Rick Blum
Abstract:
Distributed control is an efficient and flexible approach for the coordination of multi-agent systems. One of the main challenges in designing a distributed controller is identifying the governing dynamics of the dynamical systems. Data-driven system identification is currently undergoing a revolution. With the availability of high-fidelity measurements and historical data, model-free identification of dynamical systems can facilitate control design without tedious modeling of high-dimensional and/or nonlinear systems. This paper develops a distributed control design based on consensus theory for linear and nonlinear dynamical systems, using sparse identification of system dynamics. Compared with existing consensus designs that heavily rely on knowing the detailed system dynamics, the proposed model-free design can accurately capture the dynamics of the system from available measurement and input data and provide guaranteed performance in consensus and tracking problems. Heterogeneous damped oscillators are chosen as example dynamical systems for validation purposes.
Keywords: consensus tracking, distributed control, model-free control, sparse identification of dynamical systems
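A minimal sketch of sparse identification of system dynamics via sequentially thresholded least squares, recovering a damped oscillator from simulated data; this illustrates the identification step only, not the authors' distributed consensus design, and the system and thresholds are hypothetical:

```python
import numpy as np

# Simulate a damped oscillator  x1' = x2,  x2' = -2*x1 - 0.3*x2  (ground truth to be recovered)
dt, steps = 0.01, 4000
X = np.zeros((steps, 2)); X[0] = [1.0, 0.0]
for k in range(steps - 1):
    x1, x2 = X[k]
    X[k + 1] = X[k] + dt * np.array([x2, -2.0 * x1 - 0.3 * x2])
dXdt = np.gradient(X, dt, axis=0)

# Candidate library: [1, x1, x2, x1^2, x1*x2, x2^2]
theta = np.column_stack([np.ones(steps), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])

# Sequentially thresholded least squares: fit, zero out small terms, refit on the rest
xi = np.linalg.lstsq(theta, dXdt, rcond=None)[0]
for _ in range(10):
    xi[np.abs(xi) < 0.05] = 0.0
    for j in range(2):
        active = np.abs(xi[:, j]) >= 0.05
        xi[active, j] = np.linalg.lstsq(theta[:, active], dXdt[:, j], rcond=None)[0]

print(np.round(xi.T, 3))   # rows close to [0, 0, 1, 0, 0, 0] and [0, -2, -0.3, 0, 0, 0]
```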
Procedia PDF Downloads 265
1181 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information
Authors: A. Preetha Priyadharshini, S. B. M. Priya
Abstract:
In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep the rate outage probability of each user below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used to solve the transmit optimization problem under imperfect CSI. Here, decomposition-based large deviation inequality and Bernstein-type inequality convex restriction methods are used to handle the optimization problem under imperfect CSI. These methods are used to achieve improved output quality and lower complexity. They provide a safe, tractable approximation of the original rate outage constraints. Based on these implementations, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.
Keywords: imperfect channel state information, outage probability, multiuser multi-input single-output, channel state information
Procedia PDF Downloads 813
1180 Voltage Problem Location Classification Using Performance of Least Squares Support Vector Machine LS-SVM and Learning Vector Quantization LVQ
Authors: M. Khaled Abduesslam, Mohammed Ali, Basher H. Alsdai, Muhammad Nizam Inayati
Abstract:
This paper presents voltage problem location classification using the performance of the Least Squares Support Vector Machine (LS-SVM) and Learning Vector Quantization (LVQ) in an electrical power system, implemented on the IEEE 39-bus New England system. The data were collected from time-domain simulation using the Power System Analysis Toolbox (PSAT). Outputs from the simulation, such as voltage, phase angle, real power and reactive power, were taken as inputs to estimate voltage stability at particular buses based on the Power Transfer Stability Index (PTSI). The simulations were carried out on the IEEE 39-bus test system by considering increased load on the system buses. To verify the proposed LS-SVM, its performance was compared with that of Learning Vector Quantization (LVQ). The results showed that LS-SVM is faster and better than LVQ. The results also demonstrated that LS-SVM achieved 0% misclassification, whereas LVQ had 7.69% misclassification.
Keywords: IEEE 39 bus, least squares support vector machine, learning vector quantization, voltage collapse
Procedia PDF Downloads 441
1179 Vulnerability of Groundwater to Pollution in Akwa Ibom State, Southern Nigeria, using the DRASTIC Model and Geographic Information System (GIS)
Authors: Aniedi A. Udo, Magnus U. Igboekwe, Rasaaq Bello, Francis D. Eyenaka, Michael C. Ohakwere-Eze
Abstract:
Groundwater vulnerability to pollution was assessed in Akwa Ibom State, Southern Nigeria, with the aim of locating areas with high potential for resource contamination, especially due to anthropogenic influence. The electrical resistivity method was utilized in the collection of the initial field data. Additional data inputs, which included depth to static water level, drilled well log data, aquifer recharge data, percentage slope, as well as soil information, were sourced from secondary sources. The initial field data were interpreted both manually and with computer modeling to provide information on the geoelectric properties of the subsurface. The interpreted results, together with the secondary data, were used to develop the DRASTIC thematic maps. A vulnerability assessment was performed using the DRASTIC model in a GIS environment, and areas with high vulnerability needing immediate attention were clearly mapped out and presented using an aquifer vulnerability map. The model was subjected to validation, and the rate of validity was 73% within the study area.
Keywords: groundwater, vulnerability, DRASTIC model, pollution
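The DRASTIC index itself is a weighted sum of seven parameter ratings; a small sketch using the standard DRASTIC weights is shown below, with hypothetical cell ratings that are not values from the study:

```python
# Standard DRASTIC weights: Depth to water 5, net Recharge 4, Aquifer media 3,
# Soil media 2, Topography 1, Impact of vadose zone 5, hydraulic Conductivity 3
DRASTIC_WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}

def drastic_index(ratings):
    """DRASTIC vulnerability index = sum of (rating * weight) over the seven parameters (ratings 1-10)."""
    return sum(ratings[p] * w for p, w in DRASTIC_WEIGHTS.items())

# Hypothetical ratings for one grid cell
cell = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 6}
print("DRASTIC index:", drastic_index(cell))   # higher index -> higher pollution potential
```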
Procedia PDF Downloads 207
1178 Simulation of Flow through Dam Foundation by FEM and ANN Methods Case Study: Shahid Abbaspour Dam
Authors: Mehrdad Shahrbanozadeh, Gholam Abbas Barani, Saeed Shojaee
Abstract:
In this study, a finite element model (Seep3D) and an artificial neural network (ANN) model were developed to simulate flow through a dam foundation. The Seep3D model is capable of simulating three-dimensional flow through a heterogeneous and anisotropic, saturated and unsaturated porous medium. Flow through the Shahid Abbaspour dam foundation has been used as a case study. The FEM, with 24960 triangular elements and 28707 nodes, was applied to model flow through the foundation of this dam, with the mesh made denser in the neighborhood of the curtain screen. The ANN model developed for the Shahid Abbaspour dam is a feedforward four-layer network employing the sigmoid function as the activation function and the back-propagation algorithm for network learning. The water level elevations upstream and downstream of the dam were used as input variables and the piezometric heads as the target outputs in the ANN model. The two models were calibrated and verified using the Shahid Abbaspour dam's piezometric data. Results of the models were compared with those measured by the piezometers, and they are in good agreement. The model results also revealed that the ANN model performed as well as, and in some cases better than, the FEM.
Keywords: seepage, dam foundation, finite element method, neural network, seep 3D model
Procedia PDF Downloads 473
1177 Nanotechnology: A New Revolution to Increase Agricultural Production
Authors: Reshu Chaudhary, R. S. Sengar
Abstract:
To increase agricultural production, Indian farmers need to be aware of the latest technology, i.e., precision farming, to maximize crop yield and minimize inputs (fertilizer, pesticide, etc.) by monitoring environmental factors. Biotechnology and information technology have provided many opportunities for the development of agriculture. However, much more still has to be done to increase agricultural production in order to achieve the target growth of agriculture, to secure food, to eliminate poverty and improve living standards, to enhance agricultural exports and national income, and to improve the quality of agricultural products. Nanotechnology can be a great element in satisfying these requirements and boosting the multi-dimensional development of agriculture in order to fulfil the dream of Indian farmers. Nanotechnology is the most rapidly growing area of science and technology, with applications in physical science, chemical science, life science, material science and earth science. Nanotechnology is a part of any nation's future. Research in nanotechnology has extremely high potential to benefit society through applications in the agricultural sciences. Nanotechnology has great potential to bring a revolution to the agricultural sector.
Keywords: agriculture, biotechnology, crop yield, nanotechnology
Procedia PDF Downloads 361
1176 Formulation of Optimal Shifting Sequence for Multi-Speed Automatic Transmission
Authors: Sireesha Tamada, Debraj Bhattacharjee, Pranab K. Dan, Prabha Bhola
Abstract:
The most important component in an automotive transmission system is the gearbox, which controls the speed of the vehicle. In an automatic transmission, the right positioning of the actuators ensures an efficient embodiment of the transmission mechanism, and the challenge lies in formulating the number of actuators associated with modelling a gearbox. Data with respect to actuation and gear shifting sequences have been retrieved from the available literature, including patent documents, and have been used in the proposed heuristics-based methodology for modelling the actuation sequence in a gearbox. This paper presents a methodological approach to designing a gearbox for the purpose of obtaining an optimal shifting sequence. The computational model considers the number of stages and the gear teeth as input parameters, since these two are the determinants of the gear ratios in an epicyclic gear train. The proposed transmission schematic, or stick diagram, aids in developing the gearbox layout design. The number of iterations and the development time required to design a gearbox layout are reduced by using this approach.
Keywords: automatic transmission, gear-shifting, multi-stage planetary gearbox, rank ordered clustering
Procedia PDF Downloads 325
1175 Internal Leakage Analysis from Pd to Pc Port Direction in ECV Body Used in External Variable Type A/C Compressor
Authors: M. Iqbal Mahmud, Haeng Muk Cho, Seo Hyun Sang, Wang Wen Hai, Chang Heon Yi, Man Ik Hwang, Dae Hoon Kang
Abstract:
The solenoid-operated electromagnetic control valve (ECV) plays an important role in a car's air conditioning control system. The ECV is used in an external variable displacement swash plate type compressor and controls the entire air conditioning system by means of a pulse width modulation (PWM) input signal supplied from an external source (controller). A complete ECV contains a number of internal components, such as the valve body, core, valve guide, plunger, guide pin, plunger spring and bellows. While designing the ECV, the dimensions of the different internal items must meet the standard requirements, which is quite challenging. In this research paper, especially the dimensioning of the ECV body and its three pressure ports, through which the air/refrigerant passes, is considered. Here, an internal leakage test analysis of the ECV body is carried out from its discharge port (Pd) to its crankcase port (Pc) when the guide valve is placed inside it. The experiments were made in both ordinary and digital systems using different assumptions, and the results are thereafter compared.
Keywords: electromagnetic control valve (ECV), leakage, pressure port, valve body, valve guide
Procedia PDF Downloads 408
1174 Finding Data Envelopment Analysis Targets Using Multi-Objective Programming in DEA-R with Stochastic Data
Authors: R. Shamsi, F. Sharifi
Abstract:
In this paper, we obtain the projection of inefficient units in data envelopment analysis (DEA) in the case of stochastic inputs and outputs using a multi-objective programming (MOP) structure. In some problems, the inputs might be stochastic while the outputs are deterministic, and vice versa. In such cases, we propose a multi-objective DEA-R model, because in some cases (e.g., when unnecessary and irrational weights in the BCC model reduce the efficiency score) an efficient decision-making unit (DMU) is introduced as inefficient by the BCC model, whereas the DMU is considered efficient by the DEA-R model. In some other cases, only the ratio of stochastic data may be available (e.g., the ratio of stochastic inputs to stochastic outputs). Thus, we provide a multi-objective DEA model without explicit outputs and prove that the input-oriented MOP DEA-R model in the constant returns to scale case can be replaced by the MOP-DEA model without explicit outputs in the variable returns to scale case, and vice versa. Using interactive methods for solving the proposed model yields a projection corresponding to the viewpoint of the DM and the analyst, which is nearer to reality and more practical. Finally, an application is provided.
Keywords: DEA-R, multi-objective programming, stochastic data, data envelopment analysis
Procedia PDF Downloads 106
1173 Experimental Investigation on Sustainable Machining of Hastelloy C-276 Utilizing Different Cooling Strategies
Authors: Balkar Singh, Gurpreet Singh, Vivek Aggarwal, Sehijpal Singh
Abstract:
The present research focused on improving the machinability of Hastelloy C-276 at different machining speeds (31, 55, and 79 m/min). CO2 gas and minimum quantity lubrication (MQL) were applied for cooling and lubrication to enhance the machinability of the superalloy. The output, in the form of surface roughness (S.R.) and heat generation, was monitored under dry, MQL, and MQL-CO2-cooled conditions. The design of experiments was prepared using MINITAB software, utilizing a Taguchi L27 orthogonal array, followed by ANOVA to find the impact of the input variables on the output responses. At different speeds and lubrication conditions, different behavioral patterns for surface roughness and temperature were observed. ANOVA showed that the cooling environment had the largest impact on S.R. (50%), followed by cutting speed (29.84%), feed rate (5.09%), and, least, depth of cut (4.95%). On the other hand, the temperature was most influenced by cutting speed (69.12%), followed by Cryo-MQL (8.09%), feed rate (7.59%), and depth of cut (6.20%). Experimental results revealed that Cryo-MQL cooling improved the surface roughness by 12% compared with the MQL condition.
Keywords: Hastelloy C-276, minimum quantity lubrication, olive oil, cryogenic cooling (CO2)
Procedia PDF Downloads 142
1172 Application and Assessment of Artificial Neural Networks for Biodiesel Iodine Value Prediction
Authors: Raquel M. De sousa, Sofiane Labidi, Allan Kardec D. Barros, Alex O. Barradas Filho, Aldalea L. B. Marques
Abstract:
Several parameters are established in order to measure biodiesel quality. One of them is the iodine value, an important parameter that measures the total unsaturation within a mixture of fatty acids. Limiting unsaturated fatty acids is necessary, since heating a high quantity of them results in either the formation of deposits inside the engine or damage to the lubricant. Determination of the iodine value by the official procedure tends to be very laborious, with high costs and toxic reagents; as an alternative to these problems, this study uses an artificial neural network (ANN) to predict the iodine value. The network development methodology used 13 fatty acid esters as inputs, and backpropagation-type training algorithms were optimized in order to obtain an architecture for iodine value prediction. This study allowed us to demonstrate the neural networks' ability to learn the correlation between biodiesel quality properties, in this case the iodine value, and the molecular structures that make up the fuel. The model developed in the study reached a correlation coefficient (R) of 0.99 for both network validation and network simulation, with the Levenberg-Marquardt algorithm.
Keywords: artificial neural networks, biodiesel, iodine value, prediction
Procedia PDF Downloads 606
1171 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design
Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan
Abstract:
Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state under extensive economic disturbance. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered to be the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). The reliability and validity of the questionnaire are examined based on Cronbach's alpha and the t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture together compose about 50 percent of the total weight.
Keywords: banking system, Data Envelopment Analysis (DEA), Integrated Resilience Engineering (IRE), performance evaluation, perturbation analysis
Procedia PDF Downloads 188