Search results for: efficient crow search algorithm
8020 Automated, Objective Assessment of Pilot Performance in Simulated Environment
Authors: Maciej Zasuwa, Grzegorz Ptasinski, Antoni Kopyt
Abstract:
Nowadays, flight simulators offer tremendous possibilities for safe and cost-effective pilot training through the use of powerful computational tools. Because technology has outpaced methodology, the vast majority of training-related work is still done by human instructors, which makes assessment inefficient and vulnerable to instructors’ subjectivity. The research presents an Objective Assessment Tool (gOAT) developed at the Warsaw University of Technology and tested on an SW-4 helicopter flight simulator. The tool uses a database of predefined manoeuvres, defined and integrated into the virtual environment. These were implemented based on the Aeronautical Design Standard Performance Specification Handling Qualities Requirements for Military Rotorcraft (ADS-33), with predefined Mission-Task-Elements (MTEs). The core element of gOAT is an enhanced algorithm that provides the instructor with a new set of information; in detail, a set of objective flight parameters fused with a report on the psychophysical state of the pilot. While the pilot performs the task, the gOAT system automatically calculates performance using the embedded algorithms, data registered by the simulator software (position, orientation, velocity, etc.), as well as measurements of changes in the pilot’s psychophysiological state (temperature, sweating, heart rate). The complete set of measurements is presented online at the instructor’s station in a dedicated graphical interface. The presented tool is based on open-source solutions and is flexible for editing. Additional manoeuvres can easily be added using a guide developed by the authors, and MTEs can be changed by the instructor even during an exercise. The algorithm and measurements used allow not only the implementation of basic stress-level measurements but also a significant reduction of the instructor’s workload. The tool can be used for training purposes as well as for periodic checks of aircrew. Its flexibility and ease of modification allow for wide-ranging further development and customization. Depending on the simulation purpose, gOAT can be adjusted to support simulators of aircraft, helicopters, or unmanned aerial vehicles (UAVs).
Keywords: automated assessment, flight simulator, human factors, pilot training
Procedia PDF Downloads 150
8019 Wavelet Based Advanced Encryption Standard Algorithm for Image Encryption
Authors: Ajish Sreedharan
Abstract:
With the fast evolution of digital data exchange, information security has become increasingly important in data storage and transmission. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. Since the encryption process in AES is applied to the whole image, it is difficult to improve its efficiency. In this paper, wavelet decomposition is used to concentrate the main information of the image in the low-frequency part. AES encryption is then applied to the low-frequency part, the high-frequency parts are XORed with the encrypted low-frequency part, and a wavelet reconstruction is applied. Theoretical analysis and experimental results show that the proposed algorithm has high efficiency and provides security suited for image data transmission.
Keywords: discrete wavelet transforms, AES, dynamic SBox
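The pipeline described above (DWT, AES on the low band, XOR of the high bands with the encrypted low band, inverse DWT at the receiver) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes the PyWavelets and PyCryptodome packages, a Haar wavelet, and simplified key/IV handling, and it omits the dynamic S-box mentioned in the keywords.

```python
# Minimal sketch of the wavelet-domain AES scheme, under the assumptions stated above.
import numpy as np
import pywt
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

def wavelet_aes_encrypt(image: np.ndarray, key: bytes):
    # 1. One-level 2D Haar decomposition; the LL band concentrates most image energy.
    LL, (LH, HL, HH) = pywt.dwt2(image.astype(np.float32), "haar")

    # 2. AES-encrypt the serialized low-frequency coefficients (zero-padded to 16 bytes).
    ll_bytes = LL.astype(np.float32).tobytes()
    ll_bytes += bytes((-len(ll_bytes)) % 16)
    iv = get_random_bytes(16)
    enc_ll = AES.new(key, AES.MODE_CBC, iv).encrypt(ll_bytes)

    # 3. XOR each high-frequency band (as raw bytes) with a keystream taken from the
    #    encrypted LL band; the receiver reverses the XOR, decrypts LL, and runs idwt2.
    stream = np.frombuffer(enc_ll, dtype=np.uint8)

    def mask(band):
        raw = np.frombuffer(band.astype(np.float32).tobytes(), dtype=np.uint8)
        return np.bitwise_xor(raw, np.resize(stream, raw.size))

    return iv, enc_ll, [mask(b) for b in (LH, HL, HH)]

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64))
    iv, enc_ll, masked = wavelet_aes_encrypt(img, get_random_bytes(16))
    print(len(enc_ll), [m.size for m in masked])
```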
Procedia PDF Downloads 432
8018 Conceptual Model for Massive Open Online Blended Courses Based on Disciplines’ Concepts Capitalization and Obstacles’ Detection
Authors: N. Hammid, F. Bouarab-Dahmani, T. Berkane
Abstract:
Since its appearance, the MOOC (massive open online course) has been gaining more and more attention from educational communities around the world. Beyond current MOOC designs and purposes, the creators of MOOCs have focused on the importance of connection and knowledge exchange between individuals in learning. In this paper, we present a conceptual model for massive open online blended courses in which teachers around the world can collaborate and exchange their experience to build common, efficient content designed as a MOOC that is open to their students, offering a better learning experience. The model is based on the capitalization of disciplines’ concepts and the detection of the obstacles met by students when faced with problem situations (exercises, projects, case studies, etc.). This detection is possible by analyzing the frequency of semantic errors committed by the students. The participation of teachers in the design of the course, and the attendance of their students, can guarantee efficient and extensive participation (a large number of participants), learner motivation, and sound evaluation, since the teachers who design the course also assess their students. Thus, the teachers’ reviews, together with their knowledge, offer a better assessment and efficient connections to their students.
Keywords: massive open online course, MOOC, online learning, e-learning
Procedia PDF Downloads 268
8017 Implementation and Modeling of a Quadrotor
Authors: Ersan Aktas, Eren Turanoğuz
Abstract:
In this study, a quad-rotor, electrically driven unmanned aerial vehicle is designed and modeled using fundamental dynamic equations. The mechanical, electronic, and control systems of the air vehicle are then designed and implemented. Brushless motor speeds are altered via electronic speed controllers in order to achieve the desired controllability. The vehicle's fundamental Euler angles (i.e., roll angle, pitch angle, and yaw angle) are obtained via an AHRS sensor. These angles are provided as input to the control algorithm that runs on the soft processor on the electronic card, where the vehicle control algorithm is implemented. A controller is designed and tuned for each Euler angle. Finally, flight tests have been performed to observe and improve the flight characteristics.
Keywords: quadrotor, UAS applications, control architectures, PID
Procedia PDF Downloads 365
8016 A Firefly Based Optimization Technique for Optimal Planning of Voltage Controlled Distributed Generators
Authors: M. M. Othman, Walid El-Khattam, Y. G. Hegazy, A. Y. Abdelaziz
Abstract:
This paper presents a method for finding the optimal location and capacity of dispatchable DGs connected to distribution feeders for optimal planning at a specified power loss without violating the system's practical constraints. In the proposed algorithm, the distributed generation units are modeled as voltage-controlled nodes, with the flexibility to be converted to constant-power nodes in case of reactive power limit violation. The proposed algorithm is implemented in MATLAB and tested on the IEEE 37-node feeder. The results, validated by comparison with results obtained from other competing methods, show the effectiveness, accuracy, and speed of the proposed method.
Keywords: distributed generators, firefly technique, optimization, power loss
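For readers unfamiliar with the firefly technique itself, the sketch below shows the standard attractiveness-based position update applied to a two-variable siting/sizing decision. The power-loss objective here is a placeholder assumption; in the paper the fitness would come from a distribution load flow with the voltage-controlled DG model described above.

```python
# Generic firefly-algorithm sketch for placing/sizing one DG unit (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def power_loss(x):
    # Placeholder objective: x = [node_index, dg_size_mw]; replace with a load-flow call.
    node, size = x
    return (size - 1.5) ** 2 + 0.01 * (node - 20) ** 2

def firefly(obj, bounds, n=25, iters=200, beta0=1.0, gamma=1.0, alpha=0.2):
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n, len(lo)))
    cost = np.array([obj(p) for p in pos])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # move firefly i toward the brighter firefly j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(len(lo)) - 0.5)
                    pos[i] = np.clip(pos[i], lo, hi)
                    cost[i] = obj(pos[i])
    best = np.argmin(cost)
    return pos[best], cost[best]

best_x, best_f = firefly(power_loss, bounds=[(2, 37), (0.1, 3.0)])
print("best (node, size MW):", best_x, "loss:", best_f)
```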
Procedia PDF Downloads 533
8015 Text Mining Techniques for Prioritizing Pathogenic Mutations in Protein Families Known to Misfold or Aggregate
Authors: Khaleel Saleh Al-Rababah
Abstract:
Amyloid-fibril-forming regions, known as protein aggregates, in the sequences of some protein families are associated with a number of diseases known as amyloidoses. Mutations play a role in forming fibrils by accelerating the fibril-formation process. In this paper, we want to extract the diseases caused by those mutations as a result of the mutations' impact on the structural and functional properties of the aggregated protein. We propose a text mining system to automatically extract mutations, diseases, and the relations between them. We present a finite-state algorithm to cluster mutations found in the same sentence, since a sentence can contain different mutations that cause different diseases. We also present a coreference algorithm that enables cross-linking of sentences.
Keywords: amyloid, amyloidosis, coreference, protein, text mining
Procedia PDF Downloads 525
8014 Designing of Efficient Polysulphide Reservoirs to Boost the Performance of Li-S Battery
Authors: Sarish Rehman, Kishwar Khan, Yanglong Hou
Abstract:
Among the myriad existing energy-storage technologies, lithium–sulfur batteries (LSBs) show appealing potential for the ubiquitous growth of next-generation electrical energy storage applications, owing to their unparalleled theoretical energy density of 2600 Wh/kg, over five times larger than that of conventional lithium-ion batteries (LIBs). Despite significant advances, large-scale implementation is plagued by a multitude of issues: particularly the intrinsically insulating nature of sulfur (≈10⁻³⁰ S/cm), mechanical degradation of the cathode due to large volume changes of sulfur of up to 80% during cycling, and loss of active material (producing the polysulfide shuttle effect). We design a unique structure, namely silicon/silica (Si/SiO2) crosslinked with hierarchical porous carbon spheres (Si/SiO2@C), and use it as a new and efficient sulfur host to prepare Si/SiO2@C-S hybrid spheres that address the hurdle of polysulfide dissolution. As a result of the intriguing structural advantages of the developed hybrid spheres, the material acts as an efficient polysulfide reservoir, enhancing lithium-sulfur battery (LSB) performance in terms of capacity, rate capability, and cycling stability via combined chemical and physical effects.
Keywords: high specific surface area, high power density, high content of sulfur, lithium sulfur battery
Procedia PDF Downloads 229
8013 A New Approach to the Digital Implementation of Analog Controllers for a Power System Control
Authors: G. Shabib, Esam H. Abd-Elhameed, G. Magdy
Abstract:
In this paper, a comparison of discrete-time PID and PSS controllers is presented through the small-signal stability of a power system comprising one machine connected to an infinite bus. The comparison is achieved by using a new discretization approach that converts the s-domain model of the analog controllers to a z-domain model in order to enhance the damping of a single-machine power system. The new method utilizes the Plant Input Mapping (PIM) algorithm. The proposed algorithm is stable for any sampling rate and takes the closed-loop characteristic into consideration. In contrast, traditional discretization methods, such as Tustin’s method, produce satisfactory results only when the sampling period is sufficiently short.
Keywords: power system stabilizer (PSS), proportional-integral-derivative (PID), plant input mapping (PIM)
Procedia PDF Downloads 505
8012 Urban Rail Transit CBTC Computer Interlocking Subsystem Relying on Multi-Template Pen Point Tracking Algorithm
Authors: Xinli Chen, Xue Su
Abstract:
In the urban rail transit CBTC system, interlocking is considered one of the most fundamental subsystems, characterized by logical complexity and high safety requirements. The development and verification of traditional interlocking subsystems are entirely manual processes that rely too heavily on the designer, which often hides many uncertain factors. In order to solve this problem, this article uses the multi-template pen-point (nib) tracking algorithm for model construction and verification, realizing the main safety attributes and using SCADE for formal verification. Experimental results show that this method helps to improve the quality and efficiency of interlocking software.
Keywords: computer interlocking subsystem, pen-point tracking, communication-based train control system, multi-template tip tracking
Procedia PDF Downloads 160
8011 Optical Variability of Faint Quasars
Authors: Kassa Endalamaw Rewnu
Abstract:
The variability properties of a quasar sample, spectroscopically complete to magnitude J = 22.0, are investigated on a time baseline of 2 years using three different photometric bands (U, J and F). The original sample was obtained using a combination of different selection criteria: colors, slitless spectroscopy and variability, based on a time baseline of 1 yr. The main goals of this work are two-fold: first, to derive the percentage of variable quasars on a relatively short time baseline; secondly, to search for new quasar candidates missed by the other selection criteria; and, thus, to estimate the completeness of the spectroscopic sample. In order to achieve these goals, we have extracted all the candidate variable objects from a sample of about 1800 stellar or quasi-stellar objects with limiting magnitude J = 22.50 over an area of about 0.50 deg². We find that > 65% of all the objects selected as possible variables are either confirmed quasars or quasar candidates on the basis of their colors. This percentage increases even further if we exclude from our lists of variable candidates a number of objects equal to that expected on the basis of 'contamination' induced by our photometric errors. The percentage of variable quasars in the spectroscopic sample is also high, reaching about 50%. On the basis of these results, we can estimate that the incompleteness of the original spectroscopic sample is < 12%. We conclude that variability analysis of data with small photometric errors can be successfully used as an efficient and independent (or at least auxiliary) selection method in quasar surveys, even when the time baseline is relatively short. Finally, when corrected for the different intrinsic time lags corresponding to a fixed observed time baseline, our data do not show a statistically significant correlation between variability and either absolute luminosity or redshift.
Keywords: nuclear activity, galaxies, active quasars, variability
Procedia PDF Downloads 81
8010 Molecular-Dynamics Study of H₂-C₃H₈-Hydrate Dissociation: Non-Equilibrium Analysis
Authors: Mohammad Reza Ghaani, Niall English
Abstract:
Hydrogen is looked upon as the next-generation clean-energy carrier; the search for an efficient material and method for storing hydrogen has been, and is, pursued relentlessly. Clathrate hydrates are inclusion compounds wherein guest gas molecules like hydrogen are trapped in a host water-lattice framework. These materials can be regarded as potentially attractive hosting environments for physical hydrogen storage (i.e., no chemical reaction upon storage). Non-equilibrium molecular dynamics (NEMD) simulations have been performed to investigate the thermally driven break-up of propane-hydrate interfaces with liquid water at 270–300 K, with the propane hydrate containing either one or no hydrogen molecule in each of its small cavities. In addition, two types of hydrate-surface water-lattice molecular termination were adopted at the hydrate edge with water: a (001)-direct surface cleavage and one with completed cages. The geometric hydrate-ice-liquid distinction criteria of Báez and Clancy were employed to distinguish between the hydrate and ice lattices and the liquid phase. Consequently, the melting temperatures of the interface were estimated, and dissociation rates were observed to be strongly dependent on temperature, with higher dissociation rates at larger over-temperatures vis-à-vis melting. The different hydrate-edge terminations of the hydrate-water interface led to statistically significant differences in the observed melting point and dissociation profile: it was found that the clathrate with the planar interface melts at around 280 K, whilst the melting temperature of the cage-completed interface was determined to be circa 270 K.
Keywords: hydrogen storage, clathrate hydrate, molecular dynamics, thermal dissociation
Procedia PDF Downloads 276
8009 Efficient Reuse of Exome Sequencing Data for Copy Number Variation Callings
Authors: Chen Wang, Jared Evans, Yan Asmann
Abstract:
With the quick evolution of next-generation sequencing techniques, whole-exome or exome-panel data have become a cost-effective way to detect small exonic mutations, but there has been a growing desire to accurately detect copy number variations (CNVs) as well. In order to address these research and clinical needs, we developed a sequencing-coverage-pattern-based method for copy number detection, data integrity checks, CNV calling, and visualization reports. The developed methodology includes complete automation to increase usability, genome content-coverage bias correction, CNV segmentation, data quality reports, and publication-quality images. Poor-quality outlier samples are identified and removed automatically. Multiple experimental batches are routinely detected and reduced to a clean subset of samples before analysis. Algorithm improvements were also made to improve somatic CNV detection as well as germline CNV detection in trio families. Additionally, a set of utilities is included to help users produce CNV plots for genes of interest. We demonstrate the somatic CNV enhancements by accurately detecting CNVs in exome-wide data from The Cancer Genome Atlas cancer samples and in a lymphoma case study with paired tumor and normal samples. We also show efficient reuse of existing exome sequencing data for improved germline CNV calling in a trio family from the phase-III study of the 1000 Genomes Project, detecting CNVs with various modes of inheritance. The performance of the developed method is evaluated by comparing CNV calling results with results from other orthogonal copy number platforms. Through our case studies, reusing exome sequencing data for calling CNVs offers several noticeable benefits, including better quality control for exome sequencing data, improved joint analysis with single nucleotide variant calls, and novel genomic discovery from under-utilized existing whole-exome and custom exome-panel data.
Keywords: bioinformatics, computational genetics, copy number variations, data reuse, exome sequencing, next generation sequencing
Procedia PDF Downloads 257
8008 Care: A Cluster Based Approach for Reliable and Efficient Routing Protocol in Wireless Sensor Networks
Authors: K. Prasanth, S. Hafeezullah Khan, B. Haribalakrishnan, D. Arun, S. Jayapriya, S. Dhivya, N. Vijayarangan
Abstract:
The main goal of our approach is to find the optimum positions for the sensor nodes, reinforcing communications at points where a lack of connectivity is found. Routing is the major problem in a sensor network's data transfer between nodes. We provide an efficient routing technique so that data signals reach the base station quickly and without interruption. Clustering and routing are the two key factors to be considered in a WSN. To carry out the communication from the nodes to their cluster head, we propose a parameterizable protocol so that the developer can indicate whether the routing has to be sensitive to the link quality of the nodes or to their battery levels.
Keywords: clusters, routing, wireless sensor networks, three phases, sensor networks
Procedia PDF Downloads 505
8007 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms
Authors: Rikson Gultom
Abstract:
Hate speech and abusive language on social media are difficult to detect; usually, they are detected only after becoming viral in cyberspace, which is too late for prevention. An early detection system with fairly good accuracy is needed so that it can reduce the conflicts that occur in society caused by social media posts attacking individuals, groups, and governments in Indonesia. The purpose of this study is to find an early detection model for Twitter using machine learning that has high accuracy among several machine learning methods studied. In this study, the support vector machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the support vector machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented as a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and identified the best model with the highest accuracy.
Keywords: abusive language, hate speech, machine learning, optimization, social media
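The abstract does not list the GA encoding, so the sketch below illustrates one common reading of "SVM with genetic algorithm": a small GA searching the SVM hyperparameters that maximize cross-validated accuracy on a TF-IDF representation. The corpus, fitness design, and operators are assumptions, not the study's exact setup.

```python
# GA-tuned SVM sketch: genes = [log10(C), log10(gamma)], fitness = 3-fold CV accuracy.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Placeholder corpus standing in for labelled Indonesian-language tweets.
texts = [f"tweet sample {i} kata kasar" if i % 2 else f"tweet sample {i} netral"
         for i in range(60)]
labels = np.array([i % 2 for i in range(60)])

def fitness(genes):
    C, gamma = 10.0 ** genes[0], 10.0 ** genes[1]
    clf = make_pipeline(TfidfVectorizer(), SVC(C=C, gamma=gamma, kernel="rbf"))
    return cross_val_score(clf, texts, labels, cv=3).mean()

def ga(pop_size=10, gens=15, bounds=(-2, 3)):
    pop = rng.uniform(*bounds, size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(g) for g in pop])
        # Tournament selection of size 2.
        parents = pop[[max(rng.choice(pop_size, 2), key=lambda k: scores[k])
                       for _ in range(pop_size)]]
        # Uniform crossover with a random mate, then Gaussian mutation.
        children = parents.copy()
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random(children.shape) < 0.5
        children[mask] = mates[mask]
        children += rng.normal(0, 0.3, children.shape)
        children = np.clip(children, *bounds)
        children[0] = pop[np.argmax(scores)]   # elitism: keep the current best
        pop = children
    scores = np.array([fitness(g) for g in pop])
    return pop[np.argmax(scores)], scores.max()

best, acc = ga()
print("best log10(C), log10(gamma):", best, "cv accuracy:", acc)
```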
Procedia PDF Downloads 128
8006 Single Machine Scheduling Problem to Minimize the Number of Tardy Jobs
Authors: Ali Allahverdi, Harun Aydilek, Asiye Aydilek
Abstract:
Minimizing the number of tardy jobs is an important factor to consider while making scheduling decisions, because on-time shipments are vital for lowering costs and increasing customer satisfaction. This paper addresses the single machine scheduling problem with the objective of minimizing the number of tardy jobs. The only known information is the lower and upper bounds for the processing times, together with deterministic job due dates. A dominance relation is established, and an algorithm is proposed. Several heuristics are generated from the proposed algorithm. Computational analysis indicates that the performance of one of the heuristics is very close to the optimal solution, i.e., on average, within 1.5% of the optimal solution.
Keywords: single machine scheduling, number of tardy jobs, heuristic, lower and upper bounds
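The abstract does not spell out the proposed heuristics. One natural baseline consistent with the setting, shown below purely as an assumed illustration (not the authors' algorithm), is to run the classical Moore-Hodgson rule on a point estimate of each processing-time interval, e.g. the midpoint of its bounds.

```python
# Moore-Hodgson on the midpoints of the [lower, upper] processing-time bounds.
import heapq

def moore_hodgson(jobs):
    """jobs: list of (processing_time, due_date). Returns on-time sequence and tardy set."""
    on_time, tardy, t = [], [], 0
    heap = []  # max-heap (negated) of processing times of currently scheduled jobs
    for p, d in sorted(jobs, key=lambda j: j[1]):  # earliest-due-date order
        on_time.append((p, d))
        heapq.heappush(heap, (-p, p, d))
        t += p
        if t > d:  # the latest job would be tardy: drop the longest scheduled job
            _, p_max, d_max = heapq.heappop(heap)
            on_time.remove((p_max, d_max))
            tardy.append((p_max, d_max))
            t -= p_max
    return on_time, tardy

# Interval processing times: ((lower, upper), due_date); use midpoints as estimates.
bounds = [((2, 6), 7), ((1, 3), 5), ((4, 8), 9), ((2, 4), 12)]
jobs = [((lo + hi) / 2, d) for (lo, hi), d in bounds]
on_time, tardy = moore_hodgson(jobs)
print("number of tardy jobs (estimate):", len(tardy))
```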
Procedia PDF Downloads 555
8005 Adomian’s Decomposition Method to Functionally Graded Thermoelastic Materials with Power Law
Authors: Hamdy M. Youssef, Eman A. Al-Lehaibi
Abstract:
This paper presents an iteration method for the numerical solution of a one-dimensional problem of generalized thermoelasticity with one relaxation time under given initial and boundary conditions. A thermoelastic material with variable properties following a power-law functional grading has been considered. Adomian’s decomposition technique has been applied to the governing equations, and the numerical results have been calculated using the iteration method with a specific algorithm. The numerical results are represented in figures, which affirm that Adomian’s decomposition method is a successful method for modeling thermoelastic problems. Moreover, the empirical parameter of the functional grading and the lattice design parameter have significant effects on the temperature increment, the strain, the stress, and the displacement.
Keywords: Adomian, decomposition method, generalized thermoelasticity, algorithm
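For readers unfamiliar with the technique, the generic Adomian recursion takes the following form. This is the textbook scheme given for orientation only, not the paper's specific coupled thermoelasticity equations with power-law graded coefficients.

```latex
% L: highest-order invertible linear operator, R: remaining linear part,
% N: nonlinear part, g: source term, \Phi: term carrying the initial/boundary data.
\begin{align}
  L u + R u + N u &= g, \qquad
  u = \sum_{n=0}^{\infty} u_n, \qquad
  N u = \sum_{n=0}^{\infty} A_n, \\
  A_n &= \frac{1}{n!} \frac{d^n}{d\lambda^n}
        \left[ N\!\left( \sum_{k=0}^{\infty} \lambda^k u_k \right) \right]_{\lambda = 0}, \\
  u_0 &= \Phi + L^{-1} g, \qquad
  u_{n+1} = -L^{-1}\!\left( R\, u_n + A_n \right), \quad n \ge 0.
\end{align}
```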
Procedia PDF Downloads 143
8004 Identifying Biomarker Response Patterns to Vitamin D Supplementation in Type 2 Diabetes Using K-means Clustering: A Meta-Analytic Approach to Glycemic and Lipid Profile Modulation
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Background and Aims: This meta-analysis aimed to evaluate the effect of vitamin D supplementation on key metabolic and cardiovascular parameters, such as glycated hemoglobin (HbA1C), fasting blood sugar (FBS), low-density lipoprotein (LDL), high-density lipoprotein (HDL), systolic blood pressure (SBP), and total vitamin D levels in patients with Type 2 diabetes mellitus (T2DM). Methods: A systematic search was performed across databases, including PubMed, Scopus, Embase, Web of Science, Cochrane Library, and ClinicalTrials.gov, from January 1990 to January 2024. A total of 4,177 relevant studies were initially identified. Using an unsupervised K-means clustering algorithm, publications were grouped based on common text features. Maximum entropy classification was then applied to filter studies that matched a pre-identified training set of 139 potentially relevant articles. These selected studies were manually screened for relevance. A parallel manual selection of all initially searched studies was conducted for validation. The final inclusion of studies was based on full-text evaluation, quality assessment, and meta-regression models using random effects. Sensitivity analysis and publication bias assessments were also performed to ensure robustness. Results: The unsupervised K-means clustering algorithm grouped the patients based on their responses to vitamin D supplementation, using key biomarkers such as HbA1C, FBS, LDL, HDL, SBP, and total vitamin D levels. Two primary clusters emerged: one representing patients who experienced significant improvements in these markers and another showing minimal or no change. Patients in the cluster associated with significant improvement exhibited lower HbA1C, FBS, and LDL levels after vitamin D supplementation, while HDL and total vitamin D levels increased. The analysis showed that vitamin D supplementation was particularly effective in reducing HbA1C, FBS, and LDL within this cluster. Furthermore, BMI, weight gain, and disease duration were identified as factors that influenced cluster assignment, with patients having lower BMI and shorter disease duration being more likely to belong to the improvement cluster. Conclusion: The findings of this machine learning-assisted meta-analysis confirm that vitamin D supplementation can significantly improve glycemic control and reduce the risk of cardiovascular complications in T2DM patients. The use of automated screening techniques streamlined the process, ensuring the comprehensive evaluation of a large body of evidence while maintaining the validity of traditional manual review processes.
Keywords: HbA1C, T2DM, SBP, FBS
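The patient-clustering step described above can be reproduced schematically as follows. The per-patient biomarker changes below are synthetic placeholders, not the meta-analysis data; the feature list simply mirrors the markers named in the abstract, and k=2 matches the "improved" vs "minimal change" clusters.

```python
# K-means sketch over standardised biomarker-change vectors (synthetic data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: change in HbA1c, FBS, LDL, HDL, SBP, total vitamin D after supplementation.
responders = rng.normal([-0.8, -15, -10, 4, -5, 12], 1.0, size=(60, 6))
non_responders = rng.normal([0.0, -1, 0, 0, 0, 3], 1.0, size=(60, 6))
X = np.vstack([responders, non_responders])

X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

for k in range(2):
    print(f"cluster {k}: n={np.sum(labels == k)}, "
          f"mean change = {X[labels == k].mean(axis=0).round(1)}")
```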
Procedia PDF Downloads 11
8003 Performance Evaluation of Karanja Oil Based Biodiesel Engine Using Modified Genetic Algorithm
Authors: G. Bhushan, S. Dhingra, K. K. Dubey
Abstract:
This paper presents the evaluation of the performance (BSFC and BTE), combustion (Pmax), and emission (CO, NOx, HC, and smoke opacity) parameters of karanja biodiesel in a single-cylinder, four-stroke, direct-injection diesel engine, considering significant engine input parameters (blending ratio, compression ratio, and load torque). Multi-objective optimization of the performance, combustion, and emission parameters is also carried out for the karanja biodiesel engine using a hybrid RSM–NSGA-II technique. The Pareto-optimal solutions are predicted by running the hybrid RSM–NSGA-II technique, and each Pareto-optimal solution has its own importance. Confirmation tests are also conducted at a few randomly selected Pareto solutions to check the authenticity of the results.
Keywords: genetic algorithm, RSM, biodiesel, karanja
Procedia PDF Downloads 306
8002 Machine Learning Assisted Performance Optimization in Memory Tiering
Authors: Derssie Mebratu
Abstract:
As a large variety of microservices, web services, social graph applications, and media applications are continuously developed, it is vital to design and build a reliable, efficient, and fast memory tiering system. Despite limited design, implementation, and deployment in the last few years, several techniques are currently being developed to improve memory tiering systems in the cloud. Some of these techniques develop an optimal scanning frequency; improve and track page movement; identify recently accessed pages; store pages across each tier; and classify pages as hot, warm, or cold, so that hot pages can be stored in the first tier, Dynamic Random Access Memory (DRAM), warm pages in the second tier, Compute Express Link (CXL) attached memory, and cold pages in the third tier, Non-Volatile Memory (NVM). Apart from the current proposals and implementations, we also develop a new technique based on a machine learning algorithm that improves throughput by 25% and latency by 95% compared to the baseline.
Keywords: machine learning, Bayesian optimization, memory tiering, CXL, DRAM
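As a rough illustration of the page-classification idea (not the authors' implementation), the sketch below learns a hot/warm/cold label from hypothetical per-page access features and maps the labels to DRAM, CXL-attached memory, and NVM. The features, the oracle labels, and the classifier choice are all assumptions made for the example.

```python
# Hot/warm/cold page classification sketch driving a three-tier placement decision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n_pages = 5000
# Hypothetical per-page features: accesses in the last scan window, scans since the
# last access, and dirty-bit rate; labels would normally come from an offline trace.
access_count = rng.poisson(3, n_pages)
scans_since_access = rng.integers(0, 50, n_pages)
dirty_rate = rng.random(n_pages)
X = np.column_stack([access_count, scans_since_access, dirty_rate])
tier_oracle = np.where(access_count > 5, 0,
                       np.where(scans_since_access < 10, 1, 2))  # 0=hot, 1=warm, 2=cold

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, tier_oracle)
placement = {0: "DRAM", 1: "CXL", 2: "NVM"}
pred = model.predict(X[:5])
print([placement[int(t)] for t in pred])
```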
Procedia PDF Downloads 96
8001 Binary Programming for Manufacturing Material and Manufacturing Process Selection Using Genetic Algorithms
Authors: Saleem Z. Ramadan
Abstract:
The material selection problem is concerned with the determination of the right material for a certain product to optimize certain performance indices of that product, such as mass, energy density, and power-to-weight ratio. This paper is concerned with optimizing the selection of the manufacturing process along with the material used in the product under performance-index and availability constraints. The material selection problem is formulated using binary programming and solved by a genetic algorithm. The objective function of the model is to minimize the total manufacturing cost under performance-index, material availability, and manufacturing process availability constraints.
Keywords: optimization, material selection, process selection, genetic algorithm
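A minimal sketch of this kind of formulation, with illustrative costs and availability data (not the paper's), is given below: binary variables select exactly one (material, process) pair, constraint violations are penalized, and a simple genetic algorithm searches for the minimum-cost feasible selection.

```python
# Binary GA for joint material/process selection under a penalty formulation.
import numpy as np

rng = np.random.default_rng(3)
n_mat, n_proc = 5, 4
cost = rng.uniform(10, 100, (n_mat, n_proc))      # manufacturing cost per (material, process)
feasible = rng.random((n_mat, n_proc)) > 0.3      # availability / performance-index screen

def fitness(bits):
    x = bits.reshape(n_mat, n_proc)
    penalty = 1e4 * abs(x.sum() - 1)               # pick exactly one pair
    penalty += 1e4 * np.sum(x * ~feasible)         # respect availability constraints
    return np.sum(cost * x) + penalty

def binary_ga(pop_size=40, gens=100, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, n_mat * n_proc))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(f)[: pop_size // 2]]          # truncation selection
        n_kids = pop_size - len(parents)
        moms = parents[rng.integers(0, len(parents), n_kids)]
        dads = parents[rng.integers(0, len(parents), n_kids)]
        cross = rng.random(moms.shape) < 0.5                   # uniform crossover
        kids = np.where(cross, moms, dads)
        flip = rng.random(kids.shape) < p_mut                  # bit-flip mutation
        kids = kids ^ flip.astype(kids.dtype)
        pop = np.vstack([parents, kids])
    f = np.array([fitness(ind) for ind in pop])
    return pop[np.argmin(f)].reshape(n_mat, n_proc), f.min()

best_x, best_cost = binary_ga()
chosen = [tuple(int(v) for v in p) for p in np.argwhere(best_x == 1)]
print("selected (material, process) pairs:", chosen, "cost:", round(float(best_cost), 2))
```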
Procedia PDF Downloads 420
8000 Buzan Mind Mapping: An Efficient Technique for Note-Taking
Authors: T. K. Tee, M. N. A. Azman, S. Mohamed, M. Muhammad, M. M. Mohamad, J. Md Yunos, M. H. Yee, W. Othman
Abstract:
Buzan mind mapping is an efficient system of note-taking that makes revision a fun thing to do for students. Tony Buzan has been teaching children all over the world for the past thirty years and has proved that mind maps are a magic formula in the classroom for everyone. The purpose of this paper is to discuss the importance of Buzan mind mapping as a note-taking technique for secondary school students. The paper also examines the mind mapping technique and the advantages and disadvantages of hand-drawn mind maps. Samples of students’ mind maps are presented and discussed.
Keywords: Buzan mind mapping, note-taking technique, hand-drawn, mind maps
Procedia PDF Downloads 538
7999 Prediction of the Lateral Bearing Capacity of Short Piles in Clayey Soils Using Imperialist Competitive Algorithm-Based Artificial Neural Networks
Authors: Reza Dinarvand, Mahdi Sadeghian, Somaye Sadeghian
Abstract:
Prediction of the ultimate bearing capacity of piles (Qu) is one of the basic issues in geotechnical engineering. So far, several methods have been used to estimate Qu, including recently developed artificial intelligence methods. In recent years, optimization algorithms such as colony algorithms, genetic algorithms, and imperialist competitive algorithms have been used to minimize artificial neural network errors. In the present research, artificial neural networks based on the imperialist competitive algorithm (ANN-ICA) were used, and their results were compared with other methods. The results of laboratory tests of short piles in clayey soils, with parameters such as pile diameter, pile buried length, load eccentricity, and undrained shear resistance of the soil, were used for modeling and evaluation. The results showed that ICA-based artificial neural networks predicted the lateral bearing capacity of short piles with a correlation coefficient of 0.9865 for the training data and 0.975 for the test data. Furthermore, the results of the model indicated the superiority of ICA-based artificial neural networks over back-propagation artificial neural networks as well as the Broms and Hansen methods.
Keywords: artificial neural network, clayey soil, imperialist competition algorithm, lateral bearing capacity, short pile
Procedia PDF Downloads 152
7998 A Qualitative Review and Meta-Analyses of Published Literature Exploring Rates and Reasons Behind the Choice of Elective Caesarean Section in Pregnant Women With No Contraindication to Trial of Labor After One Previous Caesarean Section
Authors: Risheka Suthantirakumar, Eilish Pearson, Jacqueline Woodman
Abstract:
Background: Previous research has found a variety of rates of, and reasons for, choosing medically unindicated elective repeat cesarean section (ERCS). Understanding the frequency of and reasoning behind ERCS, especially when unwarranted, could help healthcare professionals better tailor their advice and services. Therefore, our study conducted meta-analyses and qualitative analyses to identify the reasons and rates worldwide for choosing this procedure over trial of labor after cesarean (TOLAC), also referred to in the published literature as vaginal birth after cesarean (VBAC). Methods: We conducted a systematic review of published literature available on PubMed, EMBASE, and science.gov and carried out a blinded peer-review process to assess eligibility. Search terms were created in collaboration with experts in the field, and inclusion and exclusion criteria were established prior to reviewing the articles. Included studies were limited to those published in English due to author constraints, although no international boundaries were used in the search. No time limit was used in the search, in order to portray changes over time. Results: Our qualitative analyses found five consistent themes across international studies: socioeconomic and cultural differences, previous cesarean experience, perceptions of risk with vaginal birth, patients’ perceptions of future benefits, and medical advice and information. Our meta-analyses found variable rates of ERCS across international borders and within national populations. The average rate across all studies was 44% (95% CI 36–51). Discussion: The studies included in our qualitative analysis demonstrated similar recurring themes, which lends validity to the findings across the included studies. We consider the rate variation across and within national populations to be partially a result of differing inclusion and eligibility assessment between studies, and we argue that a proforma should be used so that future research is comparable.
Keywords: elective cesarean section, VBAC, TOLAC, maternal choice
Procedia PDF Downloads 111
7997 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and noise in EEG measurements, methods based on band-pass filters defined over a specific frequency band (e.g., 8–30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variations of the filtered signal and extract features that define the imagined motion. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on the decomposition of the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on the representation of EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is then used as the training vector of a global SVM classifier. Initially, the public EEG data set IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact (it has a 68% smaller dimension than the original signal), the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in computational cost relative to filtering methods based on IIR filters, suggesting the efficiency of the FFT when applied in the filtering step. Finally, the frequency decomposition approach improves the overall system classification rate significantly compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other data sets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
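A condensed sketch of this pipeline is shown below. Synthetic two-class epochs stand in for the BCI Competition IV-2a data, only four of the 33 sub-bands are used, and the Bayesian score-normalization step is folded directly into the stacking of LDA scores fed to the global SVM, so it should be read as an outline of the idea rather than the evaluated system.

```python
# SBCSP outline: FFT-domain sub-band selection -> per-band CSP + LDA -> global SVM.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs, n_ch, n_s = 250, 8, 500                       # sampling rate, channels, samples
X = rng.standard_normal((80, n_ch, n_s))          # 80 placeholder epochs
y = np.repeat([0, 1], 40)
X[y == 1, :2] += 0.5 * np.sin(2 * np.pi * 11 * np.arange(n_s) / fs)  # class-dependent rhythm

def csp_filters(Xa, Xb, m=2):
    # Classical CSP: eigenvectors of the whitened class-average covariances.
    cov = lambda E: np.mean([e @ e.T / np.trace(e @ e.T) for e in E], axis=0)
    Ca, Cb = cov(Xa), cov(Xb)
    evals, evecs = np.linalg.eig(np.linalg.solve(Ca + Cb, Ca))
    order = np.argsort(evals.real)
    return evecs[:, np.r_[order[:m], order[-m:]]].real.T

def subband(Xt, f_lo, f_hi):
    # FFT "filtering": zero every bin outside [f_lo, f_hi] and transform back.
    F = np.fft.rfft(Xt, axis=-1)
    freqs = np.fft.rfftfreq(n_s, 1 / fs)
    F[..., (freqs < f_lo) | (freqs > f_hi)] = 0
    return np.fft.irfft(F, n=n_s, axis=-1)

bands = [(4, 8), (8, 12), (12, 16), (16, 20)]     # a few of the 33 sub-bands
scores = []
for f_lo, f_hi in bands:
    Xf = subband(X, f_lo, f_hi)
    W = csp_filters(Xf[y == 0], Xf[y == 1])
    feats = np.array([np.log(np.var(W @ e, axis=1)) for e in Xf])  # log-variance features
    scores.append(LinearDiscriminantAnalysis().fit(feats, y).decision_function(feats))

meta = np.column_stack(scores)                    # per-band LDA scores as meta-features
clf = SVC(kernel="linear").fit(meta, y)
print("training accuracy of the global SVM:", clf.score(meta, y))
```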
Procedia PDF Downloads 128
7996 A Golay Pair Based Synchronization Algorithm for Distributed Multiple-Input Multiple-Output System
Authors: Weizhi Zhong, Xiaoyi Lu, Lei Xu
Abstract:
In order to solve the problem of inaccurate synchronization for distributed multiple-input multiple-output (MIMO) systems in multipath environments, a Golay-pair-aided timing synchronization method is proposed in this paper. A new synchronization training sequence based on a Golay pair is designed. By utilizing the complementary aperiodic auto-correlation property of the new training sequence, the fine timing point is obtained at the receiver. Simulation results show that, compared with traditional timing synchronization approaches, the proposed algorithm provides high synchronization accuracy, especially under multipath conditions.
Keywords: distributed MIMO system, golay pair, multipath, synchronization
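The complementary-correlation idea behind such a training sequence can be illustrated as follows; the preamble layout, sequence length, and channel model are assumptions made for the sketch, not the paper's design.

```python
# Golay complementary pair: the two aperiodic autocorrelations cancel except at zero
# lag, so summing the two matched-filter outputs yields a sharp timing peak.
import numpy as np

def golay_pair(order):
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(order):                       # standard recursive construction
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)                             # length-64 complementary pair
L = len(a)

# Embed the pair in a noisy "received" stream with an unknown delay.
rng = np.random.default_rng(0)
delay = 137
rx = rng.normal(0, 0.5, 400)
rx[delay:delay + L] += a
rx[delay + L:delay + 2 * L] += b                 # transmit a, then b, back-to-back

# Matched filtering with each half, then summing the (time-aligned) correlator outputs.
corr_a = np.correlate(rx, a, mode="valid")
corr_b = np.correlate(rx, b, mode="valid")
metric = corr_a[:-L] + corr_b[L:]                # shift b's output by L to align it with a's
print("estimated timing point:", int(np.argmax(metric)), "true delay:", delay)
```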
Procedia PDF Downloads 247
7995 Digital Material Characterization Using the Quantum Fourier Transform
Authors: Felix Givois, Nicolas R. Gauger, Matthias Kabel
Abstract:
Efficient digital material characterization is of great interest to many fields of application. It consists of the following three steps. First, a 3D reconstruction of 2D scans must be performed. Then, the resulting gray-value image of the material sample is enhanced by image processing methods. Finally, partial differential equations (PDEs) are solved on the segmented image, and by averaging the resulting solution fields, effective properties like stiffness or conductivity can be computed. Due to the high resolution of current CT images, the latter is typically performed with matrix-free solvers. Among them, a solver that uses the explicit formula of the Green-Eshelby operator in Fourier space has been proposed by Moulinec and Suquet. Its most complex algorithmic part is the Fast Fourier Transform (FFT). In our talk, we will discuss the potential quantum advantage that can be obtained by replacing the FFT with the Quantum Fourier Transform (QFT). We will especially show that the data transfer for noisy intermediate-scale quantum (NISQ) devices can be improved by using appropriate boundary conditions for the PDE, which also allows the use of semi-classical versions of the QFT. In the end, we will compare the results of the QFT-based algorithm for simple geometries with the results of the FFT-based homogenization method.
Keywords: maximum likelihood quantum amplitude estimation (MLQAE), numerical homogenization, quantum Fourier transform (QFT), NISQ devices
Procedia PDF Downloads 78
7994 Research on Energy Field Intervening in Lost Space Renewal Strategy
Authors: Tianyue Wan
Abstract:
Lost space, a concept proposed by Roger Trancik, is space that has not been used for a long time and is in decline. In his book Finding Lost Space: Theories of Urban Design, lost space is defined as those anti-traditional spaces that are unpleasant, need to be redesigned, and bring no benefit to the environment or users; they have no defined boundaries and do not connect the various landscape elements in a coherent way. With the rapid development of urbanization in China, the blind spots of urban renewal have become chaotic lost spaces that are incompatible with rapid urbanization, so lost space urgently needs to be reconstructed against the background of infill development and reduction planning in China. The formation of lost space is also an invisible division of social hierarchy. This paper tries to break down social class division and the estrangement between people through the regeneration of lost space, ultimately enhancing vitality, rebuilding a sense of belonging, and creating a continuous open public space for local people. Based on the concepts of lost space and the energy field, this paper clarifies the significance of the energy field in lost-space renovation and then introduces the energy field into lost space, using the magnetic field in physics as a prototype. The construction of the energy field is supported by space theory, spatial morphology analysis theory, public communication theory, urban diversity theory, and city image theory. Taking Wuhan's Lingjiao Park in China as an example, this paper chooses the lost space on the west side of the park as the research object. According to the current situation of the site, energy intervention strategies are proposed from four aspects: natural ecology, space rights, intangible cultural heritage, and infrastructure configuration. Six specific lost-space renewal methods are used in this work: "riveting", "breakthrough", "radiation", "inheritance", "connection", and "intersection". After the renovation, the space will be re-introduced to an active crowd. The integration of activities and space creates a sense of place, improves the walking experience, restores the vitality of the space, and provides a reference for the reconstruction of lost space in the city.
Keywords: dynamic vitality intervention, lost space, space vitality, sense of place
Procedia PDF Downloads 112
7993 Multi-Objective Optimization of an Aerodynamic Feeding System Using Genetic Algorithm
Authors: Jan Busch, Peter Nyhuis
Abstract:
Considering the challenges of short product life cycles and growing variant diversity, cost minimization and manufacturing flexibility increasingly gain importance to maintain a competitive edge in today’s global and dynamic markets. In this context, an aerodynamic part feeding system for high-speed industrial assembly applications has been developed at the Institute of Production Systems and Logistics (IFA), Leibniz Universitaet Hannover. The aerodynamic part feeding system outperforms conventional systems with respect to its process safety, reliability, and operating speed. In this paper, a multi-objective optimisation of the aerodynamic feeding system regarding the orientation rate, the feeding velocity and the required nozzle pressure is presented.
Keywords: aerodynamic feeding system, genetic algorithm, multi-objective optimization, workpiece orientation
Procedia PDF Downloads 577
7992 Optimization of Technical and Technological Solutions for the Development of Offshore Hydrocarbon Fields in the Kaliningrad Region
Authors: Pavel Shcherban, Viktoria Ivanova, Alexander Neprokin, Vladislav Golovanov
Abstract:
Currently, LLC «Lukoil-Kaliningradmorneft» is implementing a comprehensive program for the development of the offshore fields of the Kaliningrad region. This is largely associated with the depletion of the onshore resource base of the region, the positive results of geological exploration of the surrounding Baltic Sea area, and the data on the volume of hydrocarbon recovery from the single offshore field currently operating in the Kaliningrad region, D-6 «Kravtsovskoye». The article analyzes the main stages of LLC «Lukoil-Kaliningradmorneft»'s program for developing the hydrocarbon resources of the region's shelf and suggests an optimization algorithm that allows managing the multi-criteria process of developing shelf deposits. The algorithm is formulated as a sequential decision-making problem, which is a branch of dynamic programming. Applying the algorithm during the consolidation of the initial data, the elaboration of project documentation, and the further exploration and development of offshore fields will make it possible to optimize the set of technical and technological solutions and increase the economic efficiency of the field development project implemented by LLC «Lukoil-Kaliningradmorneft».
Keywords: offshore fields of hydrocarbons of the Baltic Sea, development of offshore oil and gas fields, optimization of the field development scheme, solution of multicriteria tasks in oil and gas complex, quality management in oil and gas complex
Procedia PDF Downloads 200
7991 Enhancement of Density-Based Spatial Clustering Algorithm with Noise for Fire Risk Assessment and Warning in Metro Manila
Authors: Pinky Mae O. De Leon, Franchezka S. P. Flores
Abstract:
This study focuses on applying an enhanced density-based spatial clustering algorithm with noise (DBSCAN) for fire risk assessment and warning in Metro Manila. Unlike other clustering algorithms, DBSCAN is known for its ability to identify arbitrarily shaped clusters and its resistance to noise. However, its performance diminishes when handling high-dimensional data, where it can read noise points as relevant data points. The algorithm is also dependent on the parameters (eps and minPts) set by the user; choosing the wrong parameters can greatly affect its clustering result. To overcome these challenges, the study proposes three key enhancements: first, utilize multiple MinHash functions and locality-sensitive hashing to decrease the dimensionality of the data set; second, apply Jaccard similarity before applying the parameter epsilon to ensure that only similar data points are considered neighbors; and third, use the concept of the Jaccard neighborhood along with the parameter minPts to improve the classification of core points and the identification of noise in the data set. The results show that the modified DBSCAN algorithm outperformed three other clustering methods: it achieved fewer outliers, which facilitated a clearer identification of fire-prone areas; a high Silhouette score, indicating well-separated clusters that distinctly identify areas with potential fire hazards; and a low Davies-Bouldin index together with a high Calinski-Harabasz score, highlighting its ability to form compact and well-defined clusters, making it an effective tool for assessing fire hazard zones. This study is intended for assessing the areas in Metro Manila that are most prone to fire risk.
Keywords: DBSCAN, clustering, Jaccard similarity, MinHash LSH, fires
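A compact sketch of the three enhancements, on toy token-set records and assuming the third-party datasketch package for MinHash/LSH, might look like the following; the thresholds and attributes are illustrative, not the study's. LSH prunes the candidate neighbours, the exact Jaccard test plays the role of epsilon, and a point is treated as a core point when its Jaccard neighbourhood reaches minPts.

```python
# Jaccard-neighbourhood DBSCAN sketch with MinHash + LSH candidate pruning.
from datasketch import MinHash, MinHashLSH

records = {
    "r1": {"barangay_a", "residential", "electrical", "night"},
    "r2": {"barangay_a", "residential", "electrical", "evening"},
    "r3": {"barangay_a", "residential", "open_flame", "night"},
    "r4": {"barangay_b", "industrial", "chemical", "day"},
    "r5": {"barangay_b", "industrial", "chemical", "noon"},
    "r6": {"barangay_b", "industrial", "chemical", "night"},
    "r7": {"barangay_c", "grassfire", "day"},
}
EPS_JACCARD, MIN_PTS, NUM_PERM = 0.45, 2, 128

def minhash(tokens):
    m = MinHash(num_perm=NUM_PERM)
    for t in tokens:
        m.update(t.encode("utf8"))
    return m

sigs = {k: minhash(v) for k, v in records.items()}
lsh = MinHashLSH(threshold=EPS_JACCARD, num_perm=NUM_PERM)
for k, m in sigs.items():
    lsh.insert(k, m)

def jaccard(a, b):
    return len(records[a] & records[b]) / len(records[a] | records[b])

def neighbours(k):
    # LSH prunes candidates; the exact Jaccard test keeps only true eps-neighbours.
    return {c for c in lsh.query(sigs[k]) if c != k and jaccard(k, c) >= EPS_JACCARD}

labels, cluster = {}, 0
for k in records:                        # DBSCAN-style expansion over Jaccard neighbourhoods
    if k in labels:
        continue
    seeds = neighbours(k)
    if len(seeds) < MIN_PTS:
        labels[k] = -1                   # noise (may later become a border point)
        continue
    cluster += 1
    labels[k] = cluster
    frontier = list(seeds)
    while frontier:
        q = frontier.pop()
        if labels.get(q) == -1:
            labels[q] = cluster          # previously noise -> border point
        if q in labels:
            continue
        labels[q] = cluster
        q_seeds = neighbours(q)
        if len(q_seeds) >= MIN_PTS:      # q is itself a core point: keep expanding
            frontier.extend(q_seeds)

print(labels)                            # cluster id per record, -1 = noise
```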
Procedia PDF Downloads 1