Search results for: nonlinear statistical techniques
10427 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. 
By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
Procedia PDF Downloads 75
10426 Exploring the Techniques of Achieving Structural Electrical Continuity for Gas Plant Facilities
Authors: Abdulmohsen Alghadeer, Fahad Al Mahashir, Loai Al Owa, Najim Alshahrani
Abstract:
Electrical continuity of steel structure members is an essential condition to ensure equipotentiality and ultimately to protect personnel and assets in industrial facilities. The steel structure is electrically connected to provide a low-resistance path to earth through equipotential bonding, to prevent sparks and fires in the event of fault currents and to avoid malfunction of the plant, with detrimental consequences to the local and global environment. The oil and gas industry commonly establishes steel structure electrical continuity through bare-surface connection of coated steel members. This paper presents information pertaining to a real case of exploring and applying different techniques to achieve electrical continuity when erecting steel structures at a gas plant facility. A project was supplied with fully coated steel members, including at the connection surfaces, causing electrical discontinuity. This was observed after a considerable number of steel members had already been received at the job site and erected. The case was therefore resolved using different techniques, such as bolt tightening and torqueing, chemical paint stripping, and single-point jumpers. These techniques are studied comparatively with respect to their applicability, workability, and time and cost advantages and disadvantages.
Keywords: coated steel, electrical continuity, equipotential bonding, galvanized steel, gas plant facility, lightning protection, steel structure
Procedia PDF Downloads 128
10425 Credit Risk Assessment Using Rule Based Classifiers: A Comparative Study
Authors: Salima Smiti, Ines Gasmi, Makram Soui
Abstract:
Credit risk is the most important issue for financial institutions. Its assessment is an important task used to predict defaulting customers and to classify customers as good or bad payers. To this end, numerous techniques have been applied to credit risk assessment. However, to our knowledge, many evaluation techniques are black-box models, such as neural networks and SVMs, which assign applicants to classes without any explanation. In this paper, we propose to assess credit risk using a rule-based classification method. Our output is a set of rules which describe and explain the decision. To this end, we compare seven classification algorithms (JRip, Decision Table, OneR, ZeroR, Fuzzy Rule, PART and Genetic Programming (GP)), where the goal is to find the best rules satisfying several criteria: accuracy, sensitivity, and specificity. The obtained results confirm the efficiency of the GP algorithm on the German and Australian datasets, compared to other rule-based techniques, for predicting credit risk.
Keywords: credit risk assessment, classification algorithms, data mining, rule extraction
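The rule-extraction idea above can be illustrated with the simplest of the compared algorithms, OneR, which keeps a single attribute's value-to-majority-class rule. A minimal Python sketch (the study itself uses WEKA-style implementations; the toy applicant data below is hypothetical):

```python
# Minimal OneR (one-rule) classifier sketch: for each attribute, build a rule
# mapping each attribute value to its majority class, then keep the single
# attribute whose rule makes the fewest training errors.
from collections import Counter

def one_r(rows, labels):
    """rows: tuples of categorical attribute values; labels: class per row."""
    best = None
    for a in range(len(rows[0])):
        # Majority class for each observed value of attribute a.
        by_value = {}
        for row, y in zip(rows, labels):
            by_value.setdefault(row[a], Counter())[y] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(rule[row[a]] != y for row, y in zip(rows, labels))
        if best is None or errors < best[2]:
            best = (a, rule, errors)
    return best  # (attribute index, value -> class rule, training errors)

# Hypothetical applicants: (employment, savings) -> good/bad payer.
rows = [("stable", "high"), ("stable", "low"), ("none", "low"), ("none", "high")]
labels = ["good", "good", "bad", "bad"]
attr, rule, errors = one_r(rows, labels)
```

Here the employment attribute alone classifies the toy data perfectly, so OneR selects it; the resulting rule set is exactly the kind of human-readable explanation the abstract contrasts with black-box models.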
Procedia PDF Downloads 181
10424 Testing and Validation of Stochastic Models in Epidemiology
Authors: Snigdha Sahai, Devaki Chikkavenkatappa Yellappa
Abstract:
This study outlines approaches for testing and validating stochastic models used in epidemiology, focusing on the integration and functional testing of simulation code. It details methods for combining simple functions into comprehensive simulations, distinguishing between deterministic and stochastic components, and applying tests to ensure robustness. Techniques include isolating stochastic elements, utilizing large sample sizes for validation, and handling special cases. Practical examples are provided using R code to demonstrate integration testing, handling of incorrect inputs, and special cases. The study emphasizes the importance of both functional and defensive programming to enhance code reliability and user-friendliness.
Keywords: computational epidemiology, epidemiology, public health, infectious disease modeling, statistical analysis, health data analysis, disease transmission dynamics, predictive modeling in health, population health modeling, quantitative public health, random sampling simulations, randomized numerical analysis, simulation-based analysis, variance-based simulations, algorithmic disease simulation, computational public health strategies, epidemiological surveillance, disease pattern analysis, epidemic risk assessment, population-based health strategies, preventive healthcare models, infection dynamics in populations, contagion spread prediction models, survival analysis techniques, epidemiological data mining, host-pathogen interaction models, risk assessment algorithms for disease spread, decision-support systems in epidemiology, macro-level health impact simulations, socioeconomic determinants in disease spread, data-driven decision making in public health, quantitative impact assessment of health policies, biostatistical methods in population health, probability-driven health outcome predictions
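The testing ideas described here, isolating the stochastic element with a fixed seed, validating against a theoretical mean over a large sample, and defensive checks on bad input, can be sketched as follows. The study's examples are in R; this is an illustrative Python analogue, and the infection-step function is a hypothetical stand-in, not code from the study:

```python
# (1) The deterministic counterpart is tested exactly; (2) the stochastic part
# is validated on a large sample against its theoretical mean, within a
# tolerance; (3) a defensive wrapper rejects invalid input early.
import random

def new_infections(susceptible, infected, beta, rng):
    """Stochastic step: each susceptible is infected with p = beta*infected/1000."""
    p = min(1.0, beta * infected / 1000.0)
    return sum(1 for _ in range(susceptible) if rng.random() < p)

def expected_new_infections(susceptible, infected, beta):
    """Deterministic counterpart used as the validation target."""
    return susceptible * min(1.0, beta * infected / 1000.0)

def checked_new_infections(susceptible, infected, beta, rng):
    """Defensive programming: fail fast rather than propagate nonsense."""
    if susceptible < 0 or infected < 0 or beta < 0:
        raise ValueError("population counts and beta must be non-negative")
    return new_infections(susceptible, infected, beta, rng)

rng = random.Random(42)  # a fixed seed isolates the stochastic element
draws = [new_infections(500, 100, 2.0, rng) for _ in range(2000)]
sample_mean = sum(draws) / len(draws)
target = expected_new_infections(500, 100, 2.0)  # 500 * 0.2 = 100
```

The large-sample check (2000 replicates against the binomial mean) is the paper's "utilizing large sample sizes for validation" idea in miniature.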
Procedia PDF Downloads 8
10423 A Variant of Newton's Method with Free Second-Order Derivative
Authors: Young Hee Geum
Abstract:
In this paper, we present an iterative method for solving nonlinear equations and determine the control parameters that make it converge cubically. In addition, we derive the asymptotic error constant.
Keywords: asymptotic error constant, iterative method, multiple root, root-finding, order of convergence
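As an illustration of a cubically convergent iteration that avoids the second derivative, here is the classical Potra-Ptak two-step scheme; this is a well-known method of this family, not necessarily the variant proposed in the paper:

```python
def potra_ptak(f, fprime, x0, tol=1e-12, max_iter=50):
    """Two-step Newton-type iteration:
       y_n = x_n - f(x_n)/f'(x_n)
       x_{n+1} = x_n - (f(x_n) + f(y_n)) / f'(x_n)
    Third-order convergent for simple roots, using only f and f'."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = fprime(x)
        y = x - fx / d          # plain Newton step
        x = x - (fx + f(y)) / d  # corrected step, no f'' needed
    return x

# Solve x^3 = 2 starting from x0 = 1.
root = potra_ptak(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, 1.0)
```

Each iteration costs two function evaluations and one derivative evaluation, which is the usual trade-off for raising Newton's quadratic order to cubic without computing f''.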
Procedia PDF Downloads 294
10422 Surface Roughness Modeling in Dry Face Milling of Annealed and Hardened AISI 52100 Steel
Authors: Mohieddine Benghersallah, Mohamed Zakaria Zahaf, Ali Medjber, Idriss Tibakh
Abstract:
The objective of this study is to analyse the effects of cutting parameters on surface roughness in dry face milling using statistical techniques. We studied the effect of the microstructure of AISI 52100 steel on machinability before and after hardening. The machining tests were carried out on a high-rigidity vertical milling machine with a 25 mm diameter face milling cutter equipped with micro-grain carbide inserts with a PVD (Ti,Al)N coating in grade GC1030. A Taguchi L9 experimental plan was adopted. Analysis of variance (ANOVA) was used to determine the effects of the cutting parameters (Vc, fz, ap) on the roughness (Ra) of the machined surface. Regression analysis yielded mathematical models of roughness and the parameter combinations that minimize it. The recorded results show that feed per tooth has the most significant effect on the surface condition for both steel treatment conditions. The lowest roughness values were obtained for the hardened AISI 52100 steel.
Keywords: machinability, heat treatment, microstructure, surface roughness, Taguchi method
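The Taguchi/ANOVA workflow can be sketched with a main-effects calculation on an L9 array. The (Vc, fz, ap) level assignments and Ra values below are hypothetical, chosen only so that feed per tooth dominates, as the study reports; they are not the paper's data:

```python
# Main-effects sketch for a Taguchi L9 plan: compute the mean response per
# factor level; the factor with the largest level-mean range (delta) is the
# most influential one.
L9 = [  # (Vc level, fz level, ap level, Ra in micrometres) -- hypothetical
    (1, 1, 1, 0.42), (1, 2, 2, 0.71), (1, 3, 3, 1.05),
    (2, 1, 2, 0.45), (2, 2, 3, 0.76), (2, 3, 1, 1.01),
    (3, 1, 3, 0.40), (3, 2, 1, 0.69), (3, 3, 2, 1.08),
]

def mean_by_level(runs, factor):
    out = {}
    for run in runs:
        out.setdefault(run[factor], []).append(run[3])
    return {lvl: sum(v) / len(v) for lvl, v in out.items()}

def effect_range(runs, factor):
    """Delta = max - min of the level means; a larger delta means the factor
    shifts the response more."""
    m = mean_by_level(runs, factor)
    return max(m.values()) - min(m.values())

deltas = {name: effect_range(L9, i) for i, name in enumerate(["Vc", "fz", "ap"])}
```

With these numbers the fz delta is roughly 0.62 um against about 0.02-0.04 um for Vc and ap, which is the pattern a main-effects plot would show when feed per tooth controls Ra.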
Procedia PDF Downloads 147
10421 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms
Authors: Naina Mahajan, Bikram Pal Kaur
Abstract:
The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but suitable traffic engineering and management can reduce the accident rate to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to an NH (national highway) dataset. These techniques give good results, with high accuracy, low computation time and a low error rate.
Keywords: C4.5, ID3, NH (national highway), WEKA data mining tool
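The splitting criterion shared by ID3 and C4.5 is information gain (C4.5 normalizes it to a gain ratio): the attribute that reduces class entropy the most becomes the split node. A sketch on hypothetical accident records; the study itself uses the WEKA implementations:

```python
# Entropy / information-gain sketch underlying ID3 and C4.5.
import math

def entropy(labels):
    n = len(labels)
    counts = {y: labels.count(y) for y in set(labels)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def info_gain(rows, labels, attr):
    """Reduction in entropy after partitioning rows on attribute index attr."""
    total = entropy(labels)
    n = len(rows)
    remainder = 0.0
    for value in {r[attr] for r in rows}:
        sub = [y for r, y in zip(rows, labels) if r[attr] == value]
        remainder += len(sub) / n * entropy(sub)
    return total - remainder

# (road_surface, light) -> severity; hypothetical NH-style records.
rows = [("wet", "night"), ("wet", "day"), ("dry", "night"), ("dry", "day")]
labels = ["severe", "severe", "minor", "minor"]
gains = [info_gain(rows, labels, a) for a in range(2)]
```

In this toy data the road-surface attribute separates the classes perfectly (gain 1 bit) while lighting carries no information (gain 0), so a decision tree would split on road surface first.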
Procedia PDF Downloads 338
10420 On the Convergence of the Mixed Integer Randomized Pattern Search Algorithm
Authors: Ebert Brea
Abstract:
We propose a novel direct search algorithm for identifying at least a local minimum of mixed integer nonlinear unconstrained optimization problems. The Mixed Integer Randomized Pattern Search Algorithm (MIRPSA), so named by the author, is based on a randomized pattern search, which it modifies to find at least a local minimum of the problem. The MIRPSA adds two main operations to the randomized pattern search: a moving operation and a shrinking operation. Each operation is carried out by the algorithm when a set of conditions holds. The convergence properties of the MIRPSA are analyzed using a Markov chain approach, represented by a countably infinite state space λ, where each state d(q) is defined by a measure of the qth randomized pattern search Hq, for all q in N. According to the algorithm, when a moving operation is carried out on the qth randomized pattern search Hq, the MIRPSA holds its state. Meanwhile, if the MIRPSA carries out a shrinking operation over the qth randomized pattern search Hq, the algorithm visits the next state; that is, a shrinking operation at the qth state changes it into the (q+1)th state. It is worth pointing out that the MIRPSA never returns to a visited state, because it only advances from one state to the next by shrinking operations. In this article, we describe the MIRPSA for mixed integer nonlinear unconstrained optimization problems and study its convergence properties in depth from a Markov chain viewpoint. We include a low-dimension case showing more details of the MIRPSA, in which the algorithm is used to identify the minimum of a mixed integer quadratic function. Numerical examples are also shown in order to measure the performance of the MIRPSA.
Keywords: direct search, mixed integer optimization, random search, convergence, Markov chain
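The moving and shrinking operations described above can be sketched for a purely continuous randomized pattern search; the MIRPSA itself also handles integer variables and has its own operation conditions, so this simplified version is illustrative only:

```python
# Toy randomized pattern search: a moving operation (keep the pattern measure,
# accept a better point) when some random candidate improves; a shrinking
# operation (halve the pattern measure) when none of them does. Each shrink
# corresponds to advancing one state in the Markov-chain description.
import random

def randomized_pattern_search(f, x0, step=1.0, n_dirs=15, min_step=1e-5, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    while step > min_step:
        improved = False
        for _ in range(n_dirs):  # randomized pattern around the current point
            cand = [xi + step * rng.uniform(-1.0, 1.0) for xi in x]
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True  # moving operation
                break
        if not improved:
            step *= 0.5                           # shrinking operation
    return x, fx

xmin, fmin = randomized_pattern_search(
    lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2, [0.0, 0.0])
```

Because the step size only ever shrinks, the state index q is non-decreasing, which is the "never returns to a visited state" property the abstract exploits in the convergence analysis.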
Procedia PDF Downloads 470
10419 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of the speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without intention or consciousness of doing it, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., ‘tokibu’, ‘tipolu’), with no pauses between them (e.g., ‘tokibutipolugopilatokibu’) and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented (‘tokibu’) from new sequences never presented together during exposure (‘kipopi’), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the result of the learning that occurred during the previous exposure phase, and that might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP).
One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age-matched controls with typical language development (TLD), who were exposed to an auditory stream embedding eight three-syllable nonsense words, four presenting high TPs and the other four low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words’ predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children in the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP ‘words’. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
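The transitional-probability statistic central to this design, TP(A -> B) = count(AB) / count(A), can be computed directly from a syllable stream. A sketch using the abstract's example triplets ('tokibu', 'tipolu', 'gopila'); the stream construction is illustrative, not the experimental stimulus list:

```python
# Within-word TPs are high (here 1.0: 'to' is always followed by 'ki'), while
# TPs across word boundaries are low, which is the cue that allows segmentation.
from collections import Counter
import random

def transitional_probabilities(syllables):
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

rng = random.Random(0)
words = [["to", "ki", "bu"], ["ti", "po", "lu"], ["go", "pi", "la"]]
stream = []
for _ in range(300):            # concatenate words in random order, no pauses
    stream.extend(rng.choice(words))
tps = transitional_probabilities(stream)
```

With three words chosen at random, a boundary TP such as TP('bu' -> 'ti') hovers around 1/3, against 1.0 inside a word, mirroring the high-TP/low-TP manipulation in the study.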
Procedia PDF Downloads 188
10418 A Developmental Survey of Local Stereo Matching Algorithms
Authors: André Smith, Amr Abdel-Dayem
Abstract:
This paper presents an overview of the history and development of stereo matching algorithms. Details from their inception up to relatively recent techniques are described, noting the challenges that have been surmounted across the past decades. Different components of these algorithms are explored, though the focus is on local matching techniques. While global approaches have existed for some time and have demonstrated greater accuracy than their local counterparts, they are generally quite slow. Many strides have been made more recently, allowing local methods to catch up in terms of accuracy without sacrificing overall performance.
Keywords: developmental survey, local stereo matching, rectification, stereo correspondence
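A minimal example of the local matching techniques surveyed: window-based matching along a rectified scanline with a sum-of-absolute-differences (SAD) cost. The 1-D intensity data below is hypothetical; real implementations work on 2-D windows over image rows:

```python
# For each pixel of the left scanline, slide a window along the same row of the
# right image and keep the disparity with the smallest SAD cost.
def sad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def match_scanline(left, right, window=3, max_disp=4):
    half = window // 2
    disparities = []
    for i in range(half, len(left) - half):
        patch = left[i - half:i + half + 1]
        best = min((d for d in range(max_disp + 1) if i - d - half >= 0),
                   key=lambda d: sad(patch, right[i - d - half:i - d + half + 1]))
        disparities.append(best)
    return disparities

# The right scanline is the left one shifted by 2 pixels: true disparity = 2.
left  = [0, 0, 10, 80, 90, 20, 0, 0, 0, 0]
right = [10, 80, 90, 20, 0, 0, 0, 0, 0, 0]
disp = match_scanline(left, right)
```

Note how the estimate is reliable only where the window covers texture; in the flat regions the cost is ambiguous and the smallest disparity wins by default, which is exactly the weakness of purely local methods that the survey discusses.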
Procedia PDF Downloads 293
10417 Some Inequalities Related with Starlike Log-Harmonic Mappings
Authors: Melike Aydoğan, Dürdane Öztürk
Abstract:
Let H(D) be the linear space of all analytic functions defined on the open unit disc D. A log-harmonic mapping is a solution of the nonlinear elliptic partial differential equation \overline{f_{\bar z}}/\overline{f} = w f_z/f, where w(z) ∈ H(D) is the second dilatation, such that |w(z)| < 1 for all z ∈ D. The aim of this paper is to establish some inequalities for starlike log-harmonic functions of order α (0 ≤ α ≤ 1).
Keywords: starlike log-harmonic functions, univalent functions, distortion theorem
Procedia PDF Downloads 526
10416 Seismic Performance of Concrete Moment Resisting Frames in Western Canada
Authors: Ali Naghshineh, Ashutosh Bagchi
Abstract:
Performance-based seismic design concepts are increasingly being adopted in various jurisdictions. While the National Building Code of Canada (NBCC) is not fully performance-based, it provides some features of a performance-based code, such as displacement control and objective-based solutions. Performance evaluation is an important part of performance-based design. In this paper, the seismic performance of a set of code-designed 4-, 8- and 12-story moment resisting concrete frames located in Victoria, BC, in the western part of Canada, has been studied at different hazard levels, namely SLE (Service Level Event), DLE (Design Level Event) and MCE (Maximum Considered Event). The seismic performance of these buildings has been evaluated based on FEMA 356 and ATC 72 procedures and on nonlinear time history analysis. Pushover analysis has been used to investigate the different performance levels of these buildings and to adjust their design based on the corresponding target displacements. Since pushover analysis ignores higher mode effects, nonlinear dynamic time history analysis using a set of ground motion records has been performed. Different types of ground motion records, such as crustal and subduction earthquake records, have been used to determine their effects. Results obtained from pushover analysis for inter-story drift, displacement, shear and overturning moment are compared to those from the dynamic analysis.
Keywords: seismic performance, performance-based design, concrete moment resisting frame, crustal earthquakes, subduction earthquakes
Procedia PDF Downloads 264
10415 Advanced Statistical Approaches for Identifying Predictors of Poor Blood Pressure Control: A Comprehensive Analysis Using Multivariable Logistic Regression and Generalized Estimating Equations (GEE)
Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei
Abstract:
Effective management of hypertension remains a critical public health challenge, particularly among racially and ethnically diverse populations. This study employs sophisticated statistical models to rigorously investigate the predictors of poor blood pressure (BP) control, with a specific focus on demographic, socioeconomic, and clinical risk factors. Leveraging a large sample of 19,253 adults drawn from the National Health and Nutrition Examination Survey (NHANES) across three distinct time periods (2013-2014, 2015-2016, and 2017-2020), we applied multivariable logistic regression and generalized estimating equations (GEE) to account for the clustered structure of the data and potential within-subject correlations. Our multivariable models identified significant associations between poor BP control and several key predictors, including race/ethnicity, age, gender, body mass index (BMI), prevalent diabetes, and chronic kidney disease (CKD). Non-Hispanic Black individuals consistently exhibited higher odds of poor BP control across all periods (OR = 1.99; 95% CI: 1.69, 2.36 for the overall sample; OR = 2.33; 95% CI: 1.79, 3.02 for 2017-2020). Younger age groups demonstrated substantially lower odds of poor BP control compared to individuals aged 75 and older (OR = 0.15; 95% CI: 0.11, 0.20 for ages 18-44). Men also had a higher likelihood of poor BP control relative to women (OR = 1.55; 95% CI: 1.31, 1.82), while BMI ≥35 kg/m² (OR = 1.76; 95% CI: 1.40, 2.20) and the presence of diabetes (OR = 2.20; 95% CI: 1.80, 2.68) were associated with increased odds of poor BP management. Further analysis using GEE models, accounting for temporal correlations and repeated measures, confirmed the robustness of these findings. Notably, individuals with chronic kidney disease displayed markedly elevated odds of poor BP control (OR = 3.72; 95% CI: 3.09, 4.48), with significant differences across the survey periods. 
Additionally, higher education levels and better self-reported diet quality were associated with improved BP control. College graduates exhibited a reduced likelihood of poor BP control (OR = 0.64; 95% CI: 0.46, 0.89), particularly in the 2015-2016 period (OR = 0.48; 95% CI: 0.28, 0.84). Similarly, excellent dietary habits were associated with significantly lower odds of poor BP control (OR = 0.64; 95% CI: 0.44, 0.94), underscoring the importance of lifestyle factors in hypertension management. In conclusion, our findings provide compelling evidence of the complex interplay between demographic, clinical, and socioeconomic factors in predicting poor BP control. The application of advanced statistical techniques such as GEE enhances the reliability of these results by addressing the correlated nature of repeated observations. This study highlights the need for targeted interventions that consider racial/ethnic disparities, clinical comorbidities, and lifestyle modifications in improving BP control outcomes.
Keywords: hypertension, blood pressure, NHANES, generalized estimating equations
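For readers unfamiliar with how the reported odds ratios relate to the fitted models: in logistic regression, OR = exp(beta) and the 95% CI is exp(beta +/- 1.96*SE). The beta and SE below are back-calculated illustrations that approximately reproduce the paper's headline estimate (OR = 1.99, 95% CI 1.69-2.36); they are not the study's fitted coefficients:

```python
# Convert a logistic-regression coefficient and its standard error into an
# odds ratio with a Wald 95% confidence interval.
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Return (OR, CI lower, CI upper) for a log-odds coefficient."""
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))

# Hypothetical beta/SE pair chosen to land near OR = 1.99 (1.69-2.36).
or_, lo, hi = odds_ratio_ci(beta=0.688, se=0.085)
```

The same transform applies to every OR in the abstract; protective factors simply have negative beta, hence OR below 1 (e.g., exp(-0.446) is about the 0.64 reported for college graduates).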
Procedia PDF Downloads 12
10414 Fat-Tail Test of Regulatory DNA Sequences
Authors: Jian-Jun Shu
Abstract:
The statistical properties of cis-regulatory modules (CRMs) are explored by estimating the occurrence distribution of similar-word sets. It is observed that CRMs tend to have a fat-tail distribution for similar-word set occurrence. Thus, a fat-tail test with two fatness coefficients is proposed to distinguish CRMs from non-CRMs, especially from exons. For the first fatness coefficient, the separation accuracy between CRMs and exons is increased as compared with the existing content-based CRM prediction method, the fluffy-tail test. For the second fatness coefficient, the computing time is reduced as compared with the fluffy-tail test, making it very suitable for long sequences and large-database analysis in the post-genome era. Moreover, these indexes may be used to predict CRMs which have not yet been observed experimentally. This can serve as a valuable filtering process for experiments.
Keywords: statistical approach, transcription factor binding sites, cis-regulatory modules, DNA sequences
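The abstract does not define its two fatness coefficients, so as a generic illustration of how tail fatness of an occurrence distribution can be quantified, here is an excess-kurtosis sketch on hypothetical similar-word counts; it is not the paper's statistic:

```python
# Excess kurtosis: 0 for a Gaussian, negative for thin-tailed data, large and
# positive when a few values dominate (a fat tail).
def excess_kurtosis(xs):
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    if var == 0:
        return 0.0
    return sum((x - m) ** 4 for x in xs) / (n * var ** 2) - 3.0

thin = [5, 6, 5, 4, 5, 6, 4, 5, 5, 5]   # occurrence counts, narrow spread
fat  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 40]  # one similar-word set dominates
```

A CRM-like fat-tailed count vector scores far above a thin-tailed (exon-like) one, which is the separation the fat-tail test formalizes with its own coefficients.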
Procedia PDF Downloads 290
10413 Reducing Power Consumption in Network on Chip Using Scramble Techniques
Authors: Vinayaga Jagadessh Raja, R. Ganesan, S. Ramesh Kumar
Abstract:
An ever more significant fraction of the overall power dissipation of a network-on-chip (NoC) based system-on-chip (SoC) is due to the interconnection scheme. In fact, as technology shrinks, the power contribution of NoC links starts to compete with that of NoC routers. In this paper, we propose the use of clock gating in the data encoding techniques as a viable way to reduce both the power dissipation and the time consumption of NoC links. The proposed scrambling scheme exploits wormhole switching: flits are scrambled by the network interface (NI) before they are injected into the network and are decoded by the target NI. This makes the scheme transparent to the underlying network, since the encoder and decoder logic is integrated in the NI and no modification of the routers' structural design is required. We evaluate the proposed scrambling scheme on a set of representative data streams (both synthetic and extracted from real applications), showing that it is possible to reduce the power contribution of both the self-switching activity and the coupling switching activity in inter-router links.
Keywords: Xilinx 12.1, power consumption, encoder, NoC
Procedia PDF Downloads 400
10412 Classification on Statistical Distributions of a Complex N-Body System
Authors: David C. Ni
Abstract:
Contemporary models for N-body systems are based on temporal, two-body, and mass-point representations of Newtonian mechanics. Other mainstream models include 2D and 3D Ising models based on local neighborhoods of lattice structures. In quantum mechanics, theories of collective modes address superconductivity and long-range quantum entanglement. However, these models still mainly target specific phenomena with a set of designated parameters. We are therefore motivated to develop a new construction directly from complex-variable N-body systems based on the extended Blaschke functions (EBF), which represent a non-temporal and nonlinear extension of the Lorentz transformation on the complex plane, the normalized momentum space. A point on the complex plane represents a normalized state of particle momenta observed from a reference frame in the theory of special relativity. There are only two key parameters for modelling: normalized momentum and nonlinearity. An algorithm similar to the Jenkins-Traub method is adopted for solving the EBF iteratively. Through iteration, the solution sets take the form σ + i[-t, t], where σ and t are real numbers, and [-t, t] shows various distributions, such as 1-peak, 2-peak, and 3-peak distributions, some of which are analogous to the canonical distributions. The results of the numerical analysis demonstrate continuum-to-discreteness transitions, evolutional invariance of distributions, phase transitions with conjugate symmetry, etc., which manifest the construction as a potential candidate for the unification of statistics. We hereby classify the observed distributions on the finite convergence domains. Continuous and discrete distributions both exist and are predictable for given partitions in different regions of the parameter pair. We further compare these distributions with canonical distributions and address the impacts on existing applications.
Keywords: Blaschke, Lorentz transformation, complex variables, continuous, discrete, canonical, classification
Procedia PDF Downloads 309
10411 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System
Authors: Ying-Pin Chang
Abstract:
This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) for the planning of harmonic filters for a high speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the filter cost, the filter losses, and the total harmonic distortion of currents and voltages at each bus. An orthogonal array is first used to obtain the initial solution set, which is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by using matrix experiments with an orthogonal array, in which a minimal number of experiments has an effect that approximates that of full factorial experiments. The PSO-NTVEOA method is then applied to design optimal harmonic filters for the Taiwan High Speed Rail (THSR) traction system, where both rectifiers and inverters with IGBTs are used. The results of the illustrative examples verify the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter for the THSR system and show that the design approach can greatly reduce harmonic distortion. Three design schemes are compared: the V-V connection suppresses the third-order harmonic, while the Scott and Le Blanc connections achieve better harmonic improvement than the V-V connection.
Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers
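A bare-bones particle swarm optimization sketch, without the paper's nonlinear time-varying evolution or orthogonal-array initialization, minimizing a toy two-dimensional cost in place of the filter cost/THD objective:

```python
# Canonical PSO: each particle tracks its personal best; the swarm shares a
# global best; velocities blend inertia, cognitive and social attraction.
import random

def pso(f, dim=2, n=20, iters=200, seed=3, w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(seed)
    xs = [[rng.uniform(-10, 10) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    pbest_f = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

best, best_f = pso(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2)
```

In the paper's setting, f would evaluate a candidate filter design (cost, losses, THD penalties), and the NTVEOA variant would additionally vary w, c1 and c2 nonlinearly over the iterations.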
Procedia PDF Downloads 392
10410 Study of the Efficiency of a Synthetic Wax for Corrosion Protection of Steel in Aggressive Environments
Authors: Laidi Babouri
Abstract:
The remarkable properties of steel, such as hardness and impact resistance, motivate its use in the automotive manufacturing industry. However, under the harsh environmental conditions of use, the steel that makes up the car body can corrode. This situation is motivating more and more automobile manufacturers to pursue research into processes that minimize the rate of degradation of the physicomechanical properties of these materials. The present work falls within this perspective; it presents the results of a research study on the use of a synthetic wax for the protection of steel, type XES (DC04), against corrosion in aggressive environments. The media used in this study are an acid medium with pH = 5.6, a 3% chloride medium, and a dry medium. Evaluation of the protective power of the synthetic wax in the different media was carried out using mass-loss (immersion) techniques, complemented by electrochemical techniques (stationary and transient). Immersion of the steel samples, with a surface area of 1.44 cm², in the various media for a period of 30 days showed high protective efficiency of the synthetic wax in acidic and saline environments, and to a lesser degree in a dry environment. Moreover, the study of the protective power using electrochemical techniques confirmed the results obtained in static mode (mass loss): the protective efficiency of the synthetic wax against steel corrosion reaches a maximum of 99.87% in a saline environment.
Keywords: corrosion, steel, industrial wax, environment, mass loss, electrochemical techniques
Procedia PDF Downloads 76
10409 Application of Artificial Neural Network Technique for Diagnosing Asthma
Authors: Azadeh Bashiri
Abstract:
Introduction: Lack of proper diagnosis and inadequate treatment of asthma lead to physical and financial complications. This study aimed to use data mining techniques and to create a neural network intelligent system for the diagnosis of asthma. Methods: The study population comprises patients who had visited one of the lung clinics in Tehran. Data were analyzed using the SPSS statistical tool, and the Pearson chi-square coefficient was the basis of decision making for data ranking. The neural network considered is trained using the backpropagation learning technique. Results: According to the analysis performed by means of SPSS to select the top factors, 13 effective factors were selected. The data were partitioned in various ways to build different models for training and testing the networks, and in all configurations the network was able to predict all cases correctly (100%). Conclusion: Using data mining methods before designing the structure of the system, in order to reduce the data dimension and make the optimum choice of data, will lead to a more accurate system. Therefore, considering data mining approaches is necessary due to the nature of medical data.
Keywords: asthma, data mining, artificial neural network, intelligent system
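The chi-square ranking step mentioned in the methods can be sketched as a Pearson chi-square statistic on a 2x2 contingency table (candidate factor present/absent versus asthma yes/no); larger values indicate a stronger association, so factors are ranked by it. The counts below are hypothetical, not the clinic data:

```python
# Pearson chi-square statistic: sum of (observed - expected)^2 / expected over
# all cells, with expected counts from the row/column margins.
def chi_square(table):
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i, r in enumerate(table):
        for j, obs in enumerate(r):
            exp = row[i] * col[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

strong = [[40, 10], [10, 40]]  # e.g. wheezing vs asthma: strongly associated
weak   = [[25, 25], [26, 24]]  # e.g. an irrelevant factor: near-independent
```

Ranking all candidate factors by this statistic and keeping the top ones is the dimension-reduction step the abstract credits for the 13 selected inputs.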
Procedia PDF Downloads 273
10408 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load
Authors: David Koren, Vojko Kilar
Abstract:
Modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building’s RC foundation slab. The purpose of the paper is to identify the potential problems of buildings founded on a TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building, a multi-storey RC frame structure with and without the XPS layer under the foundation slab has been investigated utilizing nonlinear dynamic (time-history) and static (pushover) analyses. The structural response has been investigated with reference to the following performance parameters: i) the building’s lateral roof displacements, ii) edge compressive and shear strains of the XPS, iii) horizontal accelerations of the superstructure, iv) plastic hinge patterns of the superstructure, v) the part of the foundation in compression, and vi) deformations of the underlying soil and vertical displacements of the foundation slab (i.e., identifying potential uplift). The results have shown that in the case of higher and stiffer structures lying on firm soil, the use of XPS under the foundation slab might induce amplified structural peak responses compared to the building models without XPS under the foundation slab. The analysis has revealed that the superstructure as well as the XPS response is substantially affected by the stiffness of the foundation slab.
Keywords: extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil–structure interaction
Procedia PDF Downloads 302
10407 Techniques for Seismic Strengthening of Historical Monuments from Diagnosis to Implementation
Authors: Mircan Kaya
Abstract:
A multi-disciplinary approach is required in any intervention project for historical monuments. Heritage structures are peculiar: their geometry is complex, and the characteristics of the original materials used in their construction are variable and unpredictable. Their histories are often complex as well, and correct diagnoses are required before deciding on intervention techniques. This approach should combine not only technical aspects but also historical research, which may help uncover phenomena involving structural issues and build knowledge of the structure's concept, method of construction, previous interventions, damage processes, and current state. In addition to traditional techniques such as bed joint reinforcement, the repair, strengthening, and restoration of historical buildings may require several modern, innovative methods such as pre-stressing and post-tensioning, shape memory alloy devices and shock transmission units, shoring, drilling, and the use of stainless steel or titanium. Regardless of whether the strengthening method is traditional or innovative, it is crucial to recognize that structural strengthening is the process of upgrading the structural system of an existing building to improve its performance under existing and additional loads, such as seismic loads. This process is much more complex than a new construction, since several unknown factors are associated with the existing structural system: material properties, load paths, previous interventions, and existing reinforcement are especially important matters to consider. There are several examples of seismic strengthening with traditional and innovative techniques around the world, which will be discussed in this paper in detail, including their pros and cons.
Ultimately, however, the philosophy underlying a successful intervention is that the most appropriate techniques for strengthening a historic monument should be chosen based on a proper assessment of the specific needs of the building.
Keywords: bed joint reinforcement, historical monuments, post-tensioning, pre-stressing, seismic strengthening, shape memory alloy devices, shock transmitters, tie rods
Procedia PDF Downloads 265
10406 Identification of Promising Infant Clusters to Obtain Improved Block Layout Designs
Authors: Mustahsan Mir, Ahmed Hassanin, Mohammed A. Al-Saleh
Abstract:
The layout optimization of building blocks of unequal areas has applications in many disciplines, including VLSI floorplanning, macrocell placement, unequal-area facilities layout optimization, and plant or machine layout design. A number of heuristics and some analytical and hybrid techniques have been published to solve this problem. This paper presents an efficient, high-quality building-block layout design technique especially suited for solving large-size problems. The higher efficiency and improved quality of optimized solutions are made possible by introducing the concept of promising infant clusters in a constructive placement procedure. The results presented in the paper demonstrate the improved performance of the presented technique on benchmark problems in comparison with published heuristic, analytic, and hybrid techniques.
Keywords: block layout problem, building-block layout design, CAD, optimization, search techniques
Procedia PDF Downloads 386
10405 GIS for Simulating Air Traffic by Applying Different Multi-Radar Positioning Techniques
Authors: Amara Rafik, Bougherara Maamar, Belhadj Aissa Mostefa
Abstract:
Radar data is one of the many data sources used by Air Traffic Management (ATM) systems. These data come from air navigation radar antennas, which intercept the signals emitted by the various aircraft crossing the controlled airspace, calculate the positions of those aircraft, and retransmit the positions to the ATM system. For greater reliability, the radars are positioned so that their coverage areas overlap; an aircraft will therefore be detected by at least one of them. However, the position coordinates of the same aircraft reported by different radars are not necessarily identical. The ATM system must therefore calculate a single position (the radar track), which is ultimately sent to the control position and displayed on the air traffic controller's monitor. Several techniques exist for calculating the radar track. Furthermore, the geographical nature of the problem requires the use of a Geographic Information System (GIS), that is, a geographical database together with geographical processing tools. The objective of this work is to propose a GIS for traffic simulation that reconstructs the evolution of aircraft positions over time from a multi-source radar data set by applying these different techniques.
Keywords: ATM, GIS, radar data, air traffic simulation
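One common family of track-calculation techniques combines the overlapping radar plots of the same aircraft by inverse-variance weighting, so the more precise sensor dominates the fused position. The sketch below illustrates that idea only; the (x, y, variance) tuple format and the numeric values are assumptions for illustration, not the specific techniques evaluated in the paper.

```python
def fuse_plots(plots):
    """Fuse position reports (x, y, variance) for one aircraft from several
    overlapping radars into a single track point, weighting each report by
    the inverse of its measurement variance."""
    wsum = sum(1.0 / var for _, _, var in plots)
    x = sum(px / var for px, _, var in plots) / wsum
    y = sum(py / var for _, py, var in plots) / wsum
    return x, y

# Two radars report slightly different positions for the same aircraft;
# the more precise radar (variance 1.0 vs 3.0) pulls the track toward it.
track = fuse_plots([(100.0, 50.0, 1.0), (104.0, 50.0, 3.0)])
```

In a GIS-based simulator such as the one proposed, a function like this would be one interchangeable strategy among the "different techniques" applied to the multi-source radar data set.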
Procedia PDF Downloads 86
10404 Comparison of Bioelectric and Biomechanical Electromyography Normalization Techniques in Disparate Populations
Authors: Drew Commandeur, Ryan Brodie, Sandra Hundza, Marc Klimstra
Abstract:
The amplitude of raw electromyography (EMG) is affected by recording conditions and often requires normalization to make meaningful comparisons. Bioelectric methods normalize with an EMG signal recorded during a standardized task or from the experimental protocol itself, while biomechanical methods often involve measurements with an additional sensor such as a force transducer. Common bioelectric normalization techniques for treadmill walking include maximum voluntary isometric contraction (MVIC), dynamic EMG peak (EMGPeak), or dynamic EMG mean (EMGMean). There are several concerns with using MVICs to normalize EMG, including poor reliability and potential discomfort. A limitation of bioelectric normalization techniques is that they could result in a misrepresentation of the absolute magnitude of force generated by the muscle and impact the interpretation of EMG between functionally disparate groups. Additionally, methods that normalize to EMG recorded during the task may eliminate some real inter-individual variability due to biological variation. This study compared biomechanical and bioelectric EMG normalization techniques during treadmill walking to assess the impact of the normalization method on the functional interpretation of EMG data. For the biomechanical method, we normalized EMG to a target torque (EMGTS), and the bioelectric methods used were normalization to the mean and peak of the signal during the walking task (EMGMean and EMGPeak). The effects of normalization on muscle activation pattern, EMG amplitude, and inter-individual variability were compared between disparate cohorts of OLD (76.6 yrs, N=11) and YOUNG (26.6 yrs, N=11) adults. Participants walked on a treadmill at a self-selected pace while EMG was recorded from the right lower limb.
EMG data from the soleus (SOL), medial gastrocnemius (MG), tibialis anterior (TA), vastus lateralis (VL), and biceps femoris (BF) were phase-averaged into 16 bins (phases) representing the gait cycle, with bins 1-10 associated with right stance and bins 11-16 with right swing. Pearson’s correlations showed that activation patterns across the gait cycle were similar between all methods, ranging from r = 0.86 to r = 1.00 with p < 0.05. This indicates that each method can characterize the muscle activation pattern during walking. Repeated measures ANOVA showed a main effect for age in MG for EMGPeak, but no other main effects were observed. Age*phase interactions in EMG amplitude between YOUNG and OLD led to different statistical interpretations across methods: EMGTS normalization characterized the fewest differences (four phases across all five muscles), while EMGMean (11 phases) and EMGPeak (19 phases) showed considerably more differences between cohorts. The second notable finding was that the coefficient of variation, which represents inter-individual variability, was greatest for EMGTS and lowest for EMGMean, while EMGPeak was slightly higher than EMGMean for all muscles. This finding supports our expectation that EMGTS normalization would retain inter-individual variability, which may be desirable; however, it also suggests that even when large differences are expected, a larger sample size may be required to observe them. Our findings clearly indicate that the interpretation of EMG is highly dependent on the normalization method used, and it is essential to consider the strengths and limitations of each method when drawing conclusions.
Keywords: electromyography, EMG normalization, functional EMG, older adults
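The two bioelectric normalizations compared above (EMGPeak and EMGMean) can be sketched in a few lines. The toy envelope values below are illustrative; real use would start from a rectified, smoothed EMG signal phase-averaged over the gait cycle, as in the protocol described.

```python
def normalize_emg(envelope, method="peak"):
    """Normalize an EMG envelope for one gait cycle to its dynamic peak
    (EMGPeak) or its dynamic mean (EMGMean)."""
    if method == "peak":
        ref = max(envelope)
    else:
        ref = sum(envelope) / len(envelope)
    return [v / ref for v in envelope]

# Toy envelope (arbitrary units) across a few gait-cycle bins.
envelope = [0.2, 0.5, 1.0, 0.3]
by_peak = normalize_emg(envelope, "peak")   # scales so the peak is 1.0
by_mean = normalize_emg(envelope, "mean")   # scales so the mean is 1.0
```

Because both references are derived from the task signal itself, this sketch also makes concrete why such methods can compress real inter-individual variability, the limitation discussed above; the biomechanical EMGTS method instead scales by EMG recorded at a target torque from a separate sensor.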
Procedia PDF Downloads 91
10403 A Framework for ERP Project Evaluation Based on BSC Model: A Study in Iran
Authors: Mohammad Reza Ostad Ali Naghi Kashani, Esfanji Elia
Abstract:
Nowadays, the number of companies adopting an Enterprise Resource Planning (ERP) application is increasing, particularly in developing countries like Iran. ERP projects are expensive, time consuming, and complex, and the failure rate among them is high. It is therefore important to know whether these projects meet their goals, and to identify the areas that should be improved. In this paper we propose a framework to evaluate the success of ERP project implementation. First, based on a literature review, we built a framework on the BSC model's perspectives (financial, customer, internal processes, and learning and knowledge); because of the importance of change management, it was added to the model as well. The organization was then divided into three layers: corporate, managerial, and operational. To find criteria for assessing each aspect, we used the Delphi method in two rounds; for the second round we designed a questionnaire and performed statistical analyses on the responses. Based on the statistical results, some criteria were accepted and others rejected.
Keywords: ERP, BSC, ERP project evaluation, IT projects
Procedia PDF Downloads 322
10402 Experimental Investigation of On-Body Channel Modelling at 2.45 GHz
Authors: Hasliza A. Rahim, Fareq Malek, Nur A. M. Affendi, Azuwa Ali, Norshafinash Saudin, Latifah Mohamed
Abstract:
This paper presents an experimental investigation of on-body channel fading at 2.45 GHz, considering two states of user body movement: stationary and mobile. A pair of body-worn antennas was utilized in this measurement campaign. A statistical analysis was performed by comparing the measured on-body path loss to five well-known distributions: lognormal, normal, Nakagami, Weibull, and Rayleigh. The results showed that for the upper-arm-to-left-chest link, the average path loss with a moving arm was up to 3.5 dB higher than in the sitting position. The analysis also concluded that the Nakagami distribution provided the best fit for most static on-body link path loss in the standing and sitting positions, while arm movement was best described by the lognormal distribution.
Keywords: on-body channel communications, fading characteristics, statistical model, body movement
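The distribution-fitting comparison above can be sketched for two of the five candidates. Closed-form maximum-likelihood fits exist for the normal and lognormal cases, so the snippet below compares their log-likelihoods on toy path-loss samples; the Nakagami, Weibull, and Rayleigh fits used in the study would need numerical fitting (e.g. scipy.stats), and the data here are purely illustrative.

```python
import math

def normal_loglik(data):
    """Log-likelihood of the data under a normal distribution
    evaluated at its maximum-likelihood parameters."""
    n = len(data)
    mu = sum(data) / n
    var = sum((x - mu) ** 2 for x in data) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def lognormal_loglik(data):
    """Log-likelihood under a lognormal: fit a normal to log(data),
    then subtract the Jacobian term sum(log x)."""
    logs = [math.log(x) for x in data]
    return normal_loglik(logs) - sum(logs)

# Toy 'path loss' samples built as exp of symmetric values, so the
# lognormal model should fit them better than the normal model.
samples = [math.exp(z) for z in (-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5)]
best = ("lognormal" if lognormal_loglik(samples) > normal_loglik(samples)
        else "normal")
```

Ranking candidate distributions by such a fit criterion is the general mechanism behind conclusions like "Nakagami fits static links best, lognormal fits arm movement best."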
Procedia PDF Downloads 355
10401 Self-Engineering Strategy of Six Dimensional Inter-Subcultural Mental Images
Authors: Mostafa Jafari
Abstract:
How do people continually create and recreate six-dimensional inter-subcultural relationships from a strategic point of view? Can they engineer and direct them toward creating a set of peaceful subcultures? This paper answers these questions. Our mental images shape the quantity and quality of our relationships. The six dimensions of mental images are: my mental image of myself, your mental image of yourself, my mental image of you, your mental image of me, my imagination of your image of me, and your imagination of my mental image of you. Strategic engineering is the dynamic shaping of these images and imaginations. Methodology: This survey, which is based on the object and the relations between the variables, is explanatory, correlative, and quantitative. The target community comprises 90 educated people from universities. The data were collected through questionnaires and interviews and analyzed with descriptive statistical techniques and qualitative methods. Results: Our findings show that engineering and deliberately managing the process of inter-subcultural transactions at the national and global levels can enable us to continually form a peaceful set of learning subcultures and re-create a peaceful, united global home.
Keywords: strategic engineering, mental image, six-dimensional mental images strategy, cultural literacy, radar technique
Procedia PDF Downloads 403
10400 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. Several techniques attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. This survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing, and storage platforms. Finally, the authors include a discussion on future research lines and synergies among the former techniques.
Keywords: data locality, data-centric computing, large scale infrastructures, cloud computing
Procedia PDF Downloads 259
10399 Refugees’ Inclusion: The Psychological Screening and the Educational Tools in Portugal
Authors: Sandra Figueiredo
Abstract:
To guarantee well-being and academic achievement in a global society, it is crucial to develop techniques to assess language competence and to address psychological aspects of second language learning. The current war conflicts emerging mostly in Europe and the Middle East have resulted in forced immigration and refugees’ maladjustment. Inclusion is a priority for the United Nations with regard to the sustainability of societies, and it requires psychological screening tests and educational tools. Method: Approximately 100 refugees from Ukraine were assessed in Portugal using the PCL-5, a 20-item instrument that evaluates post-traumatic stress disorder. Expected results: The statistical analysis will be performed with the International Database Analyzer and SPSS (v. 28). The expected results are a relationship between traumatic events caused by war and post-traumatic symptomatology (anxiety, hypervigilance, stress). Implications: The data will be discussed in relation to the problems of belonging, the psychological constraints, and the educational attainment (language needs included) experienced by individuals recently arrived in host societies. The refugees’ acculturation process and emotional regulation will also be addressed.
Keywords: refugees, immigration, educational needs, trauma, inclusion, second language
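Scoring the PCL-5 mentioned above is straightforward to sketch: the 20 items are each rated 0-4 and summed. The cutoff of 33 used below is a commonly cited provisional threshold for probable PTSD; it is an assumption of this sketch, not a value taken from the abstract.

```python
def pcl5_total(item_scores, cutoff=33):
    """Sum the 20 PCL-5 item ratings (0-4 each) and flag probable PTSD
    when the total meets the (assumed) provisional cutoff of 33."""
    if len(item_scores) != 20 or any(not 0 <= s <= 4 for s in item_scores):
        raise ValueError("PCL-5 expects 20 items rated 0-4")
    total = sum(item_scores)
    return total, total >= cutoff

# A respondent rating every symptom 'moderately' (2) totals 40,
# which exceeds the assumed cutoff.
total, flagged = pcl5_total([2] * 20)
```

Totals computed this way would be the raw scores carried into the SPSS analysis that relates trauma exposure to symptomatology.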
Procedia PDF Downloads 59
10398 Rapid Processing Techniques Applied to Sintered Nickel Battery Technologies for Utility Scale Applications
Authors: J. D. Marinaccio, I. Mabbett, C. Glover, D. Worsley
Abstract:
Through the use of novel rapid processing techniques such as screen printing and near-infrared (NIR) radiative curing, the process time for sintering nickel plaques, applicable to alkaline nickel battery chemistries, has been drastically reduced from in excess of 200 minutes with conventional convection methods to below 2 minutes using NIR curing. Steps have also been taken to remove the need for forming gas as a reducing agent by implementing carbon as an in-situ reducing agent within the ink formulation.
Keywords: batteries, energy, iron, nickel, storage
Procedia PDF Downloads 440