Search results for: method of initial functions
22298 Survival Chances and Costs after Heart Attacks: An Instrumental Variable Approach
Authors: Alice Sanwald, Thomas Schober
Abstract:
We analyze mortality and follow-up costs of heart attack patients using administrative data from Austria (2002-2011). As treatment intensity in a hospital largely depends on whether it has a catheterization laboratory, we focus on the effects of patients' initial admission to these specialized hospitals. To account for the nonrandom selection of patients into hospitals, we exploit individuals' place of residence as a source of exogenous variation in an instrumental variable framework. We find that the initial admission to specialized hospitals increases patients' survival chances substantially. The effect on 3-year mortality is -9.5 percentage points. A separation of the sample into subgroups shows the strongest effects in relative terms for patients below the age of 65. We do not find significant effects on long-term inpatient costs and find only marginal increases in outpatient costs.
Keywords: acute myocardial infarction, mortality, costs, instrumental variables, heart attack
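As a sketch of the instrumental-variable logic described above, the following simulation recovers a treatment effect under confounding via two-stage least squares. All numbers (instrument strength, the confounder weight, the -9.5 percentage point effect) are hypothetical illustrations, not the paper's data.

```python
import numpy as np

# Hypothetical illustration of the 2SLS idea: u is unobserved severity
# confounding both hospital choice and mortality; z is a binary
# place-of-residence instrument. All coefficients are invented.
rng = np.random.default_rng(0)
n = 20_000
u = rng.normal(size=n)                                   # unobserved severity
z = rng.binomial(1, 0.5, size=n)                         # lives near a specialized hospital
d = (0.8 * z + 0.5 * u + rng.normal(size=n) > 0).astype(float)  # admitted to specialized hospital
y = -0.095 * d + 0.3 * u + rng.normal(0, 0.1, size=n)    # outcome; true effect is -9.5 pp

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# First stage: predict treatment from the instrument only.
a, b = ols_slope(z.astype(float), d)
d_hat = a + b * z
# Second stage: regress the outcome on the predicted treatment.
beta_iv = ols_slope(d_hat, y)[1]     # consistent estimate of the causal effect
naive = ols_slope(d, y)[1]           # biased upward by the confounder u
print(beta_iv, naive)
```

The naive regression of outcome on treatment mixes the causal effect with the confounder's influence; the instrument isolates only the variation in admission driven by place of residence.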
Procedia PDF Downloads 436
22297 Topology Optimization of Heat Exchanger Manifolds for Aircraft
Authors: Hanjong Kim, Changwan Han, Seonghun Park
Abstract:
Heat exchanger manifolds in aircraft play an important role in evenly distributing the fluid entering through the inlet to the heat transfer unit. In order to achieve this requirement, the manifold should be designed to have a light weight while withstanding high internal pressure. Therefore, this study aims at minimizing the weight of the heat exchanger manifold through topology optimization. For topology optimization, the initial design space was created with the inner surface extracted from the currently used manifold model and with an outer surface measuring 243.42 mm × 74.09 mm × 65 mm. This design space solid model was transformed into a finite element model with a maximum tetrahedron mesh size of 2 mm using ANSYS Workbench. Then, topology optimization was performed under the boundary conditions of an internal pressure of 5.5 MPa and fixed supports for the rectangular inlet boundaries by SIMULIA TOSCA. This topology optimization produced a minimized final volume of the manifold (i.e., 7.3% of the initial volume) based on the given constraint (i.e., 6% of the initial volume) and the objective function (i.e., maximizing manifold stiffness). The weight of the optimized model was 6.7% lighter than the currently used manifold, and after smoothing the topology-optimized model, this difference is expected to grow. The current optimized model has uneven thickness and a skeleton-shaped outer surface to reduce stress concentration. We are currently simplifying the optimized model shape with spline interpolations by reflecting the design characteristics in thickness and skeletal structures from the optimized model. This simplified model will be validated again by calculating both stress distributions and weight reduction, and then the validated model will be manufactured using 3D printing processes.
Keywords: topology optimization, manifold, heat exchanger, 3D printing
Procedia PDF Downloads 248
22296 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes
Authors: J. J. Vargas, N. Prieto, L. A. Toro
Abstract:
Control charts are commonly used to monitor processes involving either variable or attribute quality characteristics, and determining the control limits is a critical task for quality engineers seeking to improve processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to define the efficiency of an industrial process was added to a control chart by incorporating a data envelopment analysis (DEA) approach. Specifically, a Bayesian estimation was performed to calculate the posterior probability distribution of parameters such as the means and the variance-covariance matrix. This technique allows analysing the data set without the hypothetical large sample implied in the problem, treating it as an approximation to the finite-sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by the stochastic DEA model over several cycles to establish the distribution of the efficiency measures for each DMU (decision making unit). A control limit was calculated with the obtained model; if a DMU presents a low efficiency level, system efficiency is out of control. The efficiency calculation reached a global optimum, which ensures model reliability.
Keywords: data envelopment analysis, DEA, multivariate control chart, rejection simulation method
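The deterministic core of the DEA step can be sketched as an input-oriented CCR linear program (here via SciPy, with made-up data for three DMUs); in the paper, this LP is solved repeatedly on the parameter vectors drawn by the rejection simulation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m inputs, n DMUs), Y: (s outputs, n DMUs).
    Decision variables: [theta, lambda_1..lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]                   # X @ lam <= theta * x0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                         # Y @ lam >= y0
    b_ub[m:] = -Y[:, j0]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: three DMUs, one input, one output; DMU 1 dominates.
X = np.array([[2.0, 2.0, 4.0]])
Y = np.array([[1.0, 2.0, 2.0]])
effs = [ccr_efficiency(X, Y, j) for j in range(3)]
print(effs)  # DMU 1 is efficient (1.0); the others fall below 1
```

In the stochastic setting described above, `X` and `Y` would be regenerated from each posterior parameter draw, yielding a distribution of efficiency scores per DMU from which the control limit is derived.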
Procedia PDF Downloads 373
22295 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment
Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska
Abstract:
The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages with various work trades. Finally, in 2016, the project received financing from the National Center for Research and Development under Research Grant PBS3/A2/19/2015. The presented data, calculations and analyses discussed in this paper were created as a result of the completion of the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project assesses workers' visual concentration on sight zones as well as inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers' observation zones. SMI Eye Tracking Glasses are a tool that allows analyzing, in real time and place, where eyesight is concentrated and, consequently, building a map of a worker's eyesight concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers took part in the experiment, including monitoring of typical parameters of the work regimen, workload, microclimate, sound, vibration, etc. The full equipment can also be useful in more advanced analyses. With this technology, we have verified not only the main focus of workers' eyes during work on or next to scaffolding, but also which changes in the surrounding environment during their shift influenced their concentration. The results of this study show that workers' eye concentration was on one of the three work-related areas for only up to 45.75% of the shift time. Workers seem to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors' findings, we observed that the reflective parts of the scaffolding were not better recognized by workers in their direct workplaces.
We noticed that the red curbs were the only well-recognized parts on a very few scaffoldings; surprisingly, across numerous samples, we did not register any significant concentration on those curbs. Conclusion: We have found the eye-tracking method useful for the construction of the SURAM model in the risk perception and worker behavior sub-modules. We have also found that the worker's initial stress and visual work conditions seem to be more predictive for the assessment of a developing risky situation or an accident than other parameters relating to the work environment.
Keywords: accident assessment model, eye tracking, occupational safety, scaffolding
Procedia PDF Downloads 199
22294 Waters Colloidal Phase Extraction and Preconcentration: Method Comparison
Authors: Emmanuelle Maria, Pierre Crançon, Gaëtane Lespes
Abstract:
Colloids are ubiquitous in the environment and are known to play a major role in enhancing the transport of trace elements, thus being an important vector for contaminant dispersion. Colloid study and characterization are necessary to improve our understanding of the fate of pollutants in the environment. However, in stream water and groundwater, colloids are often very poorly concentrated. It is therefore necessary to pre-concentrate colloids in order to get enough material for analysis, while preserving their initial structure. Many techniques are used to extract and/or pre-concentrate the colloidal phase from the bulk aqueous phase, but as yet there is neither a reference method nor an estimation of the impact of these different techniques on the colloid structure, or of the bias introduced by the separation method. In the present work, we have tested and compared several methods of colloidal phase extraction/pre-concentration and their impact on colloid properties, particularly their size distribution and their elementary composition. Ultrafiltration methods (frontal, tangential and centrifugal) have been considered since they are widely used for the extraction of colloids in natural waters. To compare these methods, a 'synthetic groundwater' was used as a reference. The size distribution (obtained by Field-Flow Fractionation (FFF)) and the chemical composition of the colloidal phase (obtained by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Total Organic Carbon analysis (TOC)) were chosen as comparison factors. In this way, it is possible to estimate the impact of pre-concentration on the preservation of the colloidal phase. It appears that some of these methods preserve the colloidal phase composition more effectively, while others are easier and faster to use.
The choice of the extraction/pre-concentration method is therefore a compromise between efficiency (including speed and ease of use) and impact on the structural and chemical composition of the colloidal phase. In perspective, the use of these methods should enhance the consideration of the colloidal phase in the transport of pollutants in environmental assessment studies and forensics.
Keywords: chemical composition, colloids, extraction, preconcentration methods, size distribution
Procedia PDF Downloads 215
22293 Initial Resistance Training Status Influences Upper Body Strength and Power Development
Authors: Stacey Herzog, Mitchell McCleary, Istvan Kovacs
Abstract:
Purpose: Maximal strength and maximal power are key athletic abilities in many sports disciplines. In recent years, velocity-based training (VBT) with a relatively high 75-85% 1RM resistance has been popularized in preparation for powerlifting and various other sports. The purpose of this study was to discover differences between beginner/intermediate and advanced lifters' push/press performances after a heavy resistance-based bench press (BP) training program. Methods: A six-week, three-workout-per-week program was administered to 52 young, physically active adults (age: 22.4±5.1; 12 female). The majority of the participants (84.6%) had prior experience in bench pressing. Typical workouts began with BP using 75-95% 1RM in the 1-5 repetition range. The sets in the lower part of the range (75-80% 1RM) were performed with a velocity focus as well. The BP sets were followed by seated dumbbell presses and six additional upper-body assistance exercises. Pre- and post-tests were conducted on five test exercises: one-repetition maximum BP (1RM), the calculated relative strength index BP/BW (RSI), four-repetition maximal-effort dynamic BP for peak concentric velocity with 80% 1RM (4RV), four-repetition ballistic pushups (BPU) for height (4PU), and seated medicine ball toss for distance (MBT). For analytic purposes, the participant group was divided into two subgroups: self-indicated beginner or intermediate initial resistance training status (BITS) [n=21, age: 21.9±3.6; 10 female] and advanced initial resistance training status (ATS) [n=31, age: 22.7±5.9; 2 female]. Pre- and post-test results were compared within subgroups. Results: Paired-sample t-tests indicated significant within-group improvements in all five test exercises in both groups (p < 0.05). BITS improved 18.1 lbs. (13.0%) in 1RM, 0.099 (12.8%) in RSI, 0.133 m/s (23.3%) in 4RV, 1.55 in. (27.1%) in BPU, and 1.00 ft. (5.8%) in MBT, while the ATS group improved 13.2 lbs.
(5.7%) in 1RM, 0.071 (5.8%) in RSI, 0.051 m/s (9.1%) in 4RV, 1.20 in. (13.7%) in BPU, and 1.15 ft. (5.5%) in MBT. Conclusion: While the two training groups had different initial resistance training backgrounds, both showed significant improvements in all test exercises. As expected, the beginner/intermediate group displayed better relative improvements in four of the five test exercises. However, the medicine ball toss, which had the lightest resistance among the tests, showed similar relative improvements between the two groups. These findings relate to two important training principles: specificity and transfer. The ATS group had more specific experiences with heavy-resistance BP. Therefore, fewer improvements were detected in their test performances with heavy resistances. On the other hand, while the heavy resistance-based training transferred to increased power outcomes in light-resistance power exercises, the difference in the rate of improvement between the two groups disappeared. Practical applications: Based on initial training status, S&C coaches should expect different performance gains in maximal strength training-specific test exercises. However, the transfer from maximal strength to a non-training-specific performance category along the F-v curve continuum (i.e., light resistance and high velocity) might not depend on initial training status.
Keywords: exercise, power, resistance training, strength
Procedia PDF Downloads 70
22292 Electronic, Magnetic and Optic Properties in Halide Perovskites CsPbX3 (X= F, Cl, I)
Authors: B. Bouadjemi, S. Bentata, T. Lantri, Souidi Amel, W.Bensaali, A. Zitouni, Z. Aziz
Abstract:
We performed first-principles calculations: the full-potential linearized augmented plane wave (FP-LAPW) method is used to calculate the structural, optoelectronic and magnetic properties of the cubic halide perovskites CsPbX3 (X = F, Cl, I). We employed the GGA approach for this study, and exchange is modeled using the modified Becke-Johnson (mBJ) potential to predict accurate band gaps for these materials. The optical properties (namely, the real and imaginary parts of the dielectric function, the optical conductivity and the absorption coefficient) make these halide perovskites promising materials for solar cell applications.
Keywords: halide perovskites, mBJ, solar cells, FP-LAPW, optoelectronic properties, absorption coefficient
Procedia PDF Downloads 322
22291 Radial Distribution Network Reliability Improvement by Using Imperialist Competitive Algorithm
Authors: Azim Khodadadi, Sahar Sadaat Vakili, Ebrahim Babaei
Abstract:
This study presents a numerical method to optimize the failure rate and repair time of a typical radial distribution system. Failure rate and repair time are effective parameters in the customer- and energy-based reliability indices; decreasing these parameters improves the reliability indices and thus boosts system stability. Penalty functions indirectly reflect the investment cost spent to improve these indices. Constraints on customer- and energy-based indices, i.e. SAIFI, SAIDI, CAIDI and AENS, have been considered by using a new method which reduces the optimization algorithm's controlling parameters. The Imperialist Competitive Algorithm (ICA) was used as the main optimization technique, and particle swarm optimization (PSO), simulated annealing (SA) and differential evolution (DE) were applied for further investigation. These algorithms have been implemented on a test system in MATLAB, and the obtained results have been compared with each other. The optimized values of repair time and failure rate are much lower than the current values, which reduces the investment cost; moreover, ICA gives better answers than the other algorithms used.
Keywords: imperialist competitive algorithm, failure rate, repair time, radial distribution network
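The customer- and energy-based indices named above follow standard definitions; a sketch with hypothetical load-point data (the paper's test-system data are not reproduced here):

```python
# Standard customer- and energy-based reliability indices for a radial
# feeder, computed from hypothetical per-load-point data.
lam = [0.20, 0.15, 0.25]   # failure rate, failures/yr at each load point
r   = [4.0, 6.0, 5.0]      # repair time, hours/failure
N   = [200, 150, 100]      # customers at each load point
La  = [500, 400, 300]      # average load, kW

U = [l * t for l, t in zip(lam, r)]                   # annual outage time, h/yr
SAIFI = sum(l * n for l, n in zip(lam, N)) / sum(N)   # interruptions/customer/yr
SAIDI = sum(u * n for u, n in zip(U, N)) / sum(N)     # outage hours/customer/yr
CAIDI = SAIDI / SAIFI                                 # hours/interruption
AENS  = sum(la * u for la, u in zip(La, U)) / sum(N)  # kWh not supplied/customer/yr
print(SAIFI, SAIDI, CAIDI, AENS)
```

The optimization described in the abstract searches over the failure rates and repair times subject to upper bounds on these four indices; the formulas show directly why lowering either parameter improves every index.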
Procedia PDF Downloads 668
22290 Relation between Sensory Processing Patterns and Working Memory in Autistic Children
Authors: Abbas Nesayan
Abstract:
Background: In recent years, autism has received growing attention in both the public and research arenas. Autistic children have dysfunctions in communication and socialization and show repetitive and stereotyped behaviors. In addition, they clinically suffer from difficulties with attention, challenges with familiar behaviors, and sensory processing problems. Several variables are linked to sensory processing problems in autism; one of these variables is working memory. Working memory is the part of executive function which provides the ability necessary for completing multi-stage tasks. Method: This study is categorized as correlational research. After determining the entry criteria, 50 children were selected according to a purposive sampling method. Dunn's Sensory Profile School Companion was used for the assessment of sensory processing patterns; the Behavior Rating Inventory of Executive Function (BRIEF) was used for the assessment of working memory. The Pearson correlation coefficient and linear regression were used for data analysis. Results: The results showed a significant relationship between sensory processing patterns (low registration, sensory seeking, sensory sensitivity and sensory avoiding) and working memory in autistic children. Conclusion: According to the findings, there is a significant relationship between sensory processing patterns and working memory. So, interventions based on sensory processing could be used to improve working memory.
Keywords: sensory processing patterns, working memory, autism, autistic children
Procedia PDF Downloads 223
22289 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images
Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy
Abstract:
Contactless, painless and radiation-free, thermal imaging technology is one of the preferred screening modalities for the detection of breast cancer. However, the poor signal-to-noise ratio and the inexorable need to preserve the edges defining cancer cells and normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research conducted on the appraisal of two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The pre-processed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. The segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with the breast size. The proposed method was carried out on images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented out from the background by adjusting the markers. From the results, it was observed that, as a pre-processing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms
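A minimal sketch of the anisotropic-diffusion pre-processing step (here the classic Perona-Malik scheme), which smooths noise while suppressing diffusion across strong edges; the iteration count, conductance parameter and synthetic test image are illustrative choices, not those used in the study.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.15):
    """Perona-Malik anisotropic diffusion: smooths flat regions while
    largely preserving edges (diffusion is suppressed where |grad u| is large)."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # finite differences to the four neighbours (periodic boundaries via roll)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # edge-stopping conductance g = exp(-(|grad|/kappa)^2)
        u = u + dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u

# Noisy step image: diffusion flattens the noise but keeps the step edge.
rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[:, 32:] = 100.0
noisy = img + rng.normal(0, 5, img.shape)
smoothed = anisotropic_diffusion(noisy)
flat = (slice(None), slice(1, 30))
print(noisy[flat].std(), smoothed[flat].std())  # noise reduced on the flat region
```

The Gaussian-filtered alternative mentioned in the abstract would instead apply the same smoothing everywhere, blurring the edge along with the noise.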
Procedia PDF Downloads 380
22288 Crushing Behaviour of Thin Tubes with Various Corrugated Sections Using Finite Element Modelling
Authors: Shagil Akhtar, Syed Muneeb Iqbal, Mohammed R. Rahim
Abstract:
Common steel tubes with identical overall dimensions were used in the simulation of tubes with distinctive types of corrugated sections. These corrugated cross-sections were arc-tangent, triangular, trapezoidal and square corrugated sections. The effect of varying the tube cross-section shape on the deformation response, collapse mode and energy absorption characteristics of tubes under quasi-static axial compression has been studied numerically. The finite element package ANSYS Workbench was applied in the current analysis. The axial load-displacement curves, together with the fold formation of the different tubes, were inspected and compared. The variation of the initial peak load and the mean crushing force of the tubes with the distinctive cross-sections was carefully examined.
Keywords: absorbed energy, axial loading, corrugated tubes, finite element, initial peak load, mean crushing force
Procedia PDF Downloads 388
22287 Introduction to Techno-Sectoral Innovation System Modeling and Functions Formulating
Authors: S. M. Azad, H. Ghodsi Pour, F. Roshannafasa
Abstract:
In recent years, 'technology management and policymaking' has been one of the most important problems in management science. In this field, different generations of innovation and technology management are presented, the earliest of which is the Innovation System (IS) approach. In a general classification, innovation systems are divided into four approaches: technical, sectoral, regional and national. There is much research relating to each of these approaches in different academic fields. Every approach has some benefits, and if two or more approaches are hybridized, their benefits are combined. In addition, according to the sectoral structure of the governance model in Iran, in many sectors, such as information technology, the combination of the three other approaches with the sectoral approach is essential. Hence, in this paper, combining two IS approaches (technical and sectoral) and using system dynamics, a generic model is presented for a sample of the software industry. As a complementary point, this article introduces a new hybrid approach called the Techno-Sectoral Innovation System. This TSIS model is accomplished by adapting the concept of 'functions' from the technological IS literature and using it in the sectoral system as measurable indicators.
Keywords: innovation system, technology, techno-sectoral system, functional indicators, system dynamics
Procedia PDF Downloads 439
22286 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth studies of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. In fact, vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires some logical combination rules that define the building's damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure's behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on built-in or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, the roof structure, the envelope wall and the envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
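Under a (simplifying) assumption of independent component failures, combining component fragilities into a building-level damage probability can be sketched as below; the lognormal medians and dispersions are hypothetical placeholders, not values from ERMESS or any published fragility study.

```python
import math

def lognormal_fragility(v, median, beta):
    """P(component reaches its damage state | gust speed v),
    lognormal fragility curve with median (m/s) and dispersion beta."""
    return 0.5 * (1.0 + math.erf(math.log(v / median) / (beta * math.sqrt(2.0))))

# Hypothetical medians (m/s) and dispersions for the four components
# considered in the proof-of-concept example.
components = {
    "roof covering":     (35.0, 0.30),
    "roof structure":    (55.0, 0.30),
    "envelope wall":     (60.0, 0.35),
    "envelope openings": (45.0, 0.30),
}

def p_any_damage(v):
    """Probability that at least one component is damaged at gust speed v,
    assuming (simplistically) independent component failures."""
    p_none = 1.0
    for median, beta in components.values():
        p_none *= 1.0 - lognormal_fragility(v, median, beta)
    return 1.0 - p_none

for v in (30, 45, 60):
    print(v, round(p_any_damage(v), 3))
```

Real combination rules are generally more elaborate (correlated demands, damage-state hierarchies), which is exactly why the paper treats the interaction assumptions as something to validate against simulation-based results.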
Procedia PDF Downloads 181
22285 Realization of Soliton Phase Characteristics in 10 Gbps, Single Channel, Uncompensated Telecommunication System
Authors: A. Jawahar
Abstract:
In this paper, the dependence of soliton pulse interaction on phase in a 10 Gbps, single channel, dispersion-uncompensated telecommunication system was studied. The characteristic feature of periodic soliton interaction was noted at the interaction point (I = 6202.5 km) in one collision length of L = 12405.1 km. The interaction point is located for the 10 Gbps system with an initial relative soliton spacing (qo) of 5.28 using perturbation theory. It is shown that, when two in-phase solitons are launched, they interact at the point I = 6202.5 km, but the interaction can be restricted by introducing a phase difference initially. When the phase of the input solitons increases, the deviation of the soliton pulses at I also increases. We have successfully demonstrated this effect in a telecommunication set-up in terms of the quality factor (Q), where Q = 0 for in-phase solitons. The Q was noted to be 125.9, 38.63, 47.53, 59.60, 161.37 and 78.04 for phases of 10°, 20°, 30°, 45°, 60° and 90°, respectively, at the interaction point I.
Keywords: soliton interaction, initial relative spacing, phase, perturbation theory, telecommunication system
Procedia PDF Downloads 472
22284 Evaluation of Firearm Injury Syndromic Surveillance in Utah
Authors: E. Bennion, A. Acharya, S. Barnes, D. Ferrell, S. Luckett-Cole, G. Mower, J. Nelson, Y. Nguyen
Abstract:
Objective: This study aimed to evaluate the validity of a firearm injury query in the Early Notification of Community-based Epidemics syndromic surveillance system. Syndromic surveillance data are used at the Utah Department of Health for early detection of and rapid response to unusually high rates of violence and injury, among other health outcomes. The query of interest was defined by the Centers for Disease Control and Prevention and used chief complaint and discharge diagnosis codes to capture initial emergency department encounters for firearm injury of all intents. Design: Two epidemiologists manually reviewed electronic health records of emergency department visits captured by the query from April-May 2020, compared results, and sent conflicting determinations to two arbiters. Results: Of the 85 unique records captured, 67 were deemed probable, 19 were ruled out, and two were undetermined, resulting in a positive predictive value of 75.3%. Common reasons for false positives included non-initial encounters and misleading keywords. Conclusion: Improving the validity of syndromic surveillance data would better inform outbreak response decisions made by state and local health departments. The firearm injury definition could be refined to exclude non-initial encounters by negating words such as “last month,” “last week,” and “aftercare”; and to exclude non-firearm injury by negating words such as “pellet gun,” “air gun,” “nail gun,” “bullet bike,” and “exit wound” when a firearm is not mentioned.
Keywords: evaluation, health information system, firearm injury, syndromic surveillance
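The proposed keyword negations can be sketched as a toy chief-complaint filter; the term lists and matching logic below are illustrative only and are not the actual CDC or ESSENCE query definition.

```python
import re

# Hypothetical term lists illustrating the proposed refinement: negate
# non-initial-encounter and non-firearm keywords before flagging a record.
FIREARM_TERMS = ["gsw", "gunshot", "shot with a gun", "firearm"]
NON_FIREARM = ["pellet gun", "air gun", "nail gun", "bullet bike", "exit wound"]
NON_INITIAL = ["last month", "last week", "aftercare"]

def flags_firearm_injury(chief_complaint: str) -> bool:
    text = chief_complaint.lower()
    # Negate likely non-initial encounters first.
    if any(term in text for term in NON_INITIAL):
        return False
    # Unambiguous firearm terms always flag.
    if any(term in text for term in FIREARM_TERMS):
        return True
    # "gun"/"bullet" count only when no non-firearm phrase explains them.
    if re.search(r"\b(gun|bullet)\b", text):
        return not any(term in text for term in NON_FIREARM)
    return False

print(flags_firearm_injury("GSW to left leg"))               # flagged
print(flags_firearm_injury("injured by nail gun at work"))   # excluded
print(flags_firearm_injury("gsw aftercare from last month")) # excluded
```

Each excluded phrase directly targets one of the false-positive categories found in the manual review: follow-up visits and injuries from gun-like tools.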
Procedia PDF Downloads 166
22283 Analysis of Vibratory Signals Based on Local Mean Decomposition (LMD) for Rolling Bearing Fault Diagnosis
Authors: Toufik Bensana, Medkour Mihoub, Slimane Mekhilef
Abstract:
The use of vibration analysis has been established as the most common and reliable method of analysis in the field of condition monitoring and diagnostics of rotating machinery. Rolling bearings are used in a broad range of rotary machines and play a crucial role in the modern manufacturing industry. Unfortunately, the vibration signals collected from a faulty bearing are generally nonstationary and nonlinear, with strong noise interference, so it is essential to obtain the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the purely frequency-modulated signal is the instantaneous frequency (IF). After that, the fault characteristic frequency of the roller bearing can be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in fault detection and diagnosis of rolling element bearings.
Keywords: fault diagnosis, rolling element bearing, local mean decomposition, condition monitoring
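The final step, spectrum analysis of the instantaneous amplitude, can be sketched on a synthetic amplitude-modulated signal. Note two stated substitutions: the envelope here is obtained with a Hilbert transform rather than LMD's moving-average sifting, and the 100 Hz fault frequency and 2 kHz resonance are invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic bearing-like signal: a 2 kHz structural resonance amplitude-
# modulated at a hypothetical fault frequency of 100 Hz, plus noise.
fs = 20_000
t = np.arange(0, 1.0, 1 / fs)
fault_f = 100.0
signal = (1.0 + 0.8 * np.cos(2 * np.pi * fault_f * t)) * np.sin(2 * np.pi * 2000 * t)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Instantaneous amplitude (envelope) via the analytic signal, then its spectrum.
envelope = np.abs(hilbert(signal))
spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
peak = freqs[np.argmax(spec)]
print(peak)
```

The peak of the envelope spectrum lands at the modulation (fault) frequency even though that frequency is invisible in the raw spectrum, which is dominated by the carrier; LMD's contribution is to isolate the PF component on which this envelope analysis is meaningful.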
Procedia PDF Downloads 389
22282 A Parallel Implementation of Artificial Bee Colony Algorithm within CUDA Architecture
Authors: Selcuk Aslan, Dervis Karaboga, Celal Ozturk
Abstract:
The Artificial Bee Colony (ABC) algorithm is one of the most successful swarm-intelligence-based metaheuristics. It has been applied to a number of constrained or unconstrained numerical and combinatorial optimization problems. In this paper, we present a parallelized version of the ABC algorithm by adapting the employed and onlooker bee phases to the Compute Unified Device Architecture (CUDA) platform, a graphics processing unit (GPU) programming environment by NVIDIA. The execution speed and results of the proposed approach and the sequential version of the ABC algorithm are compared on functions that are typically used as benchmarks for optimization algorithms. Tests on standard benchmark functions with different colony sizes and numbers of parameters showed that the proposed parallelization approach decreases the total execution time consumed by the employed and onlooker bee phases and achieves similar or better quality of results compared to the standard sequential implementation of the ABC algorithm.
Keywords: Artificial Bee Colony algorithm, GPU computing, swarm intelligence, parallelization
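A minimal sequential sketch of the two phases the paper offloads to the GPU (employed and onlooker bees), shown on the sphere benchmark; the colony size, trial limit and iteration counts are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # classic benchmark objective
    return float(np.sum(x * x))

def fitness(f):                      # standard ABC fitness mapping
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def abc_minimize(obj, dim=5, n_food=20, limit=50, max_iter=300, lo=-5.0, hi=5.0):
    foods = rng.uniform(lo, hi, (n_food, dim))
    vals = np.array([obj(x) for x in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):            # v_ij = x_ij + phi * (x_ij - x_kj)
        k = rng.integers(n_food - 1)
        k = k if k < i else k + 1    # random partner, k != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] = np.clip(cand[j] + rng.uniform(-1, 1) * (cand[j] - foods[k][j]), lo, hi)
        v = obj(cand)
        if v < vals[i]:
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_food):                          # employed bee phase
            try_neighbour(i)
        fits = np.array([fitness(v) for v in vals])
        probs = fits / fits.sum()
        for i in rng.choice(n_food, n_food, p=probs):    # onlooker bee phase
            try_neighbour(i)
        worst = int(np.argmax(trials))                   # scout bee phase
        if trials[worst] > limit:
            foods[worst] = rng.uniform(lo, hi, dim)
            vals[worst] = obj(foods[worst])
            trials[worst] = 0
    return vals.min()

best = abc_minimize(sphere)
print(best)  # near the global minimum 0
```

The two inner loops are what map naturally onto CUDA threads: every food source's neighbourhood search is independent within a phase, so each can be evaluated by its own thread.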
Procedia PDF Downloads 378
22281 An Efficient Approach to Optimize the Cost and Profit of a Tea Garden by Using Branch and Bound Method
Authors: Abu Hashan Md Mashud, M. Sharif Uddin, Aminur Rahman Khan
Abstract:
In this paper, we formulate a new problem as a linear programming and integer programming problem and maximize profit within a limited budget and limited resources, based on the construction of a tea garden. It describes a new idea about how to optimize profit and focuses on the practical aspects of modeling and the challenges of providing a solution to a complex real-life problem. Finally, a comparative study is carried out among the graphical method, the simplex method, and the branch and bound method.
Keywords: integer programming, tea garden, graphical method, simplex method, branch and bound method
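The branch and bound idea, solve the LP relaxation, branch on a fractional variable, and prune by the incumbent, can be sketched on a toy two-variable profit model. All coefficients below are hypothetical stand-ins, not the paper's tea-garden data; the relaxation is solved by enumerating constraint intersections, which is only valid for two variables.

```python
import math
from itertools import combinations

# Toy model: maximize profit 5x + 4y subject to
# 6x + 4y <= 24 (land), x + 2y <= 6 (budget), x, y >= 0 and integer.
c = (5.0, 4.0)
cons = [(6.0, 4.0, 24.0), (1.0, 2.0, 6.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def lp_relax(extra):
    """Solve the 2-variable LP relaxation by checking all vertex candidates."""
    rows = cons + extra
    best = None
    for (a1, b1, r1), (a2, b2, r2) in combinations(rows, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-9:
            continue
        x = (r1 * b2 - r2 * b1) / det
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + 1e-7 for a, b, r in rows):
            z = c[0] * x + c[1] * y
            if best is None or z > best[0]:
                best = (z, x, y)
    return best

def branch_and_bound():
    best_z, best_xy = -1e18, None
    stack = [[]]                         # each item: extra branching constraints
    while stack:
        extra = stack.pop()
        sol = lp_relax(extra)
        if sol is None or sol[0] <= best_z:
            continue                     # infeasible, or pruned by the incumbent
        z, x, y = sol
        frac = [(v, i) for i, v in enumerate((x, y)) if abs(v - round(v)) > 1e-6]
        if not frac:
            best_z, best_xy = z, (round(x), round(y))   # integer: new incumbent
            continue
        v, i = frac[0]
        a = (1.0, 0.0) if i == 0 else (0.0, 1.0)
        stack.append(extra + [(a[0], a[1], math.floor(v))])    # x_i <= floor(v)
        stack.append(extra + [(-a[0], -a[1], -math.ceil(v))])  # x_i >= ceil(v)
    return best_z, best_xy

best = branch_and_bound()
print(best)  # LP optimum (3, 1.5) is fractional; B&B finds the integer optimum (4, 0)
```

The relaxation gives z = 21 at (3, 1.5); branching on y yields the integer optimum z = 20 at (4, 0), which rounding the LP solution would have missed.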
Procedia PDF Downloads 623
22280 An Ab Initio Molecular Orbital Theory and Density Functional Theory Study of Fluorous 1,3-Dion Compounds
Authors: S. Ghammamy, M. Mirzaabdollahiha
Abstract:
Quantum mechanical calculations of the energies, geometries, and vibrational wavenumbers of fluorous 1,3-dion compounds are carried out using the density functional theory (DFT/B3LYP) method with LANL2DZ basis sets. The calculated HOMO and LUMO energies show that charge transfer occurs in the molecules. The thermodynamic functions of fluorous 1,3-dion compounds have been computed at the B3LYP/LANL2DZ level. The theoretical spectrograms for the F NMR spectra of fluorous 1,3-dion compounds have also been constructed. The F NMR nuclear shieldings of fluoride ligands in fluorous 1,3-dion compounds have been studied quantum chemically.
Keywords: density functional theory, natural bond orbital, HOMO, LUMO, fluorous
Procedia PDF Downloads 389
22279 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to yielding overly conservative results, numerical methods that require extensive computational effort or copious user parameters hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient
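The core idea, that K depends on the stress distribution over a region rather than a single peak value, is visible in the classical weight-function (Green's function) integral for a center crack under symmetric loading. The sketch below uses that textbook kernel, not the paper's proposed exponential weighting; the crack length and stresses are hypothetical.

```python
import math

def sif_center_crack(sigma, a, n=2000):
    """K_I for a through crack of half-length a under a symmetric stress
    distribution sigma(x), via the Green's-function integral
        K = 2*sqrt(a/pi) * integral_0^a sigma(x)/sqrt(a^2 - x^2) dx.
    The substitution x = a*sin(t) removes the crack-tip singularity, so a
    plain midpoint rule suffices."""
    h = (math.pi / 2) / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        s += sigma(a * math.sin(t))
    return 2.0 * math.sqrt(a / math.pi) * s * h

a, s0 = 0.01, 200e6   # hypothetical: 10 mm half-crack, 200 MPa surface stress
uniform = sif_center_crack(lambda x: s0, a)                     # no gradient
graded = sif_center_crack(lambda x: s0 * (1 - x / (2 * a)), a)  # linear decay
print(uniform / (s0 * math.sqrt(math.pi * a)))  # recovers the exact K = s0*sqrt(pi*a)
print(graded / uniform)                         # the gradient lowers K below the peak-stress estimate
```

Under the decaying stress field, K drops to (1 - 1/pi) of the uniform-stress value even though the maximum stress is identical, which is exactly why a single maximum stress cannot characterize fracture at a notch.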
Procedia PDF Downloads 135
22278 Prediction of Coronary Heart Disease Using Fuzzy Logic
Authors: Elda Maraj, Shkelqim Kuka
Abstract:
Coronary heart disease causes many deaths in the world, and unfortunately this problem will continue to increase in the future. In this paper, a fuzzy logic model to predict coronary heart disease is presented. The model has been developed with seven input variables and one output variable and was implemented for 30 patients in Albania. The Fuzzy Logic Toolbox of MATLAB is used. The fuzzy model inputs are cholesterol, blood pressure, physical activity, age, BMI, smoking, and diabetes, whereas the output is the disease classification. The fuzzy sets and membership functions are chosen in an appropriate manner. The centroid method is used for defuzzification. The database is taken from the University Hospital Center "Mother Teresa" in Tirana, Albania.
Keywords: coronary heart disease, fuzzy logic toolbox, membership function, prediction model
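The pipeline the abstract describes (membership functions, rule firing, centroid defuzzification) can be sketched with two of the seven inputs. Everything below is a hypothetical toy, not the authors' MATLAB model: the membership ranges, the two rules, and the 0-100 risk universe are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def risk(chol, sbp):
    """Two-input Mamdani-style toy: rules fire by min, outputs aggregate
    by max, and the crisp score comes from centroid defuzzification."""
    high_chol = tri(chol, 200, 280, 360)
    high_bp = tri(sbp, 130, 180, 230)
    low_chol = tri(chol, 80, 160, 240)
    low_bp = tri(sbp, 80, 120, 160)
    w_high = min(high_chol, high_bp)   # rule 1: high chol AND high bp -> high risk
    w_low = min(low_chol, low_bp)      # rule 2: low chol AND low bp -> low risk
    num = den = 0.0
    for i in range(101):               # discretized risk universe 0..100
        z = float(i)
        mu = max(min(w_high, tri(z, 50, 80, 110)),   # clipped output sets
                 min(w_low, tri(z, -10, 20, 50)))
        num += z * mu
        den += mu
    return num / den if den else 50.0  # centroid (center of gravity)

print(risk(300, 190))   # clearly high inputs -> score near the "high risk" peak
print(risk(150, 115))   # clearly low inputs -> low score
```

The real model aggregates seven such inputs and a fuller rule base, but the min/max inference and centroid step are the same shape.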
Procedia PDF Downloads 161
22277 Power Generating Embedment beneath Vehicle Traffic Asphalt Roads
Authors: Ahmed Khalil
Abstract:
Discoveries in materials science create an impulse in renewable energy. Application techniques are becoming more accessible through applied science. A variety of materials, application methods, and performance-analysis techniques can convert everyday functions into energy sources. These functions include not only natural sources such as sun, wind, and water, but also the motion of tools used by human beings. In line with this, vehicles' motion, speed, and weight become energy sources when combined with piezoelectric nano-generators beneath the roads. Numerous application examples have been put forward with repeatable average performance, against challenges that differ with geography and project conditions. Such a holistic approach provides a way to feed back into the research and improvement process of nano-generators beneath asphalt roads. This paper introduces the specific application methods of a piezoelectric nano-generator beneath the asphalt roads of Ahmadi Township in Kuwait.
Keywords: nano-generator pavements, piezoelectric, renewable energy, transducer
Procedia PDF Downloads 115
22276 A Case for Introducing Thermal-Design Optimisation Using Excel Spreadsheet
Authors: M. M. El-Awad
Abstract:
This paper deals with the introduction of thermal-design optimisation to engineering students by using Microsoft Excel as a modelling platform. Thermal-design optimisation is an iterative process which involves the evaluation of many thermo-physical properties that vary with temperature and/or pressure. Therefore, suitable modelling software, such as Engineering Equation Solver (EES) or Interactive Thermodynamics (IT), is usually used for this purpose. However, such proprietary applications may not be available to many educational institutions in developing countries. This paper presents a simple thermal-design case that demonstrates how the principles of thermo-fluids and economics can be jointly applied to find an optimum solution to a thermal-design problem. The paper describes the solution steps and provides all the equations needed to solve the case with Microsoft Excel. The paper also highlights the advantage of using VBA (Visual Basic for Applications) to develop user-defined functions when repetitive or complex calculations are required. VBA makes Excel a powerful, yet affordable, computational platform for introducing various engineering principles.
Keywords: engineering education, thermal design, Excel, VBA, user-defined functions
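The joint thermo-fluids/economics optimisation the paper teaches can be illustrated by a classic case: choosing insulation thickness to trade capital cost against lifetime heat-loss cost. This is a generic sketch in Python rather than Excel/VBA, and every number (conductivity, costs, operating life) is a hypothetical placeholder, not data from the paper.

```python
import math

# Hypothetical steam-pipe case: thicker insulation costs more up front but
# cuts the present-worth cost of heat lost over the equipment life.
k = 0.05            # insulation conductivity, W/(m.K)
R0 = 0.1            # fixed wall + surface resistance, m2.K/W
dT = 150.0          # hot-side to ambient temperature difference, K
c_ins = 800.0       # installed insulation cost per m2 per metre of thickness, $
c_heat = 0.04       # present-worth cost of heat, $/kWh
hours = 8000 * 10   # operating hours over a 10-year life

def total_cost(t):
    """Total cost per m2 of surface at insulation thickness t (m)."""
    q = dT / (t / k + R0)                  # heat flux, W/m2
    energy_cost = q / 1000 * hours * c_heat
    return c_ins * t + energy_cost

# Golden-section search for the minimum over a bracket of thicknesses;
# total_cost is convex in t, so the search converges to the optimum.
lo, hi = 0.001, 0.5
g = (math.sqrt(5) - 1) / 2
while hi - lo > 1e-6:
    a, b = hi - g * (hi - lo), lo + g * (hi - lo)
    if total_cost(a) < total_cost(b):
        hi = b
    else:
        lo = a
t_opt = (lo + hi) / 2
print(t_opt, total_cost(t_opt))
```

In the spreadsheet version, `total_cost` is exactly the kind of repetitive calculation the paper recommends wrapping in a VBA user-defined function, with Solver or a one-variable data table playing the role of the search loop.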
Procedia PDF Downloads 375
22275 Competitor Integration with Voice of Customer Ratings in QFD Studies Using Geometric Mean Based on AHP
Authors: Zafar Iqbal, Nigel P. Grigg, K. Govindaraju, Nicola M. Campbell-Allen
Abstract:
Quality Function Deployment (QFD) is a structured approach that has been used to improve the quality of products and processes in a wide range of fields. Using this systematic tool, practitioners normally rank Voice of Customer ratings (VoCs) in order to produce Improvement Ratios (IRs), which become the basis for prioritising process/product design or improvement activities. In one matrix of the House of Quality (HOQ), competitors are rated. The usual method of obtaining IRs does not always integrate the competitor ratings in a systematic way that fully utilises competitor rating information. This can have the effect of diverting QFD practitioners' attention from a potentially important VOC to a less important one. In order to enhance QFD analysis, we present a more systematic method for integrating competitor ratings, utilising the geometric mean of the customer rating matrix. In this paper we develop a new approach, based on the Analytic Hierarchy Process (AHP), in which we generate a matrix of multiple comparisons of all competitors and derive a geometric mean for each competitor. For each VOC, an improved IR is derived which, we argue herein, enhances the initial VOC importance ratings by integrating more information about competitor performance. In this way, our method can help overcome one of the possible shortcomings of QFD. We then use a published QFD example from the literature as a case study to demonstrate the use of the new AHP-based IRs, and show how these can be used to re-rank existing VOCs to, arguably, better achieve the goal of customer satisfaction in relation to VOC ratings and competitors' rankings. We demonstrate how the two-dimensional AHP-based geometric mean derived from the multiple-competitor comparison matrix can be useful for analysing competitors' rankings.
Our method utilises an established methodology (AHP) within an established application (QFD), but in an original way (through the competitor analysis matrix), to achieve a novel improvement.
Keywords: quality function deployment, geometric mean, improvement ratio, AHP, competitors ratings
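The AHP building block the abstract relies on, deriving competitor weights from a pairwise-comparison matrix via row geometric means, is compact enough to show directly. The comparison values below are a hypothetical Saaty-scale example, not data from the paper's case study.

```python
import math

def ahp_priorities(M):
    """Priority vector from an AHP pairwise-comparison matrix using the
    row geometric mean, normalised so the weights sum to 1."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]
    s = sum(gm)
    return [g / s for g in gm]

# Hypothetical pairwise comparisons of three competitors on one VOC
# (M[i][j] > 1 means competitor i outperforms competitor j).
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_priorities(M)
print(w)   # competitor 1 clearly dominates on this VOC
```

In the paper's scheme, weights like `w` feed into an enhanced Improvement Ratio for each VOC, so that a VOC on which competitors are strongly differentiated carries more weight than the raw customer rating alone would give it.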
Procedia PDF Downloads 366
22274 SVM-Based Modeling of Mass Transfer Potential of Multiple Plunging Jets
Authors: Surinder Deswal, Mahesh Pal
Abstract:
The paper investigates the potential of a support vector machine based regression approach to model the mass transfer capacity of multiple plunging jets, both vertical (θ = 90°) and inclined (θ = 60°). The data set used in this study consists of four input parameters with a total of eighty-eight cases. For testing, tenfold cross-validation was used. Correlation coefficient values of 0.971 and 0.981 (root mean square error values of 0.0025 and 0.0020) were achieved by using polynomial and radial basis kernel function based support vector regression, respectively. The results suggest an improved performance of the radial basis function in comparison to the polynomial kernel based support vector machine. The overall mass transfer coefficient estimated by both kernel functions is in good agreement with the actual experimental values (within a scatter of ±15%), thereby suggesting the utility of the support vector machine based regression approach.
Keywords: mass transfer, multiple plunging jets, support vector machines, ecological sciences
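The evaluation protocol, an RBF-kernel regression over 88 four-input cases scored by tenfold cross-validated correlation, can be sketched as follows. Note the stand-in: this uses kernel ridge regression with an RBF kernel rather than the paper's support vector regression, and the target function and hyperparameters are synthetic illustrations.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_predict(Xtr, ytr, Xte, gamma=1.0, lam=1e-3):
    """Kernel ridge regression with an RBF kernel (a simple stand-in for SVR)."""
    K = rbf_kernel(Xtr, Xtr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return rbf_kernel(Xte, Xtr, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(88, 4))        # 88 cases, 4 inputs (as in the study)
y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]     # synthetic nonlinear target

# tenfold cross-validation, scored by the correlation coefficient
idx = rng.permutation(88)
preds = np.empty(88)
for f in range(10):
    te = idx[f::10]
    tr = np.setdiff1d(idx, te)
    preds[te] = fit_predict(X[tr], y[tr], X[te])
r = np.corrcoef(y, preds)[0, 1]
print(r)   # high out-of-fold correlation on this smooth target
```

Swapping `fit_predict` for a polynomial kernel reproduces the paper's other comparison arm; in their data the RBF variant scored slightly higher (0.981 vs 0.971).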
Procedia PDF Downloads 464
22273 Improving the Performances of the nMPRA Architecture by Implementing Specific Functions in Hardware
Authors: Ionel Zagan, Vasile Gheorghita Gaitan
Abstract:
Minimizing the response time to asynchronous events in a real-time system is an important factor in increasing the speed of response and an interesting concept in designing equipment fast enough for the most demanding applications. This article presents the results regarding the validation of the nMPRA (Multi Pipeline Register Architecture) architecture using the FPGA Virtex-7 circuit. The nMPRA concept is a hardware processor with the scheduler implemented at the processor level; this is done without affecting possible bus communication, as is the case with other CPU solutions. The implementation of static or dynamic scheduling operations in hardware and the improved handling of interrupts and events by the real-time executive described in this article represent a key solution for eliminating the overhead of operating system functions. The nMPRA processor is capable of executing preemptive scheduling, using various algorithms without a software scheduler. Therefore, we also present various scheduling methods and algorithms used in scheduling real-time tasks.
Keywords: nMPRA architecture, pipeline processor, preemptive scheduling, real-time system
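The scheduling behavior that nMPRA moves into hardware, fixed-priority preemption of ready tasks, can be illustrated with a small software simulation. This is purely a behavioral sketch of preemptive priority scheduling, not a model of the nMPRA pipeline; the task set is hypothetical.

```python
import heapq

def simulate(tasks, horizon):
    """Fixed-priority preemptive scheduling, one time tick per iteration
    (lower priority number = more urgent).
    tasks: list of (name, period, wcet, priority). Returns the run trace."""
    ready = []                                  # heap of (prio, release, name, remaining)
    trace = []
    releases = {name: 0 for name, *_ in tasks}
    for t in range(horizon):
        for name, period, wcet, prio in tasks:  # release new jobs at their periods
            if t == releases[name]:
                heapq.heappush(ready, (prio, t, name, wcet))
                releases[name] += period
        if ready:                               # always run the highest-priority job
            prio, rel, name, rem = heapq.heappop(ready)
            trace.append(name)
            if rem > 1:
                heapq.heappush(ready, (prio, rel, name, rem - 1))
        else:
            trace.append('idle')
    return trace

# hypothetical task set: T1 is short and urgent, T2 is a long background job
trace = simulate([('T1', 4, 1, 0), ('T2', 12, 6, 1)], 12)
print(trace)  # T1 preempts T2 mid-execution at t = 4 and t = 8
```

In nMPRA this pick-the-highest-priority-ready-task decision happens in hardware every cycle via per-task register banks, so the preemption at t = 4 costs no software context-switch overhead.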
Procedia PDF Downloads 368
22272 Teachers’ Instructional Decisions When Teaching Geometric Transformations
Authors: Lisa Kasmer
Abstract:
Teachers’ instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers’ instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant because current and future teachers need more effective ways to teach geometry to their students. The following research questions frame this study: (1) As middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to make decisions about teaching transformations with or without a coordinate system, and (2) How do the goals, resources, and orientations of these teachers impact their instructional decisions, and what do they reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip or hurriedly teach transformations at the end of the school year. However, transformations are an important mathematical topic, as this topic supports students’ understanding of geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but also for proofs, algebraic functions, calculus, etc. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics.
Teachers’ instructional decisions, in terms of the goals, orientations, and resources that support these decisions, were analyzed using open-coding. Open-coding is recognized as an initial first step in qualitative analysis, where comparisons are made and preliminary categories are considered. Initial codes and categories from current research on teachers’ thinking processes related to the decisions they make while planning and reflecting on lessons were also noted. Surfacing ideas and additional themes common across teachers were compared and analyzed while seeking patterns. Finally, attributes of teachers’ goals, orientations, and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers’ own mathematical knowledge related to teaching transformations. When a teacher does not have a robust understanding of transformations, they are limited by this lack of knowledge. These shortcomings impact students’ opportunities to learn, and thus disadvantage their understanding of transformations. Teachers’ goals are also limited by their paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well.
Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics
Procedia PDF Downloads 88
22271 Dual-Rail Logic Unit in Double Pass Transistor Logic
Authors: Hamdi Belgacem, Fradi Aymen
Abstract:
In this paper we present a low-power, low-cost differential logic unit (LU). The proposed LU receives dual-rail inputs and generates dual-rail outputs. The proposed circuit can be used in the Arithmetic and Logic Unit (ALU) of a processor. It can also be dedicated to self-checking applications based on a dual duplication code. Four logic functions, as well as their inverses, are implemented within a single logic unit. The hardware overhead for the implementation of the proposed LU is lower than that required for a standard LU implemented in standard CMOS logic style. This new implementation is attractive, as fewer transistors are required to implement important logic functions. The proposed differential logic unit can perform 8 Boolean logical operations using only 16 transistors. Spice simulations using a 32 nm technology were used to evaluate the performance of the proposed circuit and to prove its acceptable electrical behaviour.
Keywords: differential logic unit, double pass transistor logic, low power CMOS design, low cost CMOS design
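The dual-rail encoding itself, each signal carried as a complementary pair (f, f_bar), is easy to model at the logic level. The sketch below is a behavioral truth-table model (not the transistor-level DPL circuit): each operation derives the true rail and the false rail separately, and inversion is free because it only swaps rails, which is why the four functions and their inverses fit in one unit.

```python
def dr(v):
    """Encode a boolean as a dual-rail pair (value, complement)."""
    return (int(v), int(not v))

def dr_and(a, b):
    # true rail from the true rails; false rail from the false rails
    return (a[0] & b[0], a[1] | b[1])

def dr_or(a, b):
    return (a[0] | b[0], a[1] & b[1])

def dr_xor(a, b):
    return (a[0] & b[1] | a[1] & b[0],   # XOR on the true rail
            a[0] & b[0] | a[1] & b[1])   # XNOR on the false rail

def dr_not(a):
    """Inversion costs nothing in dual-rail logic: just swap the rails."""
    return (a[1], a[0])

# Every valid output is a complementary pair; a stuck rail (f == f_bar)
# is a detectable non-code word, which is what self-checking exploits.
for x in (0, 1):
    for y in (0, 1):
        for op, ref in ((dr_and, x & y), (dr_or, x | y), (dr_xor, x ^ y)):
            f, fb = op(dr(x), dr(y))
            assert f == ref and fb == 1 - ref
print("all dual-rail outputs complementary")
```

The self-checking property mentioned in the abstract follows directly: any single fault that forces f == f_bar produces an invalid code word that a checker can flag.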
Procedia PDF Downloads 452
22270 Alternative General Formula to Estimate and Test Influences of Early Diagnosis on Cancer Survival
Authors: Li Yin, Xiaoqin Wang
Abstract:
Background and purpose: Cancer diagnosis is part of a complex stochastic process, in which patients' personal and social characteristics influence the choice of diagnosing methods; diagnosing methods, in turn, influence the initial assessment of cancer stage; the initial assessment, in turn, influences the choice of treating methods; and treating methods, in turn, influence cancer outcomes such as cancer survival. To evaluate diagnosing methods, one needs to estimate and test the causal effect of a regime of cancer diagnosis and treatments. Recently, Wang and Yin (Annals of Statistics, 2020) derived a new general formula, which expresses these causal effects in terms of the point effects of treatments in single-point causal inference. As a result, it is possible to estimate and test these causal effects via point effects. The purpose of this work is to estimate and test causal effects under various regimes of cancer diagnosis and treatments via point effects. Challenges and solutions: The cancer stage is influenced by the earlier diagnosis and in turn influences the subsequent treatments. As a consequence, it is highly difficult to estimate and test the causal effects via standard parameters, that is, the conditional survival given all stationary covariates, diagnosing methods, cancer stage and prognosis factors, and treating methods. Instead of standard parameters, we use the point effects of cancer diagnosis and treatments to estimate and test causal effects under various regimes of cancer diagnosis and treatments. We are able to use familiar methods in the framework of single-point causal inference to accomplish this task. Achievements: We have applied this method to stomach cancer survival from a clinical study in Sweden.
We have studied causal effects under various regimes, including the optimal regime of diagnosis and treatments, and the moderation of the causal effect by age and gender.
Keywords: cancer diagnosis, causal effect, point effect, G-formula, sequential causal effect
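The single-point building block the abstract leans on, standardising a conditional outcome over a confounder's distribution (a one-step g-formula), can be shown on simulated data. The data-generating numbers below are invented for illustration; this is the textbook point-effect computation, not the authors' full sequential-regime analysis.

```python
import random

random.seed(3)
# Toy single-point setting: L = severe stage, A = intensive treatment,
# Y = 3-year survival. Severe patients are treated more often but survive
# less, so the crude treated-vs-untreated contrast is confounded.
data = []
for _ in range(20000):
    L = random.random() < 0.4
    A = random.random() < (0.8 if L else 0.3)
    p = 0.5 + 0.2 * A - 0.3 * L          # true causal effect of A is +0.2
    data.append((L, A, random.random() < p))

def mean_y(rows):
    return sum(y for *_, y in rows) / len(rows)

def g_formula(a):
    """Standardise E[Y | A=a, L] over the marginal distribution of L."""
    out = 0.0
    for l in (False, True):
        strat = [r for r in data if r[0] == l]
        treated = [r for r in strat if r[1] == a]
        out += (len(strat) / len(data)) * mean_y(treated)
    return out

crude = mean_y([r for r in data if r[1]]) - mean_y([r for r in data if not r[1]])
adjusted = g_formula(True) - g_formula(False)
print(crude)     # badly attenuated by confounding through L
print(adjusted)  # close to the true point effect of +0.2
```

The general formula of Wang and Yin reduces a whole diagnosis-and-treatment regime to a sequence of such point effects, each estimable with this familiar single-point machinery.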
Procedia PDF Downloads 195
22269 Magnetic Activated Carbon: Preparation, Characterization, and Application for Vanadium Removal
Authors: Hakimeh Sharififard, Mansooreh Soleimani
Abstract:
In this work, a magnetic activated carbon nanocomposite (Fe-CAC) has been synthesized by anchoring iron hydr(oxide) nanoparticles onto a commercial activated carbon (CAC) surface and characterized using BET, XRF, and SEM techniques. The influence of various removal parameters, such as pH, contact time, and initial concentration of vanadium, on vanadium removal was evaluated using CAC and Fe-CAC in a batch method. The sorption isotherms were studied using the Langmuir, Freundlich, and Dubinin–Radushkevich (D–R) isotherm models. The equilibrium data were well described by the Freundlich model. Results showed that CAC had a vanadium adsorption capacity of 37.87 mg/g, while the Fe-CAC was able to adsorb 119.01 mg/g of vanadium. The kinetic data were found to conform to the pseudo-second-order kinetic model for both adsorbents.
Keywords: magnetic activated carbon, removal, vanadium, nanocomposite, Freundlich
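Fitting the Freundlich model that best described the equilibrium data is a standard linearisation: qe = K_F * Ce^(1/n), so ln(qe) = ln(K_F) + (1/n)*ln(Ce). The sketch below uses synthetic equilibrium data generated from assumed parameters (K_F = 30, 1/n = 0.4), not the paper's vanadium measurements, purely to show the fitting procedure.

```python
import math

# Hypothetical equilibrium data: Ce (mg/L) vs adsorbed amount qe (mg/g),
# generated from qe = K_F * Ce**(1/n) with K_F = 30 and 1/n = 0.4.
Ce = [1, 5, 10, 20, 50, 100]
qe = [30 * c ** 0.4 for c in Ce]

# Freundlich linearisation: ln(qe) = ln(K_F) + (1/n) * ln(Ce),
# fitted by ordinary least squares on the log-log data.
x = [math.log(c) for c in Ce]
y = [math.log(q) for q in qe]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
sxx = sum((xi - xb) ** 2 for xi in x)
slope = sxy / sxx                 # estimate of 1/n
K_F = math.exp(yb - slope * xb)   # estimate of K_F from the intercept
print(K_F, slope)  # recovers K_F ~ 30 and 1/n ~ 0.4
```

With real batch data, the same regression yields the reported capacities, and comparing its R-squared against the Langmuir and D–R linearisations is how the Freundlich model was judged the best fit.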
Procedia PDF Downloads 463