Search results for: optimization methods
16000 Reliability and Validity of Determining Ventilatory Threshold and Respiratory Compensation Point by Near-Infrared Spectroscopy
Authors: Tso-Yen Mao, De-Yen Liu, Chun-Feng Huang
Abstract:
Purpose: This research investigates the reliability and validity of the ventilatory threshold (VT) and respiratory compensation point (RCP) determined from skeletal muscle hemodynamic status. Methods: One hundred healthy males (age: 22±3 yrs; height: 173.1±6.0 cm; weight: 67.1±10.5 kg) performed a graded cycling exercise test during which ventilatory and skeletal muscle hemodynamic data were collected simultaneously. VT and RCP were determined by the combined V-slope (VE vs. VCO2) and ventilatory efficiency (VE/VO2 vs. VE/VCO2) methods. Pearson correlation, paired t-tests, and Bland-Altman plots were used to analyze reliability, validity, and agreement. Statistical significance was set at α = .05. Results: There were high test-retest correlations of VT and RCP for both the ventilatory and near-infrared spectroscopy (NIRS) methods (VT vs. VTNIRS: 0.95 vs. 0.94; RCP vs. RCPNIRS: 0.93 vs. 0.93, p < .05). The first timing point at which O2Hb decreased showed a high coefficient of determination with VT (R2 = 0.88, p < .05), and the second timing point of O2Hb decline showed a high coefficient of determination with RCP (R2 = 0.89, p < .05). VO2 at VT and RCP did not differ significantly between the ventilatory and NIRS methods (p > .05). Conclusion: Using the NIRS method to determine VT and RCP is reliable and valid in male individuals during graded exercise. Non-invasive monitoring of skeletal muscle hemodynamics can also be used to control training intensity in the future.
Keywords: anaerobic threshold, exercise intensity, hemodynamic, NIRS
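As a rough illustration of the agreement statistics named above (Pearson correlation, paired t-test, and Bland-Altman limits of agreement), here is a minimal Python sketch; the paired VO2 values are invented for demonstration and are not the study's data.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical paired VO2 values (L/min) at VT from the two methods
vt_ventilatory = np.array([2.10, 2.35, 1.98, 2.50, 2.22, 2.41, 2.05, 2.30])
vt_nirs        = np.array([2.14, 2.31, 2.02, 2.47, 2.25, 2.38, 2.09, 2.28])

# Validity: Pearson correlation and paired t-test, as in the abstract
r, p_r = stats.pearsonr(vt_ventilatory, vt_nirs)
t, p_t = stats.ttest_rel(vt_ventilatory, vt_nirs)
print(f"r = {r:.2f} (p = {p_r:.3f}), paired t-test p = {p_t:.3f}")

# Bland-Altman plot: mean vs. difference, with bias and 95% limits of agreement
mean = (vt_ventilatory + vt_nirs) / 2
diff = vt_ventilatory - vt_nirs
bias, loa = diff.mean(), 1.96 * diff.std(ddof=1)
plt.scatter(mean, diff)
for y in (bias, bias - loa, bias + loa):
    plt.axhline(y, linestyle="--")
plt.xlabel("Mean VO2 (L/min)"); plt.ylabel("Difference (L/min)")
plt.show()
```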
Procedia PDF Downloads 313
15999 A Comparative Study on ANN, ANFIS and SVM Methods for Computing Resonant Frequency of A-Shaped Compact Microstrip Antennas
Authors: Ahmet Kayabasi, Ali Akdagli
Abstract:
In this study, three robust prediction methods, namely the artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), and support vector machine (SVM), were used for computing the resonant frequency of A-shaped compact microstrip antennas (ACMAs) operating in the UHF band. Firstly, the resonant frequencies of 144 ACMAs with various dimensions and electrical parameters were simulated with the help of IE3D™, which is based on the method of moments (MoM). The ANN, ANFIS and SVM models for computing the resonant frequency were then built from the simulation data: 124 simulated ACMAs were utilized for training, and the remaining 20 ACMAs were used for testing the models. The performance of the ANN, ANFIS and SVM models was compared over the training and test processes. The average percentage errors (APE) of the computed resonant frequencies in training were 0.457%, 0.399% and 0.600% for the ANN, ANFIS and SVM, respectively. The constructed models were then tested, and APE values of 0.601% for ANN, 0.744% for ANFIS and 0.623% for SVM were achieved. The results show that the ANN, ANFIS and SVM methods can be successfully applied to compute the resonant frequency of ACMAs, since they are useful and versatile methods that yield accurate results.
Keywords: a-shaped compact microstrip antenna, artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM)
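A minimal sketch of the train/test protocol and the APE metric described above, using scikit-learn stand-ins (an MLP for the ANN and an SVR for the SVM; ANFIS has no standard scikit-learn implementation and is omitted). The antenna data here are synthetic placeholders, not the IE3D simulations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Synthetic stand-in for the 144 simulated ACMAs: features are antenna
# dimensions/electrical parameters, target is resonant frequency (GHz)
X = rng.uniform(0.5, 3.0, size=(144, 5))
y = 1.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.1 * rng.standard_normal(144)

X_train, y_train = X[:124], y[:124]   # 124 for training, as in the study
X_test, y_test = X[124:], y[124:]     # remaining 20 for testing

def ape(y_true, y_pred):
    """Average percentage error, the metric reported in the abstract."""
    return 100 * np.mean(np.abs((y_true - y_pred) / y_true))

for name, model in [("ANN", MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)),
                    ("SVM", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_train, y_train)
    print(name, "test APE: %.3f%%" % ape(y_test, model.predict(X_test)))
```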
Procedia PDF Downloads 441
15998 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
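For readers unfamiliar with the two grid traversals mentioned above, the sketch below contrasts a Jacobi sweep (all reads from the previous iterate) with a Gauss-Seidel sweep (reads of already-updated values) on a toy 2-D problem in Python. This is only the numerical pattern, not Dido's generated ORWL/C code.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi sweep: every update reads only the previous iterate."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def gauss_seidel_step(u):
    """One Gauss-Seidel sweep: updates reuse values already written
    in the current sweep, so the traversal order matters."""
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u

u = np.zeros((64, 64))
u[0, :] = 1.0            # fixed boundary values
for _ in range(100):
    u = jacobi_step(u)   # swap in gauss_seidel_step(u) for the other traversal
```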
Procedia PDF Downloads 127
15997 Theoretical and Experimental Electrostatic Potential around the M-Nitrophenol Compound
Authors: Drissi Mokhtaria, Chouaih Abdelkader, Fodil Hamzaoui
Abstract:
Our work is a comparison of experimental and theoretical results for the electron charge density distribution and the electrostatic potential around the m-nitrophenol molecule (m-NPH), known for its interesting physical characteristics. The experimental results were obtained from a high-resolution X-ray diffraction study. Theoretical investigations were performed with the Gaussian program using Density Functional Theory (DFT) at the B3LYP/6-31G* level of theory. The multipolar model of Hansen and Coppens was used for the experimental electron charge density distribution around the molecule, while the DFT methods were used for the theoretical calculations. The electron charge density obtained by both methods allowed us to derive different molecular properties, such as the electrostatic potential and the dipole moment, which were finally compared, showing good agreement between the results of the two methods.
Keywords: electron charge density, m-nitrophenol, nonlinear optical compound, electrostatic potential, optimized geometry
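As a toy illustration of evaluating an electrostatic potential from a charge model (a drastically simplified point-charge stand-in for the multipolar or DFT charge density used above), consider the following sketch; the positions and charges are invented:

```python
import numpy as np

# Hypothetical point-charge model (atomic units): positions (Bohr) and net
# atomic charges, standing in for the multipolar/DFT charge density
positions = np.array([[0.0, 0.0, 0.0], [2.3, 0.0, 0.0], [-1.1, 1.9, 0.0]])
charges   = np.array([-0.45, 0.30, 0.15])

def esp(r, positions, charges):
    """Electrostatic potential at point r from point charges (a.u., V = sum q/r)."""
    d = np.linalg.norm(positions - r, axis=1)
    return np.sum(charges / d)

print(esp(np.array([0.0, 0.0, 3.0]), positions, charges))
```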
Procedia PDF Downloads 268
15996 Choral Singers' Preference for Expressive Priming Techniques
Authors: Shawn Michael Condon
Abstract:
Current research on teaching expressivity mainly involves instrumentalists. This study focuses on choral singers' preference for priming techniques based on four methods for teaching expressivity. A total of 112 choral singers answered a survey about their preferred methods for priming expressivity (vocal modelling, using metaphor, tapping into felt emotions, and drawing on past experiences) in three conditions (active, passive, and instructor). Analysis revealed a higher preference for drawing on past experience among more experienced singers. The most preferred technique in the passive and instructor roles was vocal modelling, with metaphors and tapping into felt emotions favoured in an active role. Priming techniques are often used in combination with other methods to enhance singing technique or expressivity and are dependent upon the situation, repertoire, and the preferences of the instructor and performer.
Keywords: emotion, expressivity, performance, singing, teaching
Procedia PDF Downloads 155
15995 Physical, Microstructural and Functional Quality Improvements of Cassava-Sorghum Composite Snacks
Authors: Adil Basuki Ahza, Michael Liong, Subarna Suryatman
Abstract:
Healthy chips now dominate the snack market shelves; more than 80% of processed snack foods on the market are chips. This research takes advantage of twin-screw extrusion technology to produce two types of product, i.e., directly expanded chips and intermediate ready-to-fry or microwavable chips. To improve the functional quality, the cereal-tuber based mix was enriched with an antioxidant-rich mix of temurui, celery, carrot, and isolated soy protein (ISP) powder. The objectives of this research were to find the best cassava-sorghum composite ratio (60:40, 70:30 or 80:20), to optimize the extrusion processing conditions, and to study the microstructural, physical, and sensorial characteristics of the final products. Optimization was first done by applying metering-section barrel temperatures of 120, 130 and 140 °C with screw speeds of 150, 160 and 170 rpm to produce the directly expanded product. The intermediate product was extruded at 100 °C and 100 rpm screw speed with feed moisture contents of 35, 40 and 45%. The directly expanded products were analyzed for color, hardness, density, microstructure, and organoleptic properties. The results showed that the interaction of the cassava-sorghum ratio and the cooking method affected the product's color, hardness, and bulk density (p<0.05). Extrusion processing conditions also significantly affected the product's microstructure (p<0.05). The directly expanded snacks with the 80:20 cassava-sorghum ratio and the fried expanded ones with the 70:30 and 80:20 ratios showed the best organoleptic scores (slightly liked), while baking the intermediate product in a microwave resulted in chips of sensorially unacceptable quality.
Keywords: cassava-sorghum composite, extrusion, microstructure, physical characteristics
Procedia PDF Downloads 282
15994 Optimal Construction Using Multi-Criteria Decision-Making Methods
Authors: Masood Karamoozian, Zhang Hong
Abstract:
The necessity and complexity of the decision-making process, and the need to weigh the various interfering factors and consider all factors relevant to a problem, are very obvious nowadays. Hence, researchers show their interest in multi-criteria decision-making methods. In this research, the Analytical Hierarchy Process (AHP), Simple Additive Weighting (SAW), and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) methods of multi-criteria decision-making have been used to solve the problem of selecting optimal construction systems. The systems evaluated include Light Steel Frames (LSF), a case study of designs by the Zhang Hong studio at the Southeast University of Nanjing; the Insulating Concrete Form (ICF); the Ordinary Construction System (OCS); and the Prefabricated Concrete System (PRCS), another case study of designs from the Zhang Hong studio at the Southeast University of Nanjing. Crowdsourcing was done using a questionnaire at the sample level (200 people). Questionnaires were distributed among experts, university centers, and conferences. According to the results of the research, the different decision-making methods led to largely the same results. With all three multi-criteria decision-making methods mentioned above, the Prefabricated Concrete System (PRCS) was in the first rank, and the Light Steel Frame (LSF) system ranked second. Also, the Prefabricated Concrete System (PRCS) was ranked first in terms of performance standards and economics, while the Light Steel Frame (LSF) system took the first rank in terms of environmental standards.
Keywords: multi-criteria decision making, AHP, SAW, TOPSIS
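Of the three methods compared above, TOPSIS is the most mechanical to implement, so here is a small illustrative sketch. The decision matrix, weights, and alternative scores are invented placeholders, not the study's questionnaire data:

```python
import numpy as np

# Hypothetical decision matrix: rows = alternatives (LSF, ICF, OCS, PRCS),
# columns = criteria scores aggregated from the questionnaires
X = np.array([[7.0, 6.5, 8.0],
              [6.0, 7.0, 6.5],
              [5.0, 5.5, 6.0],
              [8.0, 7.5, 7.0]])
w = np.array([0.5, 0.3, 0.2])           # criteria weights (e.g., from AHP)
benefit = np.array([True, True, True])  # all criteria treated as benefits here

# 1. Vector-normalize and weight the decision matrix
V = w * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions per criterion
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3. Closeness coefficient: distance to anti-ideal over total distance
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

for name, c in zip(["LSF", "ICF", "OCS", "PRCS"], closeness):
    print(f"{name}: {c:.3f}")   # higher closeness = better rank
```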
Procedia PDF Downloads 110
15993 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation
Authors: H. Khanfari, M. Johari Fard
Abstract:
Dealing with carbonate reservoirs can be mind-boggling for reservoir engineers due to the various diagenetic processes that cause a variety of properties throughout the reservoir. A good estimation of reservoir heterogeneity, which is defined as the variation in rock properties with location in a reservoir or formation, can help in modeling the reservoir better and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe this heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods are used to describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the vertical heterogeneity of a reservoir, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both of which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum from that point of view. In this paper, we correlate the statistics-based Lorenz method with a petroleum concept, i.e., the Kozeny-Carman equation, and derive the straight-line Lorenz plot for a homogeneous system. Finally, we apply the two methods to a heterogeneous field in southern Iran and discuss each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)
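A small sketch of how the Lorenz coefficient is typically computed from core data (cumulative flow capacity versus cumulative storage capacity); the layer properties below are invented for illustration:

```python
import numpy as np

# Hypothetical layered reservoir: permeability k (mD) and porosity phi per layer
k   = np.array([250.0, 90.0, 40.0, 12.0, 3.0])
phi = np.array([0.22, 0.20, 0.18, 0.15, 0.10])
h   = np.ones_like(k)  # layer thicknesses

# Sort layers by decreasing k/phi, then build cumulative flow vs. storage capacity
order = np.argsort(-(k / phi))
flow    = np.cumsum((k * h)[order]) / np.sum(k * h)
storage = np.cumsum((phi * h)[order]) / np.sum(phi * h)
flow, storage = np.insert(flow, 0, 0.0), np.insert(storage, 0, 0.0)

# Lorenz coefficient: twice the area between the curve and the 45-degree line;
# L = 0 for a homogeneous system, approaching 1 with extreme heterogeneity
L = 2 * (np.trapz(flow, storage) - 0.5)
print(f"Lorenz coefficient: {L:.3f}")
```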
Procedia PDF Downloads 220
15992 Motivational Orientation of the Methodical System of Teaching Mathematics in Secondary Schools
Authors: M. Rodionov, Z. Dedovets
Abstract:
The article analyses the composition and structure of the motivationally oriented methodological system of teaching mathematics (purpose, content, methods, forms, and means of teaching), viewed through the prism of the student as the subject of the learning process. Particular attention is paid to the problem of methods of teaching mathematics, which are represented in the form of an ordered triad of attributes corresponding to the selected characteristics. A systematic analysis of possible options and their methodological interpretation enriched existing ideas about known methods and technologies of training and significantly expanded their nomenclature by including previously unstudied combinations of characteristics. In addition, the examples outlined in this article illustrate the possibilities of enhancing the motivational capacity of a particular method or technology in the real practice of teaching mathematics through freer goal-setting and varying the conditions of problem situations. The authors recommend the implementation of different strategies according to their characteristics in teaching and learning mathematics in secondary schools.
Keywords: education, methodological system, the teaching of mathematics, student motivation
Procedia PDF Downloads 354
15991 Translation Training in the AI Era
Authors: Min Gao
Abstract:
In the past year, the advent of large language models (LLMs) has brought about a revolution in the language service industry, making it possible to efficiently produce more satisfactory and higher-quality translations. This is groundbreaking news for commercial companies involved in language services, since much of a translator's work can now be completed by machines. However, it may be bad news for universities that provide translation training programs. They need to confront the challenges posed by AI in education by reconsidering issues such as the reform of traditional teaching methods, the translation ethics of students, and the new demands of the job market for their graduates. This article is an exploratory study of these issues based on the author's experiences in translation teaching. The research combines questionnaires and interviews. The findings include: (1) students may lose their motivation to learn in the AI era, but this can be compensated for by encouragement from the lecturer; (2) translation ethics are not a serious problem in schools, considering the strict policies and regulations in place; (3) the role of translators has evolved in the new era, necessitating a reform of the traditional teaching methods.
Keywords: job market of translation, large language model, translation ethics, translation training
Procedia PDF Downloads 68
15990 Normal Weight Obesity among Female Students: BMI as a Non-Sufficient Tool for Obesity Assessment
Authors: Krzysztof Plesiewicz, Izabela Plesiewicz, Krzysztof Chiżyński, Marzenna Zielińska
Abstract:
Background: Obesity is an independent risk factor for cardiovascular diseases. Several anthropometric parameters have been proposed to estimate the level of obesity, but until now there is no agreement on which one is the best predictor of cardiometabolic risk. Scientists have defined metabolically obese normal-weight individuals, who suffer from the same metabolic abnormalities as obese individuals, and named this syndrome normal weight obesity (NWO). Aim of the study: The aim of our study was to determine the occurrence of overweight and obesity in a cohort of young adult women, using standard and complementary methods of obesity assessment, and to identify those who are at risk of obesity. The second aim was to test additional methods of obesity assessment and to show that body mass index (BMI) used alone is not a sufficient parameter for obesity assessment. Materials and methods: 384 young women, aged 18-32, were enrolled in the study. Standard anthropometric parameters (waist-to-hip ratio (WTH), waist-to-height ratio (WTHR)) and two other methods of body fat percentage measurement (BFPM) were used: electrical bioimpedance analysis (BIA) and a skinfold measurement test with a digital body fat clipper (SFM). Results: In the study group, 5% and 7% of participants had waist-to-hip and waist-to-height ratio values, respectively, associated with visceral obesity. According to BMI, 14% of participants were overweight or obese. Using the additional methods of body fat assessment, 54% (BIA) and 43% (SFM) of participants were obese. In the group of participants with normal BMI or underweight (not overweight, n = 340), there were individuals with BFPM above the upper limit: 49% (n = 164) by BIA and 36% (n = 125) by SFM. Statistical analysis revealed a strong correlation between the BIA and SFM methods. Conclusion: BMI used alone is not a sufficient parameter for obesity assessment. A high percentage of young women with normal BMI values appear to be normal weight obese.
Keywords: electrical bioimpedance, normal weight obesity, skin-fold measurement test, women
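A toy sketch of the screening logic the study implies: compute BMI and waist-to-height ratio, then flag normal weight obesity when BMI is normal but body fat percentage exceeds a cutoff. The measurements and the 30% cutoff are illustrative assumptions, not the study's values:

```python
import numpy as np

# Hypothetical measurements for a few participants
weight_kg = np.array([55.0, 62.0, 70.0])
height_m  = np.array([1.66, 1.70, 1.63])
waist_cm  = np.array([68.0, 74.0, 84.0])
bfp       = np.array([33.5, 28.0, 36.2])   # body fat % from BIA or skinfolds

bmi  = weight_kg / height_m**2
wthr = waist_cm / (height_m * 100)

BFP_LIMIT = 30.0   # assumed upper limit of normal body fat % for young women
normal_bmi = (bmi >= 18.5) & (bmi < 25.0)
nwo = normal_bmi & (bfp > BFP_LIMIT)       # normal weight obesity flag

for i in range(len(bmi)):
    print(f"BMI={bmi[i]:.1f}  WHtR={wthr[i]:.2f}  BFP={bfp[i]:.1f}%  NWO={nwo[i]}")
```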
Procedia PDF Downloads 274
15989 Effective Planning of Public Transportation Systems: A Decision Support Application
Authors: Ferdi Sönmez, Nihal Yorulmaz
Abstract:
Decision making on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to projected modes of transport, adequately fair overall travel times should be provided. In this fashion, other benefits such as lower traffic congestion, improved road safety, and lower noise and atmospheric pollution may be earned. The congestion which comes with the increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult. Hence, regulations should be made to reduce this congestion. To provide constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. In this study, we aim to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, which is one of the biggest and oldest cities in the world. The required information was gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. Cost assignments were made using the most realistic values available. The cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used. The user cost of each station and the operator cost per line are calculated. Components such as cost, security and noise pollution are considered significant factors affecting the solution of the set covering problem, which is used for identifying and locating the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article by the corresponding author. Model results are presented with the intent of providing decision support to specialists on locating stops effectively.
Keywords: operator cost, bi-level optimization model, user cost, urban transportation
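The set covering problem mentioned above has a classic greedy approximation that is easy to sketch. The instance below (candidate stops and the demand zones each would cover) is invented, and the study's cost, security, and noise factors are omitted:

```python
# Hypothetical set covering instance: each candidate stop covers a set of
# demand zones; pick few stops that together cover every zone (greedy heuristic)
coverage = {
    "stop_A": {1, 2, 3},
    "stop_B": {3, 4},
    "stop_C": {4, 5, 6},
    "stop_D": {1, 6},
    "stop_E": {2, 5},
}
uncovered = set().union(*coverage.values())

chosen = []
while uncovered:
    # Pick the stop covering the most still-uncovered zones
    best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
    chosen.append(best)
    uncovered -= coverage[best]

print("Selected stops:", chosen)
```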
Procedia PDF Downloads 246
15988 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach
Authors: Zhuoran Li, Guan Qin
Abstract:
A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis, compositional simulation in hydrate reservoirs, etc. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic-plus-association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior. Thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial models). Hundreds of published equilibrium measurements for hydrate-containing systems were collected to act as the reference group for the accuracy test. The accuracy comparison shows that our model outperforms two of the models and has calculation accuracy comparable to CSMGem. An efficiency test has also been carried out. Because the trust-region method determines the optimization step's direction and size simultaneously, fast solution progress can be obtained. The comparison results show that fewer iterations are needed to optimize the objective function by utilizing trust-region methods than by applying line search methods. The non-iterative eigenvalue problem approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework of the multiphase flash model for hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to prove the accuracy and efficiency of this model. Furthermore, based on the thermodynamic models currently used in the oil and gas industry, implementing this model is simple.
Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method
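For orientation, the trust-region sub-problem referred to above is min g'p + 0.5 p'Bp subject to ||p|| <= Delta. The sketch below solves it via the classical eigendecomposition of B plus a 1-D root find; it is not the paper's non-iterative generalized-eigenvalue solver, just a compact reference implementation (the hard case, where g is orthogonal to the lowest eigenvector, is omitted for brevity):

```python
import numpy as np
from scipy.optimize import brentq

def solve_tr_subproblem(g, B, delta):
    """min g.p + 0.5 p.B.p subject to ||p|| <= delta,
    via the eigendecomposition of B."""
    lam, Q = np.linalg.eigh(B)
    gq = Q.T @ g

    def step_norm(mu):
        return np.linalg.norm(gq / (lam + mu))

    # Unconstrained Newton step if B is positive definite and the step fits
    if lam[0] > 0 and step_norm(0.0) <= delta:
        return -Q @ (gq / lam)

    # Otherwise find mu > max(0, -lam_min) with ||p(mu)|| = delta
    lo = max(0.0, -lam[0]) + 1e-12
    hi = lo + 1.0
    while step_norm(hi) > delta:   # bracket the root
        hi *= 2.0
    mu = brentq(lambda m: step_norm(m) - delta, lo, hi)
    return -Q @ (gq / (lam + mu))

g = np.array([1.0, -2.0])
B = np.array([[2.0, 0.0], [0.0, -1.0]])   # indefinite Hessian
print(solve_tr_subproblem(g, B, delta=1.0))
```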
Procedia PDF Downloads 172
15987 Research of Database Curriculum Construction under the Environment of Massive Open Online Courses
Authors: Wang Zhanquan, Yang Zeping, Gu Chunhua, Zhu Fazhi, Guo Weibin
Abstract:
Recently, Massive Open Online Courses (MOOCs) have become the new trend in education. Many problems arise in the teaching process of a Database Principles curriculum in the MOOC environment, such as teaching ideas and theories that are out of touch with reality, and the question of how to carry out technical teaching and interactive practice; thus, methods for the database course in the MOOC environment are proposed. Problem solving in this research follows three stages: posing problems, solving problems, and inductive analysis. The present research includes the design of teaching content, classroom teaching methods, a flipped-classroom teaching mode for the MOOC environment, the learning flow method, and large practice assignments. Students' database design ability is systematically improved by these methods.
Keywords: problem solving-driven, MOOCs, teaching art, learning flow
Procedia PDF Downloads 363
15986 A Universal Approach to Categorize Failures in Production
Authors: Konja Knüppel, Gerrit Meyer, Peter Nyhuis
Abstract:
The increasing interconnectedness and complexity of production processes raise the susceptibility of production systems to failure. Therefore, the ability to respond quickly to failures is increasingly becoming a competitive factor. The research project "Sustainable failure management in manufacturing SMEs" is developing a methodology to identify failures in production and to select preventive and reactive measures in order to correct failures and establish sustainable failure management systems.
Keywords: failure categorization, failure management, logistic performance, production optimization
Procedia PDF Downloads 374
15985 Drone Classification Using Classification Methods Using Conventional Model With Embedded Audio-Visual Features
Authors: Hrishi Rakshit, Pooneh Bagheri Zadeh
Abstract:
This paper investigates the performance of drone classification methods using conventional deep convolutional neural networks (DCNNs) with different hyperparameters when additional drone audio data is embedded in the dataset for training and further classification. First, a custom dataset is created using drone images from University of Southern California (USC) datasets and Leeds Beckett University datasets, with embedded drone audio signals. Three well-known DCNN architectures, namely ResNet50, Darknet53 and ShuffleNet, are employed on the created dataset, tuning hyperparameters such as the learning rate, maximum epochs, and mini-batch size with different optimizers. Precision-recall curves and F1 score-threshold curves are used to evaluate the performance of the classification algorithms. Experimental results show that ResNet50 has the highest efficiency compared to the other DCNN methods.
Keywords: drone classifications, deep convolutional neural network, hyperparameters, drone audio signal
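A minimal PyTorch sketch of the fine-tuning setup such a study implies: load a pretrained ResNet50, swap the classification head, and run one training step. The audio-feature embedding described in the abstract is not reproduced here, the batch is random dummy data, and the weights API assumes torchvision 0.13 or newer:

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune an ImageNet-pretrained ResNet50 for a binary drone/no-drone task
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy mini-batch
images = torch.randn(8, 3, 224, 224)   # stand-in for dataset images
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```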
Procedia PDF Downloads 104
15984 Unsupervised Domain Adaptive Text Retrieval with Query Generation
Authors: Rui Yin, Haojie Wang, Xun Li
Abstract:
Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which are not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited the use of dense retrieval methods to only a few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model is used to generate relevant queries for each passage in the target corpus, and then the generated queries are used for mining negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods in target domains, while requiring less unlabeled data.
Keywords: dense retrieval, query generation, unsupervised training, text retrieval
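The query-generation step described above is commonly implemented with a sequence-to-sequence model. The sketch below uses Hugging Face transformers with a public passage-to-query T5 checkpoint; the paper does not name its generator, so the model choice here is an assumption:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# A public T5 checkpoint trained for passage-to-query generation (assumed for
# illustration; not necessarily the model used in the paper)
name = "BeIR/query-gen-msmarco-t5-base-v1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

passage = ("Dense retrievers map queries and passages into a shared vector "
           "space and rank passages by inner-product similarity.")
inputs = tokenizer(passage, return_tensors="pt", truncation=True)

# Sample several synthetic queries per passage of the target corpus
outputs = model.generate(**inputs, max_length=48, do_sample=True,
                         top_p=0.95, num_return_sequences=3)
for ids in outputs:
    print(tokenizer.decode(ids, skip_special_tokens=True))
```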
Procedia PDF Downloads 73
15983 Comprehensive Analysis and Optimization of Alkaline Water Electrolysis for Green Hydrogen Production: Experimental Validation, Simulation Study, and Cost Analysis
Authors: Umair Ahmed, Muhammad Bin Irfan
Abstract:
This study focuses on the design and optimization of an alkaline water electrolyser for the production of green hydrogen. The aim is to enhance the durability and efficiency of this technology while simultaneously reducing the cost associated with the production of green hydrogen. The experimental results obtained from the alkaline water electrolyser are compared with simulated results using Aspen Plus software, allowing a comprehensive analysis and evaluation. To achieve the aforementioned goals, several design and operational parameters are investigated. The electrode material, electrolyte concentration, and operating conditions are carefully selected to maximize the efficiency and durability of the electrolyser. Additionally, cost-effective materials and manufacturing techniques are explored to decrease the overall production cost of green hydrogen. The experimental setup includes a carefully designed alkaline water electrolyser in which various performance parameters (such as hydrogen production rate, current density, and voltage) are measured. These experimental results are then compared with simulated data obtained using Aspen Plus software. The simulation model is developed based on fundamental principles and validated against the experimental data. The comparison between experimental and simulated results provides valuable insight into the performance of the alkaline water electrolyser. It helps to identify the areas where improvements can be made, both in design and in operation, to enhance the durability and efficiency of the system. Furthermore, the simulation results allow a cost analysis, providing an estimate of the overall production cost of green hydrogen. This study aims to develop a comprehensive understanding of alkaline water electrolysis technology. The findings of this research can contribute to the development of more efficient and durable electrolyser technology while reducing the associated cost. Ultimately, these advancements can pave the way for a more sustainable and economically viable hydrogen economy.
Keywords: sustainable development, green energy, green hydrogen, electrolysis technology
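One quantity every electrolyser study needs is the ideal hydrogen production rate, which follows directly from Faraday's law. A back-of-the-envelope sketch with assumed operating numbers (not the study's measured values):

```python
# Ideal hydrogen production rate of an electrolyser stack from Faraday's law
# (illustrative numbers; not the operating point of the study's electrolyser)
F = 96485.0          # Faraday constant, C/mol
z = 2                # electrons transferred per H2 molecule
current_A = 50.0     # assumed cell current
n_cells = 10         # assumed number of cells in series
efficiency = 0.95    # assumed Faradaic efficiency

mol_per_s = efficiency * n_cells * current_A / (z * F)
g_per_h = mol_per_s * 2.016 * 3600        # molar mass of H2 = 2.016 g/mol
litres_per_h = mol_per_s * 22.414 * 3600  # molar volume at STP, L/mol
print(f"H2 production: {g_per_h:.1f} g/h ({litres_per_h:.1f} L/h at STP)")
```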
Procedia PDF Downloads 90
15982 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method
Authors: Murat Demir Aydin, Elanur Celebi
Abstract:
Adhesively bonded joints are used as an alternative to traditional joining methods due to the important advantages they provide. The most important consideration in the use of adhesively bonded joints is that they meet the safety requirements of their application. To verify this condition, damage analysis of adhesively bonded joints should be performed by determining the mechanical properties of the adhesives. In the literature, the mechanical properties of adhesives are generally determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, which can be an alternative to traditional measurement methods, has been used to determine the mechanical properties of adhesives. The DIC method is a relatively new optical measurement method used to determine displacement and strain accurately. In this study, tensile tests were performed on thick adherend shear test (TAST) samples, formed from DP410 liquid structural adhesive and steel adherends, and on bulk tensile specimens formed from DP410 liquid structural adhesive. The displacement and strain values of the samples were determined by the DIC method, and the shear stress-strain curves of the adhesive for the TAST specimens and the tensile stress-strain curves of the bulk adhesive specimens were obtained. Conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for determining the strain and displacement values of very thin adhesive layers such as those in TAST samples, so additional means such as numerical methods are normally required. The DIC method removes these requirements and easily achieves displacement measurements with sufficient accuracy.
Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)
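At the core of DIC is matching small image subsets between the reference and deformed images, usually by maximizing a zero-normalized cross-correlation (ZNCC) score. A bare-bones integer-pixel sketch on synthetic images (real DIC adds sub-pixel interpolation and subset shape functions):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def track_subset(ref_img, def_img, y, x, half=10, search=5):
    """Find the integer-pixel displacement of the subset centred at (y, x)
    by maximizing ZNCC over a small search window."""
    ref = ref_img[y-half:y+half+1, x-half:x+half+1]
    best, best_uv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = def_img[y+dy-half:y+dy+half+1, x+dx-half:x+dx+half+1]
            score = zncc(ref, cand)
            if score > best:
                best, best_uv = score, (dy, dx)
    return best_uv

rng = np.random.default_rng(1)
ref = rng.random((100, 100))                         # speckle-like pattern
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))   # rigid shift of (2, 3) px
print(track_subset(ref, deformed, y=50, x=50))       # expect (2, 3)
```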
Procedia PDF Downloads 321
15981 Solving Linear Systems Involved in Convex Programming Problems
Authors: Yixun Shi
Abstract:
Many interior point methods for convex programming solve an (n+m)x(n+m) linear system in each iteration. Many implementations solve this system by considering an equivalent m x m system (4), as listed in the paper, so that the job is reduced to solving the system (4). However, the system (4) has to be solved exactly, since otherwise the error would be passed entirely onto the last m equations of the original system. Often the Cholesky factorization is computed to obtain the exact solution of (4). One Cholesky factorization then has to be done in every iteration, resulting in higher computational costs. In this paper, two iterative methods for solving linear systems using vector division are combined and embedded into interior point methods. Instead of computing one Cholesky factorization in each iteration, only one Cholesky factorization is required in the entire procedure, which significantly reduces the amount of computation needed for solving the problem. Based on that, a hybrid algorithm for solving convex programming problems is proposed.
Keywords: convex programming, interior point method, linear systems, vector division
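For context, the per-iteration Cholesky solve that the paper aims to avoid looks like the following sketch. The reduced system is written here as A D A^T y = r with a positive diagonal D, a common interior-point form; the paper's vector-division iterations are not reproduced:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative reduced m x m system A D A^T y = r, of the kind interior point
# methods solve each iteration (D is a positive diagonal scaling matrix)
rng = np.random.default_rng(0)
m, n = 5, 12
A = rng.standard_normal((m, n))
D = np.diag(rng.uniform(0.1, 2.0, n))
M = A @ D @ A.T                      # symmetric positive definite
r = rng.standard_normal(m)

c, low = cho_factor(M)               # one Cholesky factorization ...
y = cho_solve((c, low), r)           # ... reusable for any right-hand side
print(np.allclose(M @ y, r))
```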
Procedia PDF Downloads 402
15980 Bi-Dimensional Spectral Basis
Authors: Abdelhamid Zerroug, Mlle Ismahene Sehili
Abstract:
Spectral methods are usually applied to solve uni-dimensional boundary value problems. Taking advantage of the creation of multidimensional bases, we propose a new spectral method for bi-dimensional problems. In this article, we start by creating bi-spectral bases in different ways; we also develop new relations to determine the expressions of the spectral coefficients in the expansions of different partial derivatives. Finally, we propose the principle of a new bi-spectral method for bi-dimensional problems.
Keywords: boundary value problems, bi-spectral methods, bi-dimensional Legendre basis, spectral method
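A concrete way to build such a bi-dimensional basis is the tensor product of 1-D polynomials, e.g., phi_ij(x, y) = P_i(x) P_j(y) with Legendre polynomials, which NumPy supports directly. A small sketch expanding f(x, y) = x^2 y in this basis (illustrative; the article's own construction may differ):

```python
import numpy as np
from numpy.polynomial import legendre

# Tensor-product (bi-dimensional) Legendre basis: phi_ij(x, y) = P_i(x) P_j(y).
# Evaluate all basis functions up to degree (3, 3) on a grid.
x = np.linspace(-1, 1, 5)
y = np.linspace(-1, 1, 5)
X, Y = np.meshgrid(x, y)

V = legendre.legvander2d(X.ravel(), Y.ravel(), [3, 3])  # shape (25, 16)

# Least-squares expansion of f(x, y) = x^2 * y in this basis
f = (X**2 * Y).ravel()
coeffs, *_ = np.linalg.lstsq(V, f, rcond=None)
# Nonzero only for P0(x)P1(y) (1/3) and P2(x)P1(y) (2/3)
print(np.round(coeffs.reshape(4, 4), 3))
```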
Procedia PDF Downloads 395
15979 Fish Is Back but Fishers Are Out: The Dilemma of the Education Methods Adapted for Co-management of the Fishery Resource
Authors: Namubiru Zula, Janice Desire Busingue
Abstract:
Pro-active educational approaches have lately been adopted globally in the conservation of natural resources. This led to the introduction of the co-management system, which worked for some European countries in the conservation of sharks and other natural resources. However, this approach has drastically failed in the fishery sector on Lake Victoria, and the punitive education approach has been reinstated. Literature is readily available about punitive educational approaches but scanty on the pro-active one. This article analyses the pro-active approach adopted by the Department of Fisheries for the orientation of BMU leaders in a co-management system. The study is interpreted through a social constructivist lens for co-management of the fishery resource, to ensure that fishers also get back to fishing sustainably. It highlights some of the education methods used, methodological challenges that included the power and skills gap of the facilitators and program designers, and some implications for practice.
Keywords: beach management units, fishers, education methods, proactive approach, punitive approach
Procedia PDF Downloads 123
15978 Study on Optimization of Air Infiltration at Entrance of a Commercial Complex in Zhejiang Province
Authors: Yujie Zhao, Jiantao Weng
Abstract:
In the past decade, with the rapid development of China's economy, the purchasing power and physical demands of residents have improved, resulting in the emergence of many public buildings such as large shopping malls. However, architects usually focus on the internal functions and circulation of these buildings, ignoring the impact of the environment on the subjective feelings of building users. In Zhejiang province alone, the infiltration of cold air in winter frequently occurs at the entrances of sizeable commercial complex buildings in operation, affecting the environmental comfort of the building lobby and internal public spaces. At present, to reduce these adverse effects, active equipment is usually added, such as air curtains to block air exchange or additional heating air conditioners. From the perspective of energy consumption, the infiltration of cold air at the entrance increases the heat consumption of indoor heating equipment, which indirectly causes considerable economic losses over the whole winter heating season. Therefore, it is of considerable significance to explore suitable entrance forms for improving the environmental comfort of commercial buildings and saving energy. In this paper, a commercial complex in Hangzhou with an apparent cold air infiltration problem is selected as the research object for modeling. The environmental parameters of the building entrance, including temperature, wind speed, and infiltration air volume, are obtained by Computational Fluid Dynamics (CFD) simulation, from which the heat consumption caused by natural air infiltration in winter and its potential economic loss are estimated as the objective metrics. This study finally derives the optimization direction for the building entrance form of the commercial complex by comparing the simulation results with those of other local commercial complex projects with different entrance forms. The conclusions will guide the entrance design of the same type of commercial complex in this area.
Keywords: air infiltration, commercial complex, heat consumption, CFD simulation
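The heat consumption estimated from the CFD results follows the standard sensible-heat relation Q = rho * cp * V * dT. A back-of-the-envelope sketch with assumed numbers (not the study's simulated values):

```python
# Rough heat loss from cold-air infiltration at an entrance (assumed values)
rho = 1.2          # air density, kg/m^3
cp = 1005.0        # specific heat of air, J/(kg*K)
flow_m3_s = 1.5    # infiltration airflow rate from CFD, m^3/s (assumed)
dT = 15.0          # indoor-outdoor temperature difference, K (assumed)
hours = 8 * 120    # opening hours over a heating season (assumed)

power_kW = rho * cp * flow_m3_s * dT / 1000   # instantaneous heat loss
energy_kWh = power_kW * hours                 # seasonal heat loss
price_per_kWh = 0.8                           # assumed tariff, CNY/kWh
print(f"{power_kW:.1f} kW, {energy_kWh:.0f} kWh/season, "
      f"~{energy_kWh * price_per_kWh:.0f} CNY/season")
```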
Procedia PDF Downloads 132
15977 Analyzing the Performance of Different Cost-Based Methods for the Corrective Maintenance of a System in Thermal Power Plants
Authors: Demet Ozgur-Unluakin, Busenur Turkali, S. Caglar Aksezer
Abstract:
Since the age of industrialization, maintenance has always been a crucial element for all kinds of factories and plants. With today's ever-developing technology, the system structure of such facilities has become more complicated, and even a small operational disruption may cause huge losses in profit for companies. In order to reduce these costs, effective maintenance planning is crucial, but at the same time, it is a difficult task because of the complexity of systems. The most important aspect of correct maintenance planning is to understand the structure of the system, not to ignore the dependencies among components, and, as a result, to model the system correctly. In this way, it is easier to understand which component improves the system most when it is maintained. Undoubtedly, proactive maintenance at a scheduled time reduces costs, because scheduled maintenance prevents high losses in profit. But the necessity of corrective maintenance, which directly affects the state of the system and provides direct intervention when the system fails, should not be ignored. When a fault occurs in the system, if the problem is not solved immediately and the scheduled proactive maintenance time is awaited, increased costs may result. This study proposes various maintenance methods with different efficiency measures, under a corrective maintenance strategy, for a subsystem of a thermal power plant. To model the dependencies between the components, a dynamic Bayesian network approach is employed. The proposed maintenance methods aim to minimize the total maintenance cost over a planning horizon, as well as to find the most appropriate component to act on, i.e., the one that improves system reliability the most. The performances of the methods are compared under the corrective maintenance strategy. Furthermore, a sensitivity analysis is applied under different cost values. Results show that all fault-effect methods perform better than the replacement-effect methods, and this conclusion also holds under different downtime cost values.
Keywords: dynamic Bayesian networks, maintenance, multi-component systems, reliability
Procedia PDF Downloads 128
15976 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum
Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna
Abstract:
Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of its conidiospores. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). To gather data on this process, 30 experiments were carried out, taking into account the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), with the number of conidiospores per gram of dry mass obtained as the response. The experimental results were used in an iterative algorithm to create 1,110 ANNs with different configurations, ranging from one to three hidden layers, with every hidden layer having from 1 to 10 neurons. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The ANN with the best performance was chosen to simulate the process and maximize conidiospore production. The best-performing ANN has 2 inputs, 1 output, and three hidden layers with 3, 10 and 10 neurons, respectively. Its performance shows an R2 value of 0.9900, and its Root Mean Squared Error is 1.2020. This ANN predicted that a maximum of 644,175,467 conidiospores per gram of dry mass is obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable for representing the conidiospore production of Trichoderma harzianum, because the R2 value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network
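A compressed scikit-learn sketch of the same workflow: sweep a few hidden-layer configurations, keep the best-scoring regressor, then search the input grid for the predicted optimum. The response surface below is synthetic (peaked near 117 h and 77% humidity purely for illustration), the sweep is far smaller than the paper's 1,110 networks, and scikit-learn's MLP trains with Adam rather than Levenberg-Marquardt:

```python
import numpy as np
from itertools import product
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for the 30 experiments: inputs are culture hours and
# humidity (%), output is a normalized conidiospore yield
hours = rng.uniform(48, 136, 30)
humidity = rng.choice([70.0, 75.0, 80.0], 30)
X = np.column_stack([hours, humidity])
y = -((hours - 117) / 40)**2 - ((humidity - 77) / 5)**2  # assumed peaked response

scaler = StandardScaler().fit(X)
best_score, best_net = -np.inf, None
# Small sweep over hidden-layer configurations (the paper sweeps 1,110 of them)
for layers in product([3, 5, 10], repeat=2):
    net = MLPRegressor(hidden_layer_sizes=layers, max_iter=20000, random_state=0)
    net.fit(scaler.transform(X), y)
    score = net.score(scaler.transform(X), y)   # R^2 on the training data
    if score > best_score:
        best_score, best_net = score, net

# Use the best network to search the input grid for the predicted optimum
grid = np.array([[h, w] for h in np.linspace(48, 136, 89)
                        for w in np.linspace(70, 80, 21)])
pred = best_net.predict(scaler.transform(grid))
print("best R^2:", round(best_score, 4), "optimum at:", grid[np.argmax(pred)])
```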
Procedia PDF Downloads 158
15975 A Human Factors Approach to Workload Optimization for On-Screen Review Tasks
Authors: Christina Kirsch, Adam Hatzigiannis
Abstract:
Rail operators and maintainers worldwide are increasingly replacing walking patrols in the rail corridor with mechanized track patrols -essentially data capture on trains- and on-screen reviews of track infrastructure in centralized review facilities. The benefit is that infrastructure workers are less exposed to the dangers of the rail corridor. The impact is a significant change in work design, from walking track sections and direct observation in the real world to sedentary jobs in the review facility reviewing captured data on screens. Defects in rail infrastructure can have catastrophic consequences. Reviewer performance regarding accuracy and efficiency of reviews within the available time frame is essential to ensure safety and operational performance. Rail operators must optimize workload and resource loading to transition to on-screen reviews successfully. Therefore, they need to know which workload assessment methodologies will provide reliable and valid data to optimize resourcing for on-screen reviews. This paper compares objective workload measures, including track difficulty ratings and review distance covered per hour, with subjective workload assessments (NASA TLX), and analyses the link between workload and reviewer performance, including sensitivity, precision, and overall accuracy. An experimental study was completed with eight on-screen reviewers, including infrastructure workers and engineers, reviewing track sections with different levels of track difficulty over nine days. Each day the reviewers completed four 90-minute sessions of on-screen inspection of the track infrastructure. Data regarding the speed of review (km/hour), detected defects, false negatives, and false positives were collected. Additionally, all reviewers completed a subjective workload assessment (NASA TLX) after each 90-minute session, and a short employee engagement survey at the end of the study period that captured impacts on job satisfaction and motivation. The results showed that objective measures of track difficulty align with subjective mental demand, temporal demand, effort, and frustration in the NASA TLX. Interestingly, review speed correlated with subjective assessments of physical and temporal demand, but not with mental demand. Subjective performance ratings correlated with all accuracy measures and with review speed. The results showed that subjective NASA TLX workload assessments accurately reflect objective workload. The analysis of the impact of workload on performance showed that subjective mental demand correlated with high precision (accurately detected defects, not false positives). Conversely, high temporal demand was negatively correlated with sensitivity and the percentage of detected existing defects. Review speed was significantly correlated with false negatives: as review speed increased, accuracy declined. On the other hand, review speed correlated with subjective performance assessments; reviewers thought their performance was higher when they reviewed the track sections faster, despite the decline in accuracy. The study results were used to optimize resourcing and ensure that reviewers had enough time to review the allocated track sections to improve defect detection rates in accordance with the efficiency-thoroughness trade-off. Overall, the study showed the importance of a multi-method approach to workload assessment and optimization, combining subjective workload assessments with objective workload and performance measures, to ensure that recommendations for work system optimization are evidence-based and reliable.
Keywords: automation, efficiency-thoroughness trade-off, human factors, job design, NASA TLX, performance optimization, subjective workload assessment, workload analysis
Procedia PDF Downloads 121
15974 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form
Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry
Abstract:
Two simple, selective, economic, safe, accurate, precise and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine), in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate, and the developed color is measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatographic (UPLC) technique, which is considered technologically superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. The elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied to the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision, respectively.
Keywords: hydroxamic acid, related substances, UPLC, valaciclovir
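The linearity and recovery figures quoted above come from an ordinary least-squares calibration. A small sketch of that calculation with made-up instrument readings over the UPLC method's 2-50 μg/mL range:

```python
import numpy as np

# Illustrative UV calibration for the UPLC assay: peak area vs. concentration
# (made-up readings over the abstract's 2-50 ug/mL linearity range)
conc = np.array([2.0, 5.0, 10.0, 20.0, 35.0, 50.0])      # ug/mL
area = np.array([41.0, 102.0, 205.0, 409.0, 718.0, 1021.0])

slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"area = {slope:.2f} * conc + {intercept:.2f}, r = {r:.4f}")

# Back-calculate an unknown sample and its percentage recovery vs. nominal
unknown_area, nominal = 350.0, 17.0
found = (unknown_area - intercept) / slope
print(f"found {found:.2f} ug/mL, recovery {100 * found / nominal:.1f}%")
```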
Procedia PDF Downloads 247
15973 Optimization of the Jatropha curcas Supply Chain as a Criteria for the Implementation of Future Collection Points in Rural Areas of Manabi-Ecuador
Authors: Boris G. German, Edward Jiménez, Sebastián Espinoza, Andrés G. Chico, Ricardo A. Narváez
Abstract:
The unique flora and fauna of the Galapagos Islands have leveraged tourism-driven growth in the islands. Nonetheless, such development is energy-intensive and requires thousands of gallons of diesel each year for thermoelectric electricity generation. The necessary transport of fossil fuels from the continent has generated oil spillages and damage to the fragile ecosystem of the islands. The Zero Fossil Fuels initiative for the Galapagos, proposed by the Ecuadorian government as an alternative to reduce the use of fossil fuels in the islands, considers the replacement of diesel in thermoelectric generators by Jatropha curcas vegetable oil. However, the Jatropha oil supply cannot yet entirely cover the demand for electricity generation in Galapagos. Within this context, the present work aims to provide an optimization model that can be used as a selection criterion for approving new Jatropha curcas collection points in rural areas of Manabi, Ecuador. For this purpose, existing Jatropha collection points in Manabi were grouped into three regions: north (7 collection points), center (4 collection points) and south (9 collection points). Field work was carried out in every region in order to characterize the collection points, establish the local Jatropha supply, and determine transportation costs. Data collection was complemented using GIS software, and an objective function was defined in order to determine the profit associated with Jatropha oil production. The market prices of both Jatropha oil and residual cake were considered for the total revenue, whereas the Jatropha price, transportation, and oil extraction costs were considered for the total cost. The tonnes of Jatropha fruit and seed transported from collection points to the extraction plant were taken as the decision variables. The maximum and minimum amounts of Jatropha collected from each region constrained the optimization problem. The supply chain was optimized using linear programming in order to maximize profit. Finally, a sensitivity analysis was performed in order to find a profit-based criterion for the acceptance of future collection points in Manabi. The maximum profit reached a value of $4,616.93 per year, which represented a total collection of 62.3 tonnes of Jatropha per year. The northern region of Manabi had the biggest collection share (69%), followed by the southern region (17%). The criteria for accepting new Jatropha collection points in the rural areas of Manabi can be defined by the current maximum profit of the zone and by the variation in profit when collection points are removed one at a time. The definition of new feasible collection points plays a key role in the supply chain associated with Jatropha oil production. Therefore, a mathematical model that assists decision makers in establishing new collection points while assuring profitability contributes to guaranteeing a continued Jatropha oil supply for Galapagos and sustained economic growth in the rural areas of Ecuador.
Keywords: collection points, Jatropha curcas, linear programming, supply chain
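A toy version of the profit-maximizing linear program described above, using scipy.optimize.linprog. The three regions are kept, but every price, cost, bound, and capacity below is an invented placeholder, not the study's field data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy version of the profit-maximizing LP: x_i = tonnes collected per region
# (north, center, south). All prices/costs are assumed for illustration.
revenue_per_t = np.array([95.0, 95.0, 95.0])   # oil + residual cake revenue
cost_per_t    = np.array([52.0, 61.0, 58.0])   # purchase + transport + extraction
profit_per_t  = revenue_per_t - cost_per_t

# linprog minimizes, so negate the profit coefficients
res = linprog(
    c=-profit_per_t,
    A_ub=[[1, 1, 1]], b_ub=[62.3],             # plant capacity, t/year (assumed)
    bounds=[(5, 43), (3, 11), (5, 17)],        # min/max supply per region (assumed)
)
print("tonnes per region:", np.round(res.x, 1))
print("max profit: $%.2f per year" % -res.fun)
```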
Procedia PDF Downloads 433
15972 Evaluating Models Through Feature Selection Methods Using Data Driven Approach
Authors: Shital Patil, Surendra Bhosale
Abstract:
Cardiac diseases are the leading cause of mortality and morbidity in the world and, accounting for a large number of deaths over recent decades, have emerged as the most life-threatening disorders globally. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases. A relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both the raw (unbalanced) and the sampled (balanced) dataset. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum Redundancy maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and chi-squared. These methods are tested with 8 different classification models to get the best accuracy possible. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics in accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data give a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%, while the results obtained with the balanced dataset are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
Keywords: cardio vascular diseases, machine learning, feature selection, SMOTE
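Two of the four selectors above (chi-squared and RFE), plus SMOTE balancing as suggested by the keywords, are straightforward with scikit-learn and imbalanced-learn. The dataset below is a synthetic stand-in for the Z-Alizadeh Sani data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, RFE
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for the Z-Alizadeh Sani data: imbalanced binary labels
X, y = make_classification(n_samples=300, n_features=20, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X = X - X.min(axis=0)   # chi2 requires non-negative features

# Balance the classes with SMOTE, as in the "sampled" arm of the study
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# Two of the four selectors: chi-squared ranking and RFE with a linear model
chi_sel = SelectKBest(chi2, k=8).fit(X_bal, y_bal)
rfe_sel = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8).fit(X_bal, y_bal)

print("chi2 picks:", np.where(chi_sel.get_support())[0])
print("RFE picks: ", np.where(rfe_sel.get_support())[0])
```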
Procedia PDF Downloads 118
15971 Reinforcement Learning For Agile CNC Manufacturing: Optimizing Configurations And Sequencing
Authors: Huan Ting Liao
Abstract:
In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.
Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning
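A minimal tabular Q-learning sketch for a toy job-assignment version of the problem (state = per-machine loads, action = assign a job to a machine, reward = negative makespan increase). The job set, hyperparameters, and reward shaping are invented for illustration and are far simpler than a full JSSP with sequencing constraints:

```python
import random

jobs = [4, 3, 2, 6, 5]          # processing times of 5 jobs (assumed)
n_machines = 2
alpha, gamma, eps = 0.1, 0.9, 0.2
Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

for episode in range(5000):
    loads, remaining = (0,) * n_machines, list(range(len(jobs)))
    while remaining:
        actions = [(j, m) for j in remaining for m in range(n_machines)]
        state = loads
        if random.random() < eps:                    # epsilon-greedy policy
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: q(state, x))
        j, m = a
        new_loads = list(loads)
        new_loads[m] += jobs[j]
        next_state = tuple(new_loads)
        remaining.remove(j)
        # Reward: negative increase in makespan (we want to minimize makespan)
        reward = -(max(next_state) - max(state))
        next_actions = [(jj, mm) for jj in remaining for mm in range(n_machines)]
        best_next = max((q(next_state, x) for x in next_actions), default=0.0)
        # Standard Q-learning update
        Q[(state, a)] = q(state, a) + alpha * (reward + gamma * best_next - q(state, a))
        loads = next_state

print("makespan of final episode:", max(loads))   # optimal here is 10
```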
Procedia PDF Downloads 24