Search results for: charging methods
14932 Research of Database Curriculum Construction under the Environment of Massive Open Online Courses
Authors: Wang Zhanquan, Yang Zeping, Gu Chunhua, Zhu Fazhi, Guo Weibin
Abstract:
Recently, Massive Open Online Courses (MOOCs) have become the new trend in education. The teaching of the Database Principles curriculum in a MOOC environment faces several problems, such as teaching ideas and theories that are out of touch with practice, and the question of how to carry out technical teaching and interactive practice in the MOOC environment; methods for teaching the database course in a MOOC environment are therefore proposed. The research follows three problem-solving stages: posing the problems, solving the problems, and inductive analysis. It covers the design of teaching content, classroom teaching methods, a flipped-classroom teaching mode in the MOOC environment, the learning-flow method, and large practical homework assignments. The proposed methods systematically improve students' database design ability.
Keywords: problem solving-driven, MOOCs, teaching art, learning flow
Procedia PDF Downloads 363
14931 Drone Classification Using Classification Methods Using Conventional Model With Embedded Audio-Visual Features
Authors: Hrishi Rakshit, Pooneh Bagheri Zadeh
Abstract:
This paper investigates the performance of drone classification using conventional deep convolutional neural networks (DCNNs) with different hyperparameters, when additional drone audio data is embedded in the dataset for training and classification. First, a custom dataset is created from drone images in the University of Southern California (USC) and Leeds Beckett University datasets, with drone audio signals embedded. Three well-known DCNN architectures, ResNet50, Darknet53, and ShuffleNet, are trained on the created dataset, tuning hyperparameters such as learning rate, maximum epochs, and mini-batch size with different optimizers. Precision-recall curves and F1 score-threshold curves are used to evaluate the performance of the classification algorithms. Experimental results show that ResNet50 achieves the highest efficiency among the DCNN methods compared.
Keywords: drone classifications, deep convolutional neural network, hyperparameters, drone audio signal
Procedia PDF Downloads 104
14930 Unsupervised Domain Adaptive Text Retrieval with Query Generation
Authors: Rui Yin, Haojie Wang, Xun Li
Abstract:
Recently, mainstream dense retrieval methods have obtained state-of-the-art results on some datasets and tasks. However, they require large amounts of training data, which are not available in most domains. The severe performance degradation of dense retrievers on new data domains has limited their use to the few domains with large training datasets. In this paper, we propose an unsupervised domain-adaptive approach based on query generation. First, a generative model produces relevant queries for each passage in the target corpus, and the generated queries are then used to mine negative passages. Finally, the query-passage pairs are labeled with a cross-encoder and used to train a domain-adapted dense retriever. Experiments show that our approach is more robust than previous methods in target domains, while requiring less unlabeled data.
Keywords: dense retrieval, query generation, unsupervised training, text retrieval
Procedia PDF Downloads 73
14929 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method
Authors: Murat Demir Aydin, Elanur Celebi
Abstract:
Adhesively bonded joints are used as an alternative to traditional joining methods because of the important advantages they provide. The most important consideration in their use is that the joints meet the safety requirements appropriate to their application. To ensure this, damage analysis of adhesively bonded joints should be performed, which requires determining the mechanical properties of the adhesives. In the literature, the mechanical properties of adhesives are generally determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, an alternative to traditional measurement techniques, is used to determine the mechanical properties of adhesives. DIC is a modern optical measurement method for determining displacement and strain accurately and conveniently. Tensile tests were performed on Thick Adherend Shear Test (TAST) samples made with DP410 liquid structural adhesive and steel adherends, and on bulk tensile specimens made with DP410 liquid structural adhesive alone. The displacement and strain values of the samples were determined by the DIC method, yielding shear stress-strain curves of the adhesive for the TAST specimens and tensile stress-strain curves for the bulk adhesive specimens. Conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for measuring the strain and displacement of a very thin adhesive layer such as that in TAST samples, so additional tools such as numerical methods are normally required. The DIC method removes these requirements and easily achieves displacement measurements with sufficient accuracy.
Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)
Procedia PDF Downloads 321
14928 Solving Linear Systems Involved in Convex Programming Problems
Authors: Yixun Shi
Abstract:
Many interior point methods for convex programming solve an (n+m)×(n+m) linear system in each iteration. Many implementations instead solve an equivalent m×m system, listed as system (4) in the paper, thus reducing the job to solving system (4). However, system (4) has to be solved exactly, since otherwise the error would be passed entirely onto the last m equations of the original system. Often a Cholesky factorization is computed to obtain the exact solution of (4), one factorization per iteration, resulting in high computational cost. In this paper, two iterative methods for solving linear systems using vector division are combined and embedded into an interior point method. Instead of computing one Cholesky factorization in each iteration, the approach requires only one Cholesky factorization over the entire procedure, significantly reducing the amount of computation needed to solve the problem. Based on this, a hybrid algorithm for solving convex programming problems is proposed.
Keywords: convex programming, interior point method, linear systems, vector division
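The paper's vector-division iterative scheme is not reproduced here, but the per-iteration cost it avoids, one Cholesky factorization of a symmetric positive definite system followed by forward and backward substitution, can be sketched. A minimal pure-Python illustration (the matrix A and right-hand side b are toy values, not from the paper):

```python
import math

def cholesky(a):
    """Return lower-triangular L with A = L * L^T for an SPD matrix A."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

def solve_spd(a, b):
    """Solve A x = b with one Cholesky factorization plus two substitutions."""
    n = len(a)
    L = cholesky(a)
    # forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # backward substitution: L^T x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

A = [[4.0, 2.0], [2.0, 3.0]]
b = [10.0, 8.0]
print(solve_spd(A, b))  # -> [1.75, 1.5]
```

Doing this factorization once per interior point iteration is exactly the cost the proposed hybrid algorithm reduces to a single factorization overall.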
Procedia PDF Downloads 402
14927 Bi-Dimensional Spectral Basis
Authors: Abdelhamid Zerroug, Mlle Ismahene Sehili
Abstract:
Spectral methods are usually applied to solve one-dimensional boundary value problems. Taking advantage of the construction of multidimensional bases, we propose a new spectral method for two-dimensional problems. In this article, we first create bi-spectral bases in different ways, and we also develop new relations to determine the expressions of the spectral coefficients in the expansions of the various partial derivatives. Finally, we present the principle of a new bi-spectral method for two-dimensional problems.
Keywords: boundary value problems, bi-spectral methods, bi-dimensional Legendre basis, spectral method
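One common way to build a two-dimensional spectral basis, though not necessarily the specific construction proposed in this paper, is the tensor product of one-dimensional Legendre polynomials, P_i(x)·P_j(y). A minimal sketch using the three-term Legendre recurrence:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def bi_legendre(i, j, x, y):
    """Tensor-product bi-dimensional basis function P_i(x) * P_j(y)."""
    return legendre(i, x) * legendre(j, y)

# the first few basis functions evaluated at a sample point of [-1, 1]^2
x, y = 0.5, -0.25
for i in range(3):
    print([bi_legendre(i, j, x, y) for j in range(3)])
```

A bi-spectral expansion then approximates u(x, y) as a double sum of coefficients times these products; the paper's contribution lies in the relations between coefficients of the function and of its partial derivatives.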
Procedia PDF Downloads 395
14926 Fish Is Back but Fishers Are Out: The Dilemma of the Education Methods Adapted for Co-management of the Fishery Resource
Authors: Namubiru Zula, Janice Desire Busingue
Abstract:
Proactive educational approaches have lately been adopted globally in the conservation of natural resources. This led to the introduction of the co-management system, which worked for some European countries in the conservation of sharks and other natural resources. However, this approach has drastically failed in the fishery sector on Lake Victoria, and the punitive education approach has been reinstated. Literature is readily available on punitive educational approaches but scanty on proactive ones. This article analyses the proactive approach adopted by the Department of Fisheries for the orientation of Beach Management Unit (BMU) leaders in a co-management system. The study is interpreted through a social constructivist lens on co-management of the fishery resource, with the aim of ensuring that fishers, too, are back to fishing sustainably. It highlights some of the education methods used, methodological challenges that included the power and skills gap of the facilitators and program designers, and some implications for practice.
Keywords: beach management units, fishers, education methods, proactive approach, punitive approach
Procedia PDF Downloads 123
14925 Analyzing the Performance of Different Cost-Based Methods for the Corrective Maintenance of a System in Thermal Power Plants
Authors: Demet Ozgur-Unluakin, Busenur Turkali, S. Caglar Aksezer
Abstract:
Since the age of industrialization, maintenance has been a crucial element for all kinds of factories and plants. With today's rapidly developing technology, the structure of such facilities has become more complicated, and even a small operational disruption may cause huge profit losses. To reduce these costs, effective maintenance planning is crucial, but it is also a difficult task because of the complexity of the systems involved. The most important aspect of correct maintenance planning is to understand the structure of the system, not to ignore the dependencies among the components, and, as a result, to model the system correctly. In this way, it becomes clearer which component improves the system most when it is maintained. Undoubtedly, proactive maintenance at a scheduled time reduces costs because it prevents large profit losses. But the necessity of corrective maintenance, which directly affects the state of the system and provides immediate intervention when the system fails, should not be ignored. When a fault occurs in the system, waiting for the next scheduled proactive maintenance instead of solving the problem immediately may increase costs. This study proposes various maintenance methods with different efficiency measures under a corrective maintenance strategy on a subsystem of a thermal power plant. To model the dependencies between the components, a dynamic Bayesian network approach is employed. The proposed maintenance methods aim to minimize the total maintenance cost over a planning horizon, as well as to find the most appropriate component to act on, i.e., the one whose maintenance improves system reliability the most. The performances of the methods are compared under the corrective maintenance strategy, and a sensitivity analysis is applied under different cost values. Results show that all fault-effect methods perform better than the replacement-effect methods, and this conclusion also holds under different downtime cost values.
Keywords: dynamic Bayesian networks, maintenance, multi-component systems, reliability
Procedia PDF Downloads 128
14924 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form
Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry
Abstract:
Two simple, selective, economic, safe, accurate, precise, and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine) in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate, and the developed color is measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a mean percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatography (UPLC) technique, which is considered superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. The elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonization specifications and effectively applied for the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision.
Keywords: hydroxamic acid, related substances, UPLC, valaciclovir
Procedia PDF Downloads 246
14923 Evaluating Models Through Feature Selection Methods Using Data Driven Approach
Authors: Shital Patil, Surendra Bhosale
Abstract:
Cardiac diseases, which have emerged over recent decades as the most life-threatening disorder globally and account for a large number of deaths, are the leading cause of mortality and morbidity in the world. Machine learning and artificial intelligence have been playing a key role in predicting heart disease, and a relevant set of features can be very helpful in predicting the disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance on both the raw (unbalanced) and the sampled (balanced) dataset. The publicly available Z-Alizadeh Sani dataset is used for this study. Four feature selection methods are applied: data analysis, minimum Redundancy Maximum Relevance (mRMR), Recursive Feature Elimination (RFE), and chi-squared. These methods are tested with eight different classification models to obtain the best possible accuracy. Using the balanced and unbalanced datasets, the study shows promising results in terms of various performance metrics for accurately predicting heart disease. Experimental results obtained with the raw data reach a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%; with the balanced dataset, the results are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
Keywords: cardio vascular diseases, machine learning, feature selection, SMOTE
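As an illustration of one of the four selection methods, the chi-squared score for a binary feature against a binary class label can be computed from a 2×2 contingency table and used to rank features. A minimal pure-Python sketch with a toy dataset (the feature names and values are hypothetical, not from the Z-Alizadeh Sani data):

```python
def chi_squared(feature, labels):
    """Chi-squared statistic for a binary feature against a binary class label."""
    # 2x2 contingency table of observed counts: obs[feature_value][label]
    obs = [[0, 0], [0, 0]]
    for f, y in zip(feature, labels):
        obs[f][y] += 1
    n = len(labels)
    row = [sum(obs[i]) for i in range(2)]
    col = [obs[0][j] + obs[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            if expected:
                stat += (obs[i][j] - expected) ** 2 / expected
    return stat

# toy data: feature A tracks the label perfectly, feature B is pure noise
labels = [0, 0, 0, 0, 1, 1, 1, 1]
feat_a = [0, 0, 0, 0, 1, 1, 1, 1]
feat_b = [0, 1, 0, 1, 0, 1, 0, 1]
scores = {"A": chi_squared(feat_a, labels), "B": chi_squared(feat_b, labels)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # -> ['A', 'B']
```

Features with the highest scores are kept and passed to the downstream classifiers; mRMR and RFE follow the same "score, rank, select" pattern with different scoring criteria.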
Procedia PDF Downloads 118
14922 Energy-Saving Methods and Principles of Energy-Efficient Concept Design in the Northern Hemisphere
Authors: Yulia A. Kononova, Znang X. Ning
Abstract:
Nowadays, architectural development is proceeding faster and faster, yet modern architecture often falls short on the points that could help our planet. People spend an enormous amount of energy every day of their lives, and because of uncontrolled energy usage, energy production has to increase. As the energy production process demands large amounts of fuel, it causes many problems, such as climate change, environmental pollution, species extinction, and the depletion of energy sources. Nevertheless, humanity now has every opportunity to change this situation, and architecture is one of the most prominent fields in which new methods of saving, or even creating, energy can be applied. One example is the energy-efficient building, which can save or even produce energy by combining several energy-saving principles. The main aim of this research is to provide information that helps to apply energy-saving methods when designing an environmentally friendly building. The research methodology consists of gathering relevant information from the literature, building guideline documents, and previous research, and then analyzing and summarizing it into material that can be applied to energy-efficient building design. Regarding results, it should be noted that applying all of the energy-saving methods to a building design project results in ultra-low-energy buildings that require little energy for space heating or cooling. In conclusion, developing passive house design methods can decrease the need for energy production, an important step toward conserving the planet's resources and reducing environmental pollution.
Keywords: accumulation, energy-efficient building, storage, superinsulation, passive house
Procedia PDF Downloads 262
14921 Improving Detection of Illegitimate Scores and Assessment in Most Advantageous Tenders
Authors: Hao-Hsi Tseng, Hsin-Yun Lee
Abstract:
The Most Advantageous Tender (MAT) has been criticized for its susceptibility to dictatorial situations and for its handling of same-score, same-rank issues. This study applies the four criteria from Arrow's impossibility theorem to construct a mechanism for revealing illegitimate scores in scoring methods. While ranking methods are commonly used to mitigate problems resulting from extreme scores, they hide significant defects that adversely affect selection fairness. To address these shortcomings, this study relies mainly on the overall evaluated score method, using standardized scores followed by conversion through the normal cumulative distribution function to calculate the evaluation of vendor preference. This allows free score evaluations, which reduces the influence of dictatorial behavior and avoids same-score, same-rank issues. Large-scale simulations confirm that this method outperforms currently used methods with respect to the impossibility theorem's criteria.
Keywords: Arrow's impossibility theorem, cumulative normal distribution function, most advantageous tender, scoring method
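The core conversion, standardizing each evaluator's raw scores and then mapping them through the standard normal cumulative distribution function onto (0, 1), can be sketched as follows. This is a minimal illustration with made-up scores, not the paper's simulation setup:

```python
from statistics import NormalDist, mean, stdev

def cumulative_scores(raw_scores):
    """Standardize raw scores, then map them through the normal CDF to (0, 1)."""
    mu, sigma = mean(raw_scores), stdev(raw_scores)
    nd = NormalDist()  # standard normal distribution
    return [nd.cdf((s - mu) / sigma) for s in raw_scores]

# toy evaluation scores from one committee member, one extreme value included
raw = [60.0, 70.0, 75.0, 80.0, 95.0]
converted = cumulative_scores(raw)
for r, c in zip(raw, converted):
    print(f"{r:5.1f} -> {c:.3f}")
```

The transformation preserves each evaluator's preference order while compressing extreme scores toward the tails, which is the mechanism that limits a single dictatorial scorer's influence on the aggregate.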
Procedia PDF Downloads 462
14920 Virtual Reality Learning Environment in Embryology Education
Authors: Salsabeel F. M. Alfalah, Jannat F. Falah, Nadia Muhaidat, Amjad Hudaib, Diana Koshebye, Sawsan AlHourani
Abstract:
Educational technology is changing the way students engage and interact with learning materials, and this has improved the learning process across various subjects. Virtual reality (VR) applications are among the evolving methods that have contributed to enhancing medical education. This paper utilizes VR to improve the delivery of the subject of embryology to medical students and to facilitate the teaching process by providing a useful aid to lecturers, while assessing the effectiveness of this new technology in this particular area. After evaluating the current teaching methods and identifying students' needs, a VR system was designed that demonstrates, in an interactive fashion, the development of the human embryo from fertilization to week ten of intrauterine development. This system aims to overcome some of the problems students face under the current educational methods and to increase the efficacy of the learning process.
Keywords: virtual reality, student assessment, medical education, 3D, embryology
Procedia PDF Downloads 191
14919 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing, and their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated on standard intrusion detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets. The performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared with that of other standard ensemble methods: error-correcting output codes (ECOC) and Dagging among homogeneous methods, and majority voting and stacking among heterogeneous methods. The proposed ensemble methods provide a significant improvement in accuracy compared to individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on the standard intrusion detection datasets.
Keywords: data mining, ensemble, radial basis function, support vector machine, accuracy
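The bagging idea, training each component classifier on a bootstrap resample and combining the components by majority vote, can be illustrated with simple threshold stumps standing in for the paper's RBF and SVM base classifiers. A toy sketch (the data and the stump classifier are illustrative only):

```python
import random

def train_stump(data):
    """One-feature threshold classifier: predict class 1 if x > t."""
    thresholds = sorted({x for x, _ in data})
    best_t, best_acc = thresholds[0], -1.0
    for t in thresholds:
        acc = sum(int(x > t) == y for x, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagged_predict(stumps, x):
    """Majority vote over the bootstrap-trained stumps."""
    votes = sum(int(x > t) for t in stumps)
    return int(votes * 2 > len(stumps))

random.seed(0)
data = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]
# each stump sees its own bootstrap resample of the training data
stumps = [train_stump(random.choices(data, k=len(data))) for _ in range(25)]
print(bagged_predict(stumps, 2.5), bagged_predict(stumps, 6.5))
```

Arcing follows the same resample-and-combine pattern but reweights the training examples adaptively toward those the current ensemble misclassifies.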
Procedia PDF Downloads 248
14918 Using the Bootstrap for Problems Statistics
Authors: Brahim Boukabcha, Amar Rebbouh
Abstract:
The bootstrap method, based on the idea of exploiting all the information provided by the initial sample, allows us to study the properties of estimators. In this article, we present a theoretical study of the different bootstrapping methods and the use of resampling techniques in statistical inference to calculate the standard error of an estimator of the mean and to determine a confidence interval for an estimated parameter. We apply these methods to regression models and to the Pareto model, obtaining good approximations.
Keywords: bootstrap, standard error, bias, jackknife, mean, median, variance, confidence interval, regression models
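The basic resampling recipe described here, drawing resamples with replacement from the initial sample, recomputing the statistic on each, and reading off the standard error and a percentile confidence interval, can be sketched as follows (the data values are made up for illustration):

```python
import random
from statistics import mean, stdev

def bootstrap(sample, statistic, n_resamples=2000, seed=42):
    """Standard error and 95% percentile interval of a statistic via resampling."""
    rng = random.Random(seed)
    replicates = sorted(
        statistic(rng.choices(sample, k=len(sample))) for _ in range(n_resamples)
    )
    lo = replicates[int(0.025 * n_resamples)]
    hi = replicates[int(0.975 * n_resamples) - 1]
    return stdev(replicates), (lo, hi)

data = [23, 19, 30, 25, 21, 28, 24, 26, 22, 27]
se, ci = bootstrap(data, mean)
print(f"bootstrap SE of the mean: {se:.2f}, 95% CI: ({ci[0]:.1f}, {ci[1]:.1f})")
```

Passing a different `statistic` (the median, a regression coefficient, a Pareto shape estimate) reuses the same machinery, which is what makes the method attractive for the models studied in the paper.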
Procedia PDF Downloads 380
14917 Raman, Atomic Force Microscopy and Mass Spectrometry for Isotopic Ratios Methods Used to Investigate Human Dentine and Enamel
Authors: Nicoleta Simona Vedeanu, Rares Stiufiuc, Dana Alina Magdas
Abstract:
A detailed knowledge of tooth structure is mandatory for understanding and explaining defects and dental pathology, and especially for making correct decisions regarding dental prophylaxis and treatment. The present work is an alternative to the traditional investigation methods used in dentistry, a study based on sensitive, modern physical methods for investigating human enamel and dentin. Several teeth collected from patients of different ages were used for structural and dietary investigation. The samples were examined by Raman spectroscopy for molecular-structure analysis of dentin and enamel, by atomic force microscopy (AFM) to view the dental topography at micrometric scale, and by mass spectrometry for isotopic ratios as a fingerprint of each patient's personal diet. The obtained Raman spectra and their interpretation correlate well with the literature and may provide medical information by comparing affected dental structures with healthy ones. The AFM technique allowed a detailed study of the dentin and enamel surface, yielding information about dental hardness and structural changes. The δ¹³C values obtained for the studied samples fall in the C4 category, characteristic of the diet of young people and children (sweets, cereals, juices, pastry). The methods used in this study furnished important information about dentin and enamel structure and dietary habits, and each of the three proposed methods can be extended to a larger-scale study of tooth structure.
Keywords: AFM, dentine, enamel, Raman spectroscopy
Procedia PDF Downloads 145
14916 Assessment of Residual Stress on HDPE Pipe Wall Thickness
Authors: D. Sersab, M. Aberkane
Abstract:
Residual stresses in high-density polyethylene (HDPE) pipes result from the nonhomogeneous cooling rate between the inner and outer surfaces during the extrusion process. The best-known measurement methods for determining the magnitude and profile of the residual stresses across the pipe wall thickness are the layer removal and ring slitting methods. The combined layer removal and ring slitting method described in this paper involves measuring the circumferential residual stresses with minimal local disturbance. The existing method used for pipe geometry (the ring slitting method) gives a single residual stress value at the bore. The layer removal method, which is used more for flat plate specimens, is here implemented together with the ring slitting method. The combined method permits stress measurements to be made directly at different depths in the pipe wall, and a well-defined residual stress profile was consequently obtained.
Keywords: residual stress, layer removal, ring slitting, HDPE, wall thickness
Procedia PDF Downloads 338
14915 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding ability of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability that a quantitative trait locus is in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining haplotypes must be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: based on 1) the length of the haplotype (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, the haplotypes defined by each method were set to comparable sizes; for each method, haplotypes with an average of 5, 10, 20, or 50 SNPs were tested. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets, with carcass weight as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP, with only small differences in reliability between the haplotype-defining methods. Prediction reliability was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes of around 20 SNPs may be optimal markers for genomic prediction.
When the numbers of alleles generated by the haplotype-defining methods were compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods performed similarly, which suggests that defining haplotypes based on LD can reduce computational cost and allows efficient prediction. Finding optimal ways to define haplotypes and using haplotype alleles as markers can thus improve the performance and efficiency of genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
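Two of the three haplotype definitions, a fixed physical length in base pairs and a fixed SNP count, can be sketched directly; the LD-based k-medoids definition needs pairwise LD estimates and is omitted here. A minimal illustration with hypothetical SNP positions:

```python
def haplotype_blocks_by_count(snp_positions, snps_per_block):
    """Group consecutive SNPs into haplotype blocks of a fixed SNP count."""
    return [
        snp_positions[i:i + snps_per_block]
        for i in range(0, len(snp_positions), snps_per_block)
    ]

def haplotype_blocks_by_length(snp_positions, block_bp):
    """Group SNPs into blocks spanning at most block_bp base pairs each."""
    blocks, current = [], [snp_positions[0]]
    for pos in snp_positions[1:]:
        if pos - current[0] < block_bp:
            current.append(pos)
        else:
            blocks.append(current)
            current = [pos]
    blocks.append(current)
    return blocks

positions = [100, 350, 900, 1200, 1800, 2600, 2700, 3900]  # hypothetical bp
print(haplotype_blocks_by_count(positions, 3))
print(haplotype_blocks_by_length(positions, 1000))
```

Each block then defines a set of haplotype alleles (the distinct genotype strings observed within the block), which replace individual SNPs as predictor variables in the modified GBLUP model.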
Procedia PDF Downloads 141
14914 Dynamic Construction Site Layout Using Ant Colony Optimization
Authors: Yassir AbdelRazig
Abstract:
In the construction industry, site layout is a very important planning problem. The objective of site layout is to position temporary facilities both geographically and at the correct time, such that the construction work can be performed satisfactorily with minimal cost and improved safety and working environment. During the last decade, evolutionary optimization methods such as genetic algorithms have been used extensively for the construction site layout problem. More recently, ant colony optimization algorithms, evolutionary methods based on the foraging behavior of ants, have been successfully applied to benchmark combinatorial optimization problems. This paper proposes a formulation of the site layout problem as a sequencing problem suitable for solution with an ant colony optimization algorithm, and presents an ant colony optimization model for construction site layout. A simple case study of a highway project illustrates the application of the model.
Keywords: ant colony, construction site layout, optimization, genetic algorithms
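The sequencing formulation can be illustrated on a toy instance: facilities are assigned to equally spaced slots along the site, ants build layouts slot by slot from pheromone weights, and the best layout found reinforces the trail. A minimal sketch (the facilities, flows, and parameters are invented for illustration, not taken from the paper's highway case study):

```python
import random

# toy instance: 4 facilities placed in 4 slots along the site, unit slot spacing
flow = {  # hypothetical trips per day between facility pairs
    (0, 1): 10, (0, 2): 1, (0, 3): 2,
    (1, 2): 8, (1, 3): 1, (2, 3): 6,
}

def cost(perm):
    """Total travel: flow between facilities times distance between their slots."""
    slot = {f: s for s, f in enumerate(perm)}
    return sum(w * abs(slot[a] - slot[b]) for (a, b), w in flow.items())

def ant_colony(n=4, ants=20, iters=50, rho=0.1, seed=1):
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]  # pheromone for "facility f in slot s"
    best_perm, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            free, perm = list(range(n)), []
            for s in range(n):  # each ant builds a layout slot by slot
                f = rng.choices(free, weights=[tau[g][s] for g in free])[0]
                free.remove(f)
                perm.append(f)
            c = cost(perm)
            if c < best_cost:
                best_perm, best_cost = perm, c
        # evaporate all trails, then reinforce the best layout found so far
        for f in range(n):
            for s in range(n):
                tau[f][s] *= 1 - rho
        for s, f in enumerate(best_perm):
            tau[f][s] += 1.0 / best_cost
    return best_perm, best_cost

layout, total = ant_colony()
print(layout, total)
```

The heavily interacting facilities (0-1, 1-2, 2-3) end up in adjacent slots; a full model would add time windows and distance-dependent safety terms to the cost function.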
Procedia PDF Downloads 382
14913 Effect of Dehydration Methods on the Proximate Composition, Mineral Content and Functional Properties of Starch Flour Extracted from Maize
Authors: Olakunle M. Makanjuola, Adebola Ajayi
Abstract:
The effect of the dehydration method on the proximate, functional, and mineral properties of corn starch was evaluated. The study was carried out to determine these properties for corn starch produced using three different drying methods, namely sun, oven, and cabinet drying. The corn starch was obtained by cleaning, steeping, milling, sieving, dewatering, and drying, and was then evaluated for proximate composition, functional properties, and mineral content to determine its nutritional properties. Moisture, crude protein, crude fat, ash, and carbohydrate were in the ranges of 9.35-12.16%, 6.5-10.78%, 1.08-2.5%, 4.0-5.2%, and 69.58-75.8%, respectively. Bulk density ranged between 0.610 and 0.718 g/dm³, and water and oil absorption capacities ranged between 116.5-117.25 and 113.8-117.25 mL/g, respectively. Swelling power varied from 1.401 to 1.544 g/g. The results indicate that the cabinet method gave the best results in terms of quality attributes.
Keywords: starch flour, maize, dehydration, cabinet dryer
Procedia PDF Downloads 238
14912 A Comparison of Sequential Quadratic Programming, Genetic Algorithm, Simulated Annealing, Particle Swarm Optimization for the Design and Optimization of a Beam Column
Authors: Nima Khosravi
Abstract:
This paper describes an integrated optimization study with concurrent use of sequential quadratic programming (SQP), a genetic algorithm, simulated annealing, and particle swarm optimization for the design and optimization of a beam column, and compares these four optimization methods. The comparison shows that all the methods satisfy the required constraints, and the lowest value of the objective function is achieved by SQP, which was also the fastest optimizer to produce results. SQP is a gradient-based optimizer, hence its results are usually the same after every run; the only thing that affects them is the initial conditions given. The initial conditions given in the various test runs differed widely, hence the values converged at different points. The rest of the methods are heuristic methods, which produce different values in different runs even if every parameter is kept constant.
Keywords: beam column, genetic algorithm, particle swarm optimization, sequential quadratic programming, simulated annealing
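Of the four optimizers compared, particle swarm optimization is easy to sketch in a few lines, and its stochastic nature (different results per run unless seeded) is visible in the code. The toy objective below stands in for the beam-column model, which is not given in the abstract:

```python
import random

def pso(objective, bounds, particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Minimal particle swarm optimizer over box bounds (minimization)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy stand-in objective: minimize f(x, y) = (x-2)^2 + (y+1)^2, optimum at (2, -1)
best, value = pso(lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2,
                  bounds=[(-5, 5), (-5, 5)])
print(best, value)
```

SQP, by contrast, follows the gradient of a local quadratic model, which is why its answer is deterministic for a given starting point.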
Procedia PDF Downloads 386
14911 On Block Vandermonde Matrix Constructed from Matrix Polynomial Solvents
Authors: Malika Yaici, Kamel Hariche
Abstract:
In control engineering, systems described by matrix fractions are studied through the properties of block roots, also called solvents. These solvents are usually dealt with in block Vandermonde matrix form. Inverses and determinants of Vandermonde matrices and block Vandermonde matrices are used to solve problems of numerical analysis in many domains, but require costly computations. Even though Vandermonde matrices are well known, and methods to compute their inverse and determinant are many and generally based on interpolation techniques, methods to compute the inverse and determinant of a block Vandermonde matrix have not been well studied. In this paper, some properties of these matrices and iterative algorithms to compute the determinant and the inverse of a block Vandermonde matrix are given. These methods are derived from partitioned matrix inversion and determinant computing methods. Due to their great size, parallelization may be a solution to reduce the computation cost, so a parallelization of these algorithms is proposed and validated by a comparison using algorithmic complexity. Keywords: block vandermonde matrix, solvents, matrix polynomial, matrix inverse, matrix determinant, parallelization
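A block Vandermonde matrix built from solvents stacks increasing powers of each solvent block. A minimal sketch (illustrative only: the example solvents and the plain Gaussian-elimination determinant below are our own choices, not the paper's iterative algorithms):

```python
def matmul(A, B):
    # Dense product of two square blocks.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def identity(m):
    return [[1.0 if i == j else 0.0 for j in range(m)] for i in range(m)]

def block_vandermonde(solvents):
    # Block row i holds the i-th power of every solvent:
    # V = [[I, ..., I], [R1, ..., Rl], [R1^2, ..., Rl^2], ...]
    m = len(solvents[0])
    powers = [identity(m) for _ in solvents]
    block_rows = []
    for _ in range(len(solvents)):
        block_rows.append(list(powers))
        powers = [matmul(p, R) for p, R in zip(powers, solvents)]
    # Flatten the block rows into an ordinary dense matrix.
    dense = []
    for blocks in block_rows:
        for r in range(m):
            dense.append([val for blk in blocks for val in blk[r]])
    return dense

def det(M):
    # Determinant by Gaussian elimination with partial pivoting.
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[piv][c]) < 1e-12:
            return 0.0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return d

R1 = [[1.0, 0.0], [0.0, 2.0]]  # two hypothetical 2x2 solvents
R2 = [[3.0, 0.0], [0.0, 4.0]]
V = block_vandermonde([R1, R2])
print(det(V))
```

For these diagonal (hence commuting) solvents the determinant equals det(R2 - R1) = 2 * 2 = 4, which the elimination routine reproduces; the paper's contribution is precisely to avoid such costly generic computations on large block Vandermonde matrices.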
Procedia PDF Downloads 239
14910 Between Efficacy and Danger: Narratives of Female University Students about Emergency Contraception Methods
Authors: Anthony Idowu Ajayi, Ezebunwa Ethelbert Nwokocha, Wilson Akpan, Oladele Vincent Adeniyi
Abstract:
Studies on emergency contraception (EC) mostly utilise quantitative methods and focus on medically approved drugs for the prevention of unwanted pregnancies. This methodological bias necessarily obscures insider perspectives on sexual behaviour, particularly on why specific methods are utilized by women who seek to prevent unplanned pregnancies. In order to privilege this perspective, with a view to further enriching the discourse and policy on the prevention and management of unplanned pregnancies, this paper brings together the findings from several focus groups and in-depth interviews conducted amongst unmarried female undergraduate students in two Nigerian universities. The study found that while the research participants had good knowledge of the consequences of unprotected sexual intercourse, with abstinence and condoms widely used, participants' willingness to rely only on medically sound measures to prevent unwanted pregnancies was not always mediated by such knowledge. Some of the methods favored by participants appeared to be those commonly associated with people of low socio-economic status in the society where the study was conducted. Medically unsafe concoctions, some outright dangerous, were widely believed to be efficacious in preventing unwanted pregnancy. Furthermore, respondents' narratives about their sexual behaviour revealed that inadequate sex education, socio-economic pressures, and misconceptions about the efficacy of "crude" emergency contraception methods were all interrelated. The paper therefore suggests that these different facets of the unplanned pregnancy problem should be the focus of intervention. Keywords: unplanned pregnancy, unsafe abortion, emergency contraception, concoctions
Procedia PDF Downloads 424
14909 Sleep Apnea Hypopnea Syndrome Diagnosis Using Advanced ANN Techniques
Authors: Sachin Singh, Thomas Penzel, Dinesh Nandan
Abstract:
Accurate identification of Sleep Apnea Hypopnea Syndrome is a difficult problem for human experts because of variability among persons and unwanted noise. This paper proposes the diagnosis of Sleep Apnea Hypopnea Syndrome (SAHS) using airflow, ECG, pulse and SaO2 signals. The features of each of these signal types are extracted using statistical methods and ANN learning methods. These extracted features are used to approximate the patient's Apnea Hypopnea Index (AHI) using sample signals in the model. Advanced signal processing is also applied to the snore sound signal to locate snore events, and the SaO2 signal is used to confirm whether a detected snore event is true or noise. Finally, the Apnea Hypopnea Index (AHI) is calculated from the true snore events detected. Experimental results show that the sensitivity can reach up to 96% and the specificity up to 96% for AHI greater than or equal to 5. Keywords: neural network, AHI, statistical methods, autoregressive models
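The AHI itself is a simple rate: respiratory events per hour of sleep. A minimal sketch of event detection from an airflow signal (the threshold, minimum duration and synthetic signal are illustrative assumptions, not the paper's method):

```python
def detect_events(airflow, fs, drop_frac=0.3, min_dur_s=10.0):
    # Flag contiguous runs where airflow amplitude falls below a fraction
    # of its mean for at least min_dur_s seconds (a crude apnea proxy).
    baseline = sum(abs(v) for v in airflow) / len(airflow)
    thresh = drop_frac * baseline
    events, run = [], 0
    for i, v in enumerate(airflow):
        if abs(v) < thresh:
            run += 1
        else:
            if run >= min_dur_s * fs:
                events.append((i - run, i))
            run = 0
    if run >= min_dur_s * fs:
        events.append((len(airflow) - run, len(airflow)))
    return events

def ahi(n_events, sleep_hours):
    # Apnea Hypopnea Index: respiratory events per hour of sleep.
    return n_events / sleep_hours

# Synthetic 1-minute airflow at 10 Hz: normal breathing with one 15 s pause.
fs = 10
signal = [1.0] * (20 * fs) + [0.0] * (15 * fs) + [1.0] * (25 * fs)
events = detect_events(signal, fs)
print(len(events))
```

In a real recording the event count over a full night divided by hours of sleep gives the AHI; the paper's threshold of AHI >= 5 corresponds to at least five such events per hour.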
Procedia PDF Downloads 119
14908 Producing TPU/Propolis Nanofibrous Membrane as Wound Dressing
Authors: Yasin Akgül, Yusuf Polat, Emine Canbay, Ali Kılıç
Abstract:
Wound dressings have strategic and economic importance considering the increase of chronic wounds in the world. In this study, TPU nanofibrous membranes containing propolis are produced as wound dressings by two different methods. In the first, a TPU solution and propolis extract were mixed, and this solution was electrospun. In the other method, the TPU/propolis blend was centrifugally spun. The properties of the nanofibrous membranes obtained by these methods were compared. While carrying out the experiments, both systems were optimized to produce nanofibers with nearly the same average fiber diameter. Keywords: nanofiber, wound dressing, electrospinning, centrifugal spinning
Procedia PDF Downloads 455
14907 Hybrid Subspace Approach for Time Delay Estimation in MIMO Systems
Authors: Mojtaba Saeedinezhad, Sarah Yousefi
Abstract:
In this paper, we present a hybrid subspace approach for Time Delay Estimation (TDE) in multivariable systems. While several methods have been proposed for time delay estimation in SISO systems, delay estimation in MIMO systems has always been a big challenge. In these systems, the existing TDE methods have significant limitations because most procedures are based only on system response estimation or correlation analysis. We introduce a new hybrid method for TDE in MIMO systems based on subspace identification and the explicit output error method, and compare its performance with previously introduced procedures at different noise levels and in a statistical manner. The best method is then selected with a multi-objective decision making technique. It is shown that the performance of the new approach is much better than that of the existing methods, even in low signal-to-noise conditions. Keywords: system identification, time delay estimation, ARX, OE, merit ratio, multi variable decision making
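The correlation-analysis baseline that such methods are compared against can be sketched for a single channel (a simplified stand-in; the paper's subspace and output-error machinery is not reproduced here): the lag that maximizes the cross-correlation between input and output estimates the delay.

```python
import random

def estimate_delay(x, y, max_lag):
    # Pick the lag maximizing the cross-correlation sum of x[n] * y[n + lag].
    best_lag, best_val = 0, float("-inf")
    for lag in range(max_lag + 1):
        val = sum(a * b for a, b in zip(x, y[lag:]))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Output = input delayed by 7 samples (zero-padded at the start).
rng = random.Random(0)
x = [rng.uniform(-1.0, 1.0) for _ in range(500)]
true_delay = 7
y = [0.0] * true_delay + x
print(estimate_delay(x, y, max_lag=20))
```

This works well for a noiseless SISO pair but, as the abstract notes, correlation analysis alone breaks down in MIMO settings where multiple delayed input channels contribute to each output.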
Procedia PDF Downloads 346
14906 Big Data Analysis on the Development of Jinan’s Consumption Centers under the Influence of E-Commerce
Authors: Hang Wang, Xiaoming Gao
Abstract:
The rapid development of e-commerce has significantly transformed consumer behavior and urban consumption patterns worldwide. This study explores the impact of e-commerce on the development and spatial distribution of consumption centers, with a particular focus on Jinan City, China. Traditionally, urban consumption centers are defined by physical commercial spaces, such as shopping malls and markets. However, the rise of e-commerce has introduced a shift towards virtual consumption hubs, with a corresponding impact on physical retail locations. Utilizing Gaode POI (Point of Interest) data, this research aims to provide a comprehensive analysis of the spatial distribution of consumption centers in Jinan, comparing e-commerce-driven virtual consumption hubs with traditional physical consumption centers. The study methodology involves gathering and analyzing POI data, focusing on logistics distribution for e-commerce activities and mobile charging point locations to represent offline consumption behavior. A spatial clustering technique is applied to examine the concentration of commercial activities and to identify emerging trends in consumption patterns. The findings reveal a clear differentiation between e-commerce and physical consumption centers in Jinan. E-commerce activities are dispersed across a wider geographic area, correlating closely with residential zones and logistics centers, while traditional consumption hubs remain concentrated around historical and commercial areas such as Honglou and the old city center. Additionally, the research identifies an ongoing transition within Jinan’s consumption landscape, with online and offline retail coexisting, though at different spatial and functional levels. This study contributes to urban planning by providing insights into how e-commerce is reshaping consumption behaviors and spatial structures in cities like Jinan. 
By leveraging big data analytics, the research offers a valuable tool for urban designers and planners to adapt to the evolving demands of digital commerce and to optimize the spatial layout of city infrastructure to better serve the needs of modern consumers. Keywords: big data, consumption centers, e-commerce, urban planning, jinan
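The spatial clustering step described above can be sketched as a simple grid-density pass over POI coordinates (a toy stand-in for the study's actual clustering technique; the coordinates, cell size and threshold are made up):

```python
from collections import Counter

def density_hotspots(points, cell_size, min_count):
    # Bin POI coordinates into square grid cells and keep cells whose
    # point count reaches min_count: a crude consumption-center detector.
    counts = Counter((int(x // cell_size), int(y // cell_size)) for x, y in points)
    return {cell: n for cell, n in counts.items() if n >= min_count}

# Hypothetical POIs: a dense commercial cluster near (10, 10), plus
# scattered single points that should not register as centers.
pois = [(10.1, 10.2), (10.3, 10.1), (10.2, 10.4), (10.4, 10.3),
        (50.0, 50.0), (80.0, 20.0)]
print(density_hotspots(pois, cell_size=1.0, min_count=3))
```

Run separately over e-commerce proxies (logistics points) and offline proxies (mobile charging points), such a pass would surface the dispersed versus concentrated patterns the study reports.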
Procedia PDF Downloads 20
14905 Automatic Diagnosis of Electrical Equipment Using Infrared Thermography
Authors: Y. Laib Dit Leksir, S. Bouhouche
Abstract:
The analysis and processing of databases resulting from infrared thermal measurements made on electrical installations require the development of new tools in order to obtain correct information additional to that of visual inspections. Consequently, methods based on the capture of infrared digital images show great potential and are employed increasingly in various fields. However, there is an enormous need for the development of effective techniques to analyse these databases in order to extract relevant information on the state of the equipment. Our goal is to introduce recent modeling techniques based on new methods of image and signal processing to develop mathematical models in this field. The aim of this work is to capture the anomalies existing in electrical equipment during the inspection of some machines using an A40 Flir camera. We then use binarisation techniques to select the region of interest, and we compare the thermal images obtained by these methods to choose the best one. Keywords: infrared thermography, defect detection, troubleshooting, electrical equipment
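Binarisation of a thermal image means choosing an intensity threshold that separates hot anomalies from the cooler background. A minimal sketch using Otsu's method (one common choice; the abstract does not say which binarisation technique the authors used):

```python
def otsu_threshold(pixels):
    # Otsu's method: pick the threshold maximizing the between-class
    # variance of the (assumed 0-255) intensity histogram.
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarise(pixels, t):
    # 1 = hotter than threshold (candidate anomaly), 0 = background.
    return [1 if p > t else 0 for p in pixels]

# Synthetic thermal frame: cool background around 40-45, one hot region at 200.
frame = [40] * 900 + [45] * 50 + [200] * 50
t = otsu_threshold(frame)
print(t, sum(binarise(frame, t)))
```

The resulting mask isolates the hot region of interest, after which defect classification can proceed on that region alone.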
Procedia PDF Downloads 476
14904 Advanced Particle Characterisation of Suspended Sediment in the Danube River Using Automated Imaging and Laser Diffraction
Authors: Flóra Pomázi, Sándor Baranya, Zoltán Szalai
Abstract:
A harmonized monitoring of suspended sediment transport along such a large river as the world’s most international river, the Danube River, is a rather challenging task. The traditional monitoring method in Hungary is obsolete, but using indirect measurement devices and techniques like optical backscatter sensors (OBS), laser diffraction or acoustic backscatter sensors (ABS) could provide a fast and efficient alternative to direct methods. However, these methods are strongly sensitive to the particle characteristics (i.e., particle shape, particle size and mineral composition). The current method does not provide sufficient information about particle size distribution, mineral analysis is rarely done, and the shape of the suspended sediment particles has not been examined yet. The aims of the study are (1) to determine the particle characteristics of suspended sediment in the Danube River using advanced particle characterisation methods such as laser diffraction and automated imaging, and (2) to perform a sensitivity analysis of the indirect methods in order to determine the impact of suspended particle characteristics. The particle size distribution is determined by laser diffraction. The particle shape and mineral composition analysis is done by the Morphologi G3ID image analyser. The investigated indirect measurement devices are the LISST-Portable|XR, the LISST-ABS (Sequoia Inc.) and the Rio Grande 1200 kHz ADCP (Teledyne Marine).
The major findings of this study are (1) the statistical shape of the suspended sediment particles - this is the first research in this context, (2) an updated particle size distribution that can be compared to historical information, so that morphological changes can be tracked, (3) the actual mineral composition of the suspended sediment in the Danube River, and (4) the increased reliability of the tested indirect methods, based on the results of the sensitivity analysis and the previous findings. Keywords: advanced particle characterisation, automated imaging, indirect methods, laser diffraction, mineral composition, suspended sediment
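Automated imaging systems typically reduce each particle outline to a few shape descriptors. A minimal sketch of one standard descriptor, circularity (illustrative; the abstract does not list which descriptors were actually computed):

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: equals 1 for a perfect circle and decreases
    # for elongated or angular grains.
    return 4.0 * math.pi * area / perimeter ** 2

r = 3.0
circle = circularity(math.pi * r * r, 2.0 * math.pi * r)
s = 4.0
square = circularity(s * s, 4.0 * s)  # pi/4 for any square
print(round(circle, 3), round(square, 3))
```

Descriptors like this, aggregated over many imaged particles, give the "statistical shape" of the suspended sediment that the study reports, and they are exactly the particle characteristics to which the indirect acoustic and optical methods are sensitive.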
Procedia PDF Downloads 146
14903 Comparative Analysis of Glycated Hemoglobin (HbA1c) Between HPLC and Immunoturbidimetry Method in Type II Diabetes Mellitus Patients
Authors: Intanri Kurniati, Raja Iqbal Mulya Harahap, Agustyas Tjiptaningrum, Reni Zuraida
Abstract:
Background: Diabetes mellitus is still increasing and has become a health and social burden in the world. It is known that glycation of various proteins is increased in diabetic patients compared with non-diabetic subjects. Some of these glycated proteins are suggested to be involved in the development and progression of chronic diabetic complications. Among these glycated proteins, glycated hemoglobin (HbA1C) is commonly used as the gold-standard index of glycemic control in the clinical setting. HbA1C testing has several methods, and the most commonly used is immunoturbidimetry. This research aimed to compare HbA1c levels between the immunoturbidimetry and HPLC methods in T2DM patients. Methods: This research involved 77 patients from Abd Muluk Hospital Bandar Lampung; the patients were asked for consent in this research, then underwent phlebotomy for HbA1C testing; the samples were then examined for HbA1C with the Turbidimetric Inhibition Immunoassay (TINIA) and High-Performance Liquid Chromatography (HPLC) methods. Result: The mean ± SD of the samples with the TINIA method was 9.2 ± 1.2, while the HbA1C level with the HPLC method was 9.6 ± 1.2. The t-test showed no significant difference between the groups (p > 0.05). It is proposed that the two methods have high agreement, and both are eligible for use with patients. Discussion: There was no significant difference between the measurements, indicating that the high conformity of the two methods makes both suitable for clinical patient monitoring. Conclusion: HbA1C levels are increased in patients with T2DM as measured with both the HPLC and Turbidimetric Inhibition Immunoassay (TINIA) methods, with no significant difference between those methods. Keywords: diabetes mellitus, glycated albumin, HbA1C, HPLC, immunoturbidimetry
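Such a method comparison rests on a paired t-test over per-patient differences between the two assays. A minimal sketch of the statistic (the paired readings below are synthetic, and the critical value 2.365 for df = 7 at alpha = 0.05 is the usual two-sided table value, stated here as an assumption):

```python
import math

def paired_t(xs, ys):
    # t = mean(d) / (sd(d) / sqrt(n)) over per-subject differences d.
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical paired HbA1c readings where the two methods agree closely.
tinia = [9.2, 8.8, 10.1, 9.5, 8.9, 9.7, 10.0, 9.3]
hplc  = [9.3, 8.7, 10.2, 9.4, 9.0, 9.6, 10.1, 9.2]
t_stat = paired_t(tinia, hplc)
print(abs(t_stat) < 2.365)  # below the critical value: no significant difference
```

With the study's 77 patients the degrees of freedom would be 76 and the corresponding two-sided critical value about 1.99; a |t| below it supports the conclusion that the assays agree.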
Procedia PDF Downloads 99